About this Author
[Photo captions: College chemistry, 1983; The 2002 Model; After 10 years of blogging. . .]
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases.
To contact Derek, email him directly: firstname.lastname@example.org
November 30, 2009
Now here's an oddity: medicinal chemists are used to seeing the two enantiomers (mirror image compounds, for those outside the field) showing different activity. After all, proteins are chiral, and can recognize such things - in fact, it's a bit worrisome when the enantiomers don't show different profiles against a protein target.
There are a few cases known where the two enantiomers both show some kind of activity, but via different binding modes. But I've never seen a case like this, where this happens at the same time in the same binding pocket. The authors were studying inhibitors of a biosynthetic enzyme from Burkholderia, and seeing the usual sorts of things in their crystal structures - that is, only one enantiomer of a racemic mixture showing up in the enzyme. But suddenly one of their analogs showed both enantiomers simultaneously, each binding to a different part of the active site.
Interestingly, when they obtained crystal structures of the two pure enantiomers, the R compound looks pretty much exactly as it does in the two-at-once structure, but the S compound flips around to another orientation, one that it couldn't have adopted in the presence of the R enantiomer. The S compound is tighter-binding in general, and calorimetry experiments showed a complicated profile as the concentration of the two compounds was changed. So this does appear to be a real effect, and not just some weirdo artifact of the crystallization conditions.
The authors point out that many other proteins have binding sites that are large enough to permit this sort of craziness (P450 enzymes are a likely candidate, and I'd add PPAR binding sites to the list, too). We still do an awful lot of in vitro testing using racemic mixtures, and this makes a person wonder how many times this behavior has been seen before and not understood. . .
Category: Analytical Chemistry | Chemical News | Drug Assays
November 28, 2009
I asked recently for suggestions on the best books on med-chem topics, and a lot of good ideas came in via the comments and e-mail. Going over the list, the most recommended seem to be the following:
For general medicinal chemistry, you have Bob Rydzewski's Real World Drug Discovery: A Chemist's Guide to Biotech and Pharmaceutical Research. Many votes also were cast for Camille Wermuth's The Practice of Medicinal Chemistry. For getting up to speed, several readers recommend Graham Patrick's An Introduction to Medicinal Chemistry. And an older text that has some fans is Richard Silverman's The Organic Chemistry of Drug Design and Drug Action.
Process chemistry is its own world with its own issues. Recommended texts here are Practical Process Research & Development by Neal Anderson and Process Development: Fine Chemicals from Grams to Kilograms by Stan Lee (no, not that Stan Lee) and Graham Robinson.
Case histories of successful past projects are found in Drugs: From Discovery to Approval by Rick Ng and also in Walter Sneader's Drug Discovery: A History.
Another book that focuses on a particular (important) area of drug discovery is Robert Copeland's Evaluation of Enzyme Inhibitors in Drug Discovery.
For chemists who want to brush up on their biology, readers recommend Terrence Kenakin's A Pharmacology Primer, Third Edition: Theory, Application and Methods and Molecular Biology in Medicinal Chemistry by Nogrady and Weaver.
Overall, one of the most highly recommended books across the board comes from the PK end of things: Drug-like Properties: Concepts, Structure Design and Methods: from ADME to Toxicity Optimization by Kerns and Di. For getting up to speed in this area, there's Pharmacokinetics Made Easy by Donald Birkett.
In a related field, the standard desk reference for toxicology seems to be Casarett & Doull's Toxicology: The Basic Science of Poisons. Since all of us make a fair number of poisons (as we eventually discover), it's worth a look.
There's a first set - more recommendations will come in a following post (and feel free to nominate more worthy candidates if you have 'em).
Category: Book Recommendations | Drug Development | Life in the Drug Labs | Pharmacokinetics | The Scientific Literature | Toxicology
November 25, 2009
I'll have the Grand Recommended Med-Chem Book List up later today, but otherwise, blogging will be light over the next few days, what with Thanksgiving and all. A very happy feast to my readers who are celebrating, and hey, those of you in other countries, feel free to enjoy yourselves, too!
Category: Blog Housekeeping
November 24, 2009
I first published this recipe on the blog a couple of years ago, and I'd like to put it out there again for those readers who will be celebrating Thanksgiving this week. This is a slightly modified version of Craig Claiborne's recipe in the New York Times Cookbook. He was a Southerner himself, so he knew his pecan pie. Substitutions for the ingredients are listed after the recipe:
Melt 2 squares (2 oz.) baking chocolate with 3 tablespoons (about 43g) butter in a microwave or double boiler. Combine 1 cup (240 mL) corn syrup and 3/4 cup (150g) sugar in a saucepan and bring to a boil for 2 minutes, then mix the melted chocolate and butter into it. Meanwhile, in a large bowl, beat three eggs, then slowly add the chocolate mixture to them, stirring vigorously (you don't want to cook them with the hot chocolate goop).
Add one teaspoon (5 mL) of vanilla, and mix in about 1 1/2 cups of broken-up pecans, which I think should be about 150g. You can push that to nearly two cups and still get the whole mixture into a deep-dish pie shell, and I recommend going heavy on the nuts, since the pecan/goop ratio is one thing that distinguishes a home-made pie. Bake for about 45 minutes at 375 F (190C), and let cool completely before you attack it. Note that this product has an extremely high energy density - it's not shock-sensitive or anything, but make the slices fairly small.
Note for non-US readers: the baking chocolate can be replaced by 40 grams of cocoa powder (not Dutch-processed) and 28 grams of some sort of shortening (unsalted butter, vegetable shortening, oil, etc.) If you don't have corn syrup, then just use a total of 350g white sugar, and add 60 mL water to the recipe.
Category: Blog Housekeeping
A comment to yesterday's post made a point that seemed instantly familiar, but it's one that my own thoughts had never quite put together. All of us who do medicinal chemistry came out of academic labs; that's where you get the degrees you need to have to be hired. Many of us worked on the synthesis of complex molecules for those degrees, since that's traditionally been a preferred base for drug companies to hire from. (You get a lot of experience of different kinds of reactions that way, have to deal with setbacks and adversity, and have to learn to think for yourself. Plus, if you can put up with some of the people who do natural products synthesis, the thinking goes, you can put up with anything).
Here's the interesting part, though. People who do the glass-filament spiderweb-sculpture work that is total natural product synthesis will defend it on many grounds (some more defensible than others, in my view). They have, naturally enough, a bias in favor of that kind of work. But have those of us who've done that kind of chemistry and then moved on to industry ended up with the opposite bias? Have we reacted against the forced-march experience of some of our early training by resolving never to get stuck in such a situation again (which is reasonable), but at the same time resolved never to get stuck doing fancy synthesis again?
That one may not be so reasonable. And I don't mean that we avoid twenty-step syntheses for irrational reasons, because there are perfectly rational reasons for fleeing from such things in industrial work. But this bias might extend further. Take a workhorse reaction like palladium-catalyzed coupling - that's just what people tend to think of when they think of uninspiring industrial organic synthesis, two or three lumpy heteroaromatics stuck together with Suzuki couplings, yawn. One of my colleagues, though, recently mentioned that he saw too many people sticking with rather primitive conditions for such reactions and taking their 50% yields (and cleanup problems) as just the normal course of events. And he's got a point, I'd say. There really are better conditions to use as your default Pd coupling mixture than the ones from the mid-1990s. You don't always have to clean all the red-brown gunk out of your product after using dppf as your phosphine ligand, and good ol' tetrakis is not always the reagent of choice. But a lot of people just take the standard brew, throw their starting materials in there, and bang 'em together. Crank up the microwave some more if it doesn't work.
I can see how this happens. After all, the big point that people have to learn when they join a drug research effort is that chemistry is not an end in itself - it's a tool to make compounds for another end entirely. If you're just making analogs in the early stages of a new project, no one's going to care much if your yields are low, because the key thing is that you made the compounds. I've said myself (many times) that there are two yields in medicinal chemistry: enough, and not enough. Often, perhaps a little too often, five milligrams qualifies as "enough", which means that you can check off a box through some really brutal chemistry.
But at the same time, if you could make simple changes to your reaction conditions, or to the kinds of reactions you tend to run, you could potentially make more compounds (because you're not spending so much time cleaning them up), make them in higher yields (or make your limited amount of starting material stretch further), or make more interesting (and patentable) ones, too. I think that too many of us do tend to get stuck in synthetic ruts of various sorts.
Perhaps the main cause of this is the pressure of normal drug discovery work. But I do have to wonder if some of the problem is a bit of aversion to the latest, hottest reagent or technique coming out of the academic labs. To be sure, a lot of that stuff isn't so useful out here in what it pleases us to call the real world. But there are a lot of things we could stand to learn, as well. Palladium couplings used to be considered kind of out-there, too, you know. . .
Category: Academia (vs. Industry) | Life in the Drug Labs
November 23, 2009
While I'm putting up odd chemical structures today, I thought I'd add this one, Alasmontamine A, from the latest Organic Letters preprint stream. Natural products scare me.
Anyone who wants to take a crack at this one synthetically, you just go right ahead without me. It is pretty much a dimer, though, so it's only about half as awful as it looks. Which is still enough. It doesn't seem to have much biological activity, but if you can sell it as something to do with green chemistry, nanotech, or alternative energy, you should be able to round up some money, right?
Category: Chemical News
You know, I often think that I have too narrow a view of what kinds of structures can go into drug molecules. (That may come as a worrisome statement to some past and present colleagues of mine, who feel that my tolerances are already set a bit too wide!) But I do have limits; there are some structures that I just wouldn't make on purpose, and which I wouldn't submit for testing even if I made them by accident.
Surely ozonides fall into this category. But when I put the "Things I Won't Work With" stamp on them, at least as far as making them on scale and actually isolating them, some readers pointed out that people were investigating them for antimalarial activity. And here we are, with a new paper in J. Med. Chem. on their activity and properties.
Arterolane is the lead compound, which is in Phase III trials as a combination therapy. And it has to be one of the funkier structures ever to make it as far as Phase III, for sure, with both an ozonide and an adamantane in it. Those two, in fact, sort of cancel each other out - the steric hindrance of the adamantane is surely one of the things that makes the ozonide decide not to explode, as its smaller and more footloose chemical relatives would. You get blood levels of the stuff after oral dosing, a useful (although not especially long) half-life, and no show-stopping toxicity.
Endoperoxides are already known as antimalarials, thanks to the natural product artemisinin, which has led to two synthetic derivatives used as antimalarials. So the step to ozonides was, structurally, a small one, but must have been rather larger psychologically. And that's definitely not something to discount. I probably wouldn't have made compounds of this sort, and it's unnerving (even to me) that arterolane has gone further into the clinic than anything I've ever made. I have to congratulate the people who had the imagination to pursue these things.
Category: Drug Development | Infectious Diseases | Odd Elements in Drugs
November 20, 2009
I'm home today (sick children, etc.), so I'm blogging from next to my daughter's guinea pig cage rather than across the hall from my lab. But I have a lab-based question to throw out: what would you say is the chemistry technique or reagent with the worst publication-to-real-use ratio?
I have a couple of nominees to get things rolling. For reagent, I would like to advance the montmorillonite clay stuff. I cannot count how many papers I have seen on its use as a Lewis acid, catalyst, and all-around good thing to have, but I have never used it myself, never spoken with anyone who has, and never (to my recollection) heard it suggested as a possible thing to try when someone encountered a synthetic problem. For all I know it's a fine reagent, but its footprint does not seem to be very large. I actually have used benzotriazole, but I've never seen an actual container of montmorillonite K-10.
For general technique, I'm tempted to nominate ionic liquids. Man, are there ever a lot of publications on those things, but again, I've never actually encountered them in actual practice. I have heard second-hand of people trying them, so I guess that counts for something, but it still seems to be disproportionate compared to the avalanche of literature citations for the things. The craze seems to have peaked, but still not a week goes by that I don't see a paper.
Nominations? As with the book recommendation post, I'll assemble things into master lists.
Category: Life in the Drug Labs
So, according to this report, Merck is scouting out locations for a UK facility. No word if it's supposed to have a research component, but. . .as a correspondent points out, if only there were a large research campus that they could somehow get their hands on, convenient to both Cambridge and London, with all the facilities they might need. . .hmmm. . .
Category: Business and Markets
November 19, 2009
I get regular requests to recommend books on various aspects of medicinal chemistry and drug development. And while I have a few things on my list, I'm sure that I'm missing many more. So I wanted to throw this out to the readership: what do you think are the best places to turn? This way I can be more sure of pointing people in the right directions.
I'm interested in hearing about things in several categories - best introductions and overviews of the field (for people just starting out), as well as the best one-stop references for specific aspects of drug discovery (PK, toxicology, formulations, prodrugs, animal models, patent issues, etc.)
Feel free to add your suggestions in the comments, or e-mail them to me. I'll assemble the highest-recommended volumes into a master list and post that. Just in time for the holidays, y'know. . .
Category: Life in the Drug Labs | Pharma 101
The InVivo Blog has a good article on a controversy in the blood-thinning market. Plavix (clopidogrel) has a very strong share of that, of course, but since Effient (prasugrel) was finally approved, Lilly and Daiichi Sankyo are looking to take as much of that market as they can. And one opening might be that not everyone responds similarly to Plavix.
In some cases, that's because there are some drug-drug interactions, a problem the FDA has recently addressed. The proton pump inhibitors, especially, are metabolized through the CYP2C19 pathway. That's a problem, since that enzyme is needed to convert clopidogrel into its active form (Plavix, as it comes out of the pill, is a prodrug - its thiophene ring needs to get torn open). This sort of thing has been seen many times before - it's one of the many headaches that you can endure in drug development as you profile the metabolizing pathways for your drug candidate and compare them to the other compounds your patient population might be taking. There are some combinations that just will not work (several involving CYP3A4, which is often the first one you test for), and it looks like we can add Plavix/2C19 to the list.
But the population genetics of the 2C19 enzyme are rather heterogeneous. About a third of the patients taking Plavix have a less-active form of the enzyme to start with, and they might not respond as robustly to the drug. The FDA has emphasized this effect in its latest public health warning. That's an opportunity for Effient, since it doesn't go through that metabolic route.
The In Vivo people point out, though, that this story isn't being driven by the usual players. It's not the FDA that's pushed to find this out, and it's not even Eli Lilly. It's Medco and Aetna. They studied their insurance claims data to see if the numbers supported the proton pump inhibitor/Plavix interaction, found that they did, and publicized their findings - and that led to an actual observational trial from BMS and Sanofi, which confirmed the problem. Now Medco is going further, and is actually running its own observational study comparing Plavix and Effient. Their theory is that the efficacy that Lilly showed compared to Plavix was driven by the (deliberate, one assumes) inclusion of a high number of poor metabolizers.
Medco is getting ready for generic Plavix, and trying to keep its costs down by making the case that the drug will do the job just fine for most patients. They could, on the other hand, end up making the case for Effient in that poor-metabolizing third of the patients, which would also be interesting. Lilly would presumably settle for that, although they'd like even more of the market if they can get it, naturally.
And I have to say: I like this sort of thing. I like it a lot. This, to me, is how the system should work. Companies are pursuing their own competing interests, but in the end, we get a higher standard of care by finding out which drug really works for which patients. The motivation to do all this? Money, of course, earning it and saving it. This may sound crass, but I think that's a reliable, proven method to motivate people and companies, one that works even better than depending on their best impulses. You could even build an economic system around such effects, with some attention to channeling these impulses in ways that benefit the greatest number of people. Worth a try.
Category: Cardiovascular Disease | Clinical Trials | Regulatory Affairs
November 18, 2009
I was going over some thermodynamics the other day, and it hit me that this was just the sort of thing I always tried to avoid when I was actually taking chemistry courses in college and grad school. And here I was, looking it up voluntarily and even reading it with some pleasure. A couple of professors of mine would have been rather pleasantly surprised at the sight, though, since physical chemistry (especially) tended to exacerbate my often lazy approach to my course work.
When I look back on it, it's a very good thing that my graduate school curriculum only featured classes during the first year. Because I was trying to get away with more and more by doing less and less, and those two trend lines were heading toward an intersection. (Another example of that from my grad-school past can be found here). In the end, the chrome-plated jaws of destiny did not quite snap shut on my academic career, but it was a near thing. I can well recall being assigned problem sets in a course during my first year of grad school, with a strong probability of having to be called up to the board to work out a random one from the list in front of the professor and the class, and just not getting around to doing them.
So more than once, I'd be called upon to present a problem I hadn't actually bothered to look at. A classmate of mine, Bill, had a similar approach to his work, and he and I would sometimes end up side by side at the board, quietly saying things to each other like "You do any of these?" "Nope, me neither. This one look like the Eyring equation to you?" At the same time, I was ceasing to take notes in the class, finding that (for whatever reason) I wasn't getting much out of the lectures, and seemed to be doing OK by reading the material.
The professor involved noticed me sitting there without a notebook day after day, and called me in for a chat. "You seem to have ceased bringing any sort of writing implement to my lectures", he said. "I presume that there's some reason for that?" I stammered out some line about how I found that I was able to concentrate more on the material when I wasn't having to worry about getting it down on paper, and I could tell that he didn't buy that one for a minute. "I see. . ." he said slowly, and let me go. The next lecture (and you knew this sentence would start out that way), he was up at the board talking about More O'Ferrall plots or something of the sort, and in the middle of explaining one said ". . .then when you move into this quadrant the transition state is affected like so and does that look OK to you, Derek?"
Zzzzzip! Some home-security monitor circuit in my brain tripped, and I returned to reality with the unpleasant sensation of having been dropped into my seat from a helicopter. "Umm. . .no mistakes that I can see", I said, which was certainly true, and the professor gave me a narrow-eyed look. "Yes. . .no doubt".
So no, this couldn't have gone on in that style for too much longer, and it was with relief that I moved on to full-time lab work. But I still have little patience for lectures I find uninteresting. I'm just glad that no one's passing out exams afterwards. . .
Category: Graduate School
November 17, 2009
There's a new paper out in Nature that presents an intriguing way to look for off-target effects of drug candidates. The authors (a large multi-center team) looked at a large number of known drugs (or well-characterized clinical candidates) and their activity profiles. They then characterized the protein targets by the similarities of the molecules that were known to bind to them.
That gave a large number of possible combinations - nearly a million, actually, and in most cases, no correlations showed up. But in about 7,000 examples, a drug matched some other ligand set to an interesting degree. On closer inspection, some of these off-target effects turned out to be already known (but had not been picked up during their initial searching using the MDDR database). Many others turned out to be trivial variations on other known structures.
But what was left over was a set of 3,832 predictions of meaningful off-target binding events. The authors took 184 of these out to review them carefully and see how well they held up. 42 of these turned out to be already confirmed in the primary literature, although not reported in any of the databases they'd used to construct the system - that result alone is enough to make one think that they might be on the right track here.
Of the remaining 142 correlations, 30 were experimentally feasible to check directly. Of these, 23 came back with inhibition constants less than 15 micromolar - not incredibly potent, but something to think about, and a lot better hit rate than one would expect by chance. Some of the hits were quite striking - for example, an old alpha-blocker, indoramin, showed a strong association for dopamine receptors, and turned out to be an 18 nM ligand for D4, which is better than it does on the alpha receptors themselves. In general, they uncovered a lot of new GPCR activities for older CNS drugs, which doesn't surprise me, given the polypharmacy that's often seen in that area.
But they found four examples of compounds that jumped into completely new target categories. Rescriptor (delavirdine), a reverse transcriptase inhibitor used against HIV, showed a strong score against histamine subtypes, and turned out to bind H4 at about five micromolar. That may not sound like much, but the drug's blood levels make that a realistic level to think about, and its side effects include a skin rash that's just what you might expect from such off-target binding.
There are some limitations. To their credit, the authors mention in detail a number of false positives that their method generated - equally compelling predictions of activities that just aren't there. This doesn't surprise me much - compounds can look quite similar to existing classes and not share their activity. I'm actually a bit surprised that their method works as well as it does, and look forward to seeing refined versions of it.
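For readers who'd like a more concrete picture of the scoring idea, here's a minimal sketch in Python. To be clear, this is not the paper's actual method: real implementations use chemical fingerprints (ECFP and the like) and normalize raw scores against a random background, and every feature set, target name, and threshold below is invented purely for illustration.

```python
# Toy version of ligand-set similarity scoring: represent each molecule
# as a set of structural features, score a drug against a target's
# known-ligand set by summing pairwise Tanimoto similarities above a
# cutoff, and rank the target sets by that score. All data here is
# made up; this only illustrates the shape of the calculation.

def tanimoto(a: frozenset, b: frozenset) -> float:
    """Tanimoto (Jaccard) similarity between two feature sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def raw_score(drug: frozenset, ligand_set, threshold: float = 0.3) -> float:
    """Sum of drug-vs-ligand similarities at or above the cutoff."""
    total = 0.0
    for lig in ligand_set:
        s = tanimoto(drug, lig)
        if s >= threshold:
            total += s
    return total

# Invented feature sets standing in for fingerprints
drug = frozenset({"indole", "piperidine", "benzamide"})
targets = {
    "alpha1_ligands": [frozenset({"indole", "piperidine", "phenol"}),
                       frozenset({"piperidine", "benzamide"})],
    "kinase_ligands": [frozenset({"aminopyrimidine", "anilide"})],
}

scores = {name: raw_score(drug, ligs) for name, ligs in targets.items()}
best = max(scores, key=scores.get)
print(best)  # alpha1_ligands - the set that shares features with the drug
```

The published systems go well beyond this, converting raw scores into expectation values so that a big ligand set doesn't win just by being big, but the ranking logic is the same in spirit.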
To my mind, this would be an effort well worth some collaborative support by all the large drug companies. A better off-target prediction tool would be worth a great deal to the whole industry, and we might be able to provide a lot more useful data to refine the models used. Anyone want to step up?
Update: be sure to check out the comments section for other examples in this field, and a lively debate about which methods might work best. . .
Category: Drug Assays | In Silico | Toxicology
I've gone through the blogroll, clearing out inactive sites and adding new ones. So welcome to Med-Chemist, Chemical Crystallinity, Synthetic Nature, Chiral Jones, and P212121!
And I've also added another category for chemistry and pharma database sites. There you'll find a quick way to access the copious piles of information from Drugbank, Emolecules, ChemSpider, PubChem, DailyMed, Druglib, and Clinicaltrials.gov.
As always, if I've left a blog (or your blog!) off the list, drop me an e-mail and let me know about it. If it's of potential interest to the readership here, on it goes.
Category: Blog Housekeeping
I've been remiss in not mentioning this, but I just found out recently that Warren DeLano (the man behind the excellent open-source PyMOL program) passed away suddenly earlier this month. He was 37 - another unfortunate loss of a scientist who had done a lot of fine work and was clearly on the way to doing much more.
I notice that as I write this I have a PyMOL window open on my desktop; I use the program regularly to look at protein structures. Si monumentum requiris, circumspice.
Category: Current Events | In Silico
November 16, 2009
Over the weekend, the results of a small cardiovascular trial came out that compared Merck's Zetia (ezetimibe) against Abbott's Niaspan (time-release niacin). Niacin's an underappreciated therapy in the field - it has tolerability problems, mainly irritating and uncomfortable hot flushing, but it really does seem to help normalize lipid numbers. (And that's why Merck itself, among others, has taken cracks at the market).
This latest trial was a small one, but people have been starved for data on Zetia ever since it took a surprising hit (in the ENHANCE trial) suggesting that it might not be very efficacious. There's an ongoing larger trial that should answer this question once and for all, but those numbers won't be showing up for another two years. For now, anything that can help clarify what's going on is of great interest to Merck, its investors, and to cardiologists and their patients.
And Matthew Herper at Forbes is right: these latest numbers are disastrous. The study (funded by Abbott) isn't the greatest piece of clinical research in the world - it didn't study nearly as many patients as it was designed to, since it was halted early. (Here it is in the NEJM). But it still shows Niaspan as clearly superior to Zetia, and it makes a person wonder if taking Zetia is basically an expensive way to take a possibly-inadequate dose of simvastatin. In a way, the relatively small size of the study actually helps it a bit - getting numbers that definitive without having to go to much larger sample sizes isn't so easy in cardiovascular trials, so the feeling is that there must be something here.
As Herper's article details, Merck is trying to spin this as a big win for their competition, not a big loss for their own drug. But that comes close to being logically impossible: cholesterol lowering, like many other therapeutic areas, is nearly a zero-sum game. If patients take Niaspan (or any other competing drug), they're not going to be taking Zetia. This one was certainly a victory for Abbott (and generic niacin, for those who can take it), but it was a loss for Merck as well.
The FDA's not coming out of all this looking very good, either:
"How is it possible for a drug to have $4 billion in sales without any evidence of benefit?" says Harlan Krumholz, a cardiologist at Yale University. He said that the small size of the two imaging studies means they couldn't render a clear verdict on Zetia. "But they don't instill any confidence in it either." Douglas Weaver, head of cardiology at the Henry Ford Hospital in Detroit, says: "We've used Zetia without sufficient amounts of clinical data to support it. Using it may be right, it may be wrong, but we don't know right now."
But it's worth remembering that Zetia's mode of action made perfect sense, and that it really does lower cholesterol to what you'd think would be a very beneficial degree. But it probably has several other effects beyond simple LDL lowering, and just looking at that number is clearly (in hindsight) not enough of a clinical surrogate marker. As the study authors put it:
If viewed properly, this hypothesis-generating finding is not an indictment of the overall importance of reducing LDL cholesterol for the purpose of preventing cardiovascular events, as illustrated by therapies based on statins or nonstatins (e.g., bile acid sequestrants). Rather, this adverse relationship may be attributable to the net effect of ezetimibe, a drug with diverse actions, not all of which are measured through its effects on intestinal cholesterol absorption and LDL cholesterol level. Taken together with a preexisting concern regarding the clinical effectiveness of ezetimibe, our findings challenge the usefulness of LDL cholesterol reduction as a guaranteed surrogate of clinical efficacy, particularly reduction achieved through the use of novel clinical compounds.
But as I recall, statins themselves were first approved based largely on lowered LDL, with better outcome data only showing up later. In that case, the surrogate marker paid off, but not this time. What all this is telling us, then, is that we don't know nearly as much about cholesterol and cardiology as we thought we did. And if we don't understand that area well enough, after all these years and all this effort, what parts of medicine do we really understand?
Category: Cardiovascular Disease | Clinical Trials
November 13, 2009
As many readers may have heard by now, Keith Fagnou of the University of Ottawa has suddenly died from what appears to be H1N1 influenza.
I'm awaiting confirmation of that diagnosis, which is worrisome for all sorts of other reasons, but whatever the cause, this is a loss for synthetic chemistry. Prof. Fagnou had published many interesting and useful papers on catalysis of bond-forming reactions, an area that's been growing steadily in importance for years and shows no signs of faltering. We need all the smart, capable people we can get working on such things, and I'm very sorry that we've lost one. Condolences to his family, colleagues, and friends.
Category: Current Events
When we screen zillions of compounds from our files against a new drug target, what can we expect? How many hits will we get, and what percentage of those are actually worth looking at in more detail?
These are long-running questions, but over the last twenty years some lessons have been learned. A new paper in J. Med. Chem. emphasizes one of the biggest ones: if at all possible, run your assays with some sort of detergent in them.
Why would you do a thing like that? Compound aggregation. The last few years have seen a rapidly growing appreciation of this problem. Many small molecules will, under some conditions, clump together in solution and make a new species that has little or nothing to do with their individual members. These new aggregates can bind to protein surfaces, mess up fluorescent readouts, cause the target protein to stick to their surfaces instead, and cause all kinds of trouble. Adding detergent to the assay system cuts this down a great deal, and any compound that's a hit without detergent but loses activity with it should be viewed with strong suspicion.
The authors of this paper (from the NIH's Chemical Genomics Center and Brian Shoichet's lab at UCSF) were screening against the cysteine protease cruzain, a target for Chagas disease. They ran their whole library of compounds through under both detergent-free and detergent conditions and compared the results. In an earlier screening effort of this sort against beta-lactamase, nearly 95% of the hits (many of them rather weak) turned out to be aggregator compounds. This campaign showed similar numbers.
There were 15 times as many apparent hits in the detergent-free assay, for one thing. Some of these were apparently activating the enzyme, which is always a bit of an odd thing to explain, since inhibiting enzyme activity is a lot more likely. These activators almost completely disappeared under the detergent conditions, though. And even looking just at the inhibitors, 90% of the hit set in the detergent-free assay went away when detergent was added. (I should note that control cruzain inhibitors performed fine under both sets of assays, so it's not like the detergent itself was messing with the enzyme to any significant degree).
They point out another benefit to the detergent assay - it seems to improve the data by keeping the enzyme from sticking to the walls of the plastic tubes. That's a real problem which can kick your data around all over the place - I've encountered it myself, and heard a few horror stories over the years. But it's not something that's well appreciated outside of the people who set up assays for a living (and not always even among some of them).
So, let's get rid of those nasty aggregators, right? Not so fast. It turns out that some of the compounds that showed this problem during the earlier beta-lactamase work didn't cause a problem here, and vice versa. Even using different assays designed to detect aggregation alone gave varying results among sets of compounds. It appears that aggregation is quite sensitive to the specific assay conditions you're using, so trying to assemble a blacklist of aggregators is probably not going to work. You have to check things every time.
One other interesting point from this paper (and the previous one): curators of large screening collections spend a lot of time weeding out reactive compounds. They don't want things that will come in and react nonspecifically with labile groups on the target proteins, and that seems like a reasonable thing to do. But in these screens, the compounds with "hot" functional groups didn't have a particularly high hit rate. You'd expect a cysteine protease to be especially sensitive to this sort of thing, with that reactive thiol right in the active site, but not so. This ties in with the work from Benjamin Cravatt's group at Scripps, suggesting that even fairly reactive groups have a lot of constraints on them - they have to line up just right to form a covalent bond, and that just doesn't happen that often.
So perhaps we've all been worrying too much about reactive compounds, and not enough about the innocent-looking ones that clump up while we're not looking. Detergent is your friend!
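The detergent counter-screen logic described above boils down to a simple triage rule. Here's a minimal sketch in Python, with invented compound names, invented activity numbers, and an assumed hit cutoff; real screens would of course use their own thresholds and replicate data:

```python
# Sketch of the detergent counter-screen triage described above: a compound
# that hits in the detergent-free run but loses activity when detergent is
# added is flagged as a suspected aggregator. All data here are invented.

ACTIVITY_CUTOFF = 50.0  # percent inhibition counted as a "hit" (assumed)

def classify(no_detergent: float, with_detergent: float) -> str:
    """Classify one compound from its % inhibition in the two assay runs."""
    if no_detergent < ACTIVITY_CUTOFF:
        return "inactive"
    if with_detergent < ACTIVITY_CUTOFF:
        return "suspected aggregator"  # activity vanished with detergent
    return "plausible hit"             # active under both conditions

# Hypothetical screening results: (compound, % inhib. w/o detergent, % with)
screen = [
    ("cmpd-001", 92.0, 88.0),
    ("cmpd-002", 75.0,  4.0),
    ("cmpd-003", 12.0,  9.0),
]

for name, no_det, with_det in screen:
    print(name, classify(no_det, with_det))
```

Note that, per the paper's own caveat, a clean pass through a rule like this under one set of conditions doesn't certify a compound as a non-aggregator under different ones.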
Category: Drug Assays | Life in the Drug Labs
November 12, 2009
There's a disturbing article out at the New England Journal of Medicine on studies conducted on Neurontin (gabapentin) for various unapproved indications. Parke-Davis (and later Pfizer) looked at a wide range of possible indications for the drug - migraine, neuropathic pain, bipolar disorder, and more. That in itself isn't unusual, since CNS drugs often have rather broad and poorly defined mechanisms, and it's not like we understand any of them all that well.
What is unusual is the pattern found when comparing the internal reports with the published versions that showed up in the literature. The authors found that:
"More than half the clinical trials that we included in our analysis (11 of 20) were not published as full-length research articles. For 7 of the 9 trials that were published as full-length research articles, a statistically significant primary outcome was reported, and for more than half these trials, the outcome specified in the published report differed from the outcome originally described in the protocol. Three of the four trials with an unchanged primary outcome had statistically significant results for the protocol-specified primary outcome. Secondary outcomes also frequently differed between the protocol and the published report. Thus, trials with findings that were not statistically significant (P≥0.05) for the protocol-defined primary outcome, according to the internal documents, either were not published in full or were published with a changed primary outcome. . .all the changes that took place between what was specified in the protocol, what was known before publication (as presented in the internal company research reports), and what was reported to the public led to a more favorable presentation in the medical literature. . ."
The authors go on to point out that changing a primary outcome after you see the data is, in fact, a statistical sin (although that's not quite the phrase they use!). You really can't go around doing that, because you can end up chasing after random chance (and avoiding that is the whole point of running well-controlled trials). This does not cover Pfizer and Parke-Davis with glory, but it's worth noting that there's plenty of blame to go around when it comes to this practice:
"Our study is based on a relatively small number of trials undertaken to test a single drug manufactured by a single company and its successors. Furthermore, if a major purpose of the studies we examined was to promote off-label uses of gabapentin, the selective reporting we observed could be more extreme than that observed for studies conducted for other reasons. Previous studies in different settings have shown evidence of these same biases, however. Indeed, selective outcome reporting does not appear to be limited to studies funded by drug companies. Chan and colleagues examined published trials funded by the Canadian Institutes of Health Research and found that 40% of stated primary outcomes differed between the protocol and the published report. In addition, we cannot be certain that selective reporting was a decision made by employees of Pfizer and Parke-Davis, since the authors of the published reports included nonemployees. We did not systematically assess the methodologic quality of the included trials as described in the publications we examined. Previous research has indicated that quality scores are higher for trials conducted by the pharmaceutical industry than for trials conducted by not-for-profit entities, although reports from industry-sponsored trials have potentially distorted the scientific record because of other, less easily measured study factors."
That doesn't get the folks who conducted these gabapentin studies off the hook, although I should note that Pfizer disputes the conclusions of this article (as you'd certainly think that they would). And it's also worth noting that some of its authors have done work for the plaintiffs in suits against Pfizer over gabapentin (thus all the familiarity with the internal company documents, which came to light during discovery proceedings). But again, I don't see how that negates the paper's conclusions, and if Pfizer has any hard data that would do so, I think they should produce it with all speed.
And no, it's just a coincidence that this post involves Pfizer, after I've been going on about their merger business all week. Unfortunately, I think that they're probably not the only company that could be pointed at. But we in the industry shouldn't have things like this for others to uncover in the first place. Should we?
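The statistical point above, about why picking your primary outcome after seeing the data is a sin, is easy to demonstrate with a quick simulation. If a trial measures several outcomes and you're free to promote whichever one came up significant, the chance of a spurious "positive" trial climbs well past the nominal 5%. Numbers of outcomes below are made up for illustration:

```python
# Simulating outcome-switching: under the null hypothesis, p-values are
# uniform on [0, 1], so the chance that at least one of k outcomes reaches
# p < 0.05 is 1 - 0.95**k. Roughly 5%, 23%, and 40% for k = 1, 5, 10.

import random

random.seed(0)

def false_positive_rate(n_outcomes: int, n_trials: int = 20000) -> float:
    """Fraction of simulated null trials where at least one of
    n_outcomes reaches p < 0.05."""
    hits = 0
    for _ in range(n_trials):
        if any(random.random() < 0.05 for _ in range(n_outcomes)):
            hits += 1
    return hits / n_trials

for k in (1, 5, 10):
    print(k, round(false_positive_rate(k), 3))
```

Which is exactly why the primary outcome gets nailed down in the protocol before anyone looks at the data.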
Category: Clinical Trials | The Central Nervous System | The Dark Side | The Scientific Literature
November 11, 2009
Or perhaps one should wait until spring - it's the wrong season for high-nitrogen mixtures to be applied:
Speaking at the Reuters Health Summit on Wednesday, Kindler said Pfizer has melded and reshaped its research and development facilities within 20 days of buying Wyeth on October 15. With previous huge mergers, he said, that process had taken "literally years."
. . .Swift reorganization of the two companies' research operations stands in contrast to "the distractions, the disruptions and the delays that have plagued mergers of our company and others in the past," Kindler said.
Category: Business and Markets
With the waves of layoffs going on, and all the nasty structural changes we're seeing in this business, it's easy to start feeling a toxic combination of fear and despair. And while I understand that, I'm going to try to briefly argue against it.
(1) I think that, in the years to come, people are most definitely going to need medicines. And by that, I mean new ones, because there are a lot of conditions out there that we can't treat very well. As the world gets (on the average) older and wealthier, this need will do nothing but increase. In many cases, pharmaceutical treatment is cheaper than waiting and having surgery or the like, so there's a large scale cost-saving aspect to this, too.
(2) I also think that many of these medicines are still going to be small molecules. Now, biological products can be very powerful, and can do things that we can't (as yet) do with small molecules - mind you, the reverse is true, too. And I think that biologics will gradually increase their share of the pharma world as we find out more about how to make and administer them. But it is very hard to beat an orally administered small molecule for convenience, cost, and patient compliance, and those are three very big factors.
(3) What we're witnessing now is a huge argument about how we're going to make those small molecule drugs, where we're going to make them, and who will do all those things. And it's driven by money, naturally. We don't have enough new products on the market, which means that we have to sell the ones we have like crazy (which leads to all sorts of other problems, legal and otherwise). At the same time, we're having to spend more and more money to try to get what drugs we can through the whole process. These trends appear unsustainable, especially when running at the same time.
(4) But as Herbert Stein used to say, if something can't go on, then it won't. Right now, the only way out that companies can see is to cut costs as hard as possible (and market as hard as possible). Those both bring in short-term results that you can point at. Long-term, well. . .probably not so good. But in that same long term, we're going to have to find better ways of discovering and developing drugs. If we can improve that process, the fix can come from that direction rather than from the budget-cutting one.
(5) And those improvements don't have to be incredible to make a big difference. We have a 90% failure rate in the clinic as it stands. If we could just work it to where we only lose 8 out of 10 drug candidates, that would double the number of new drugs coming to the market, which would cheer everyone up immensely.
(6) The questions are: can we improve R&D in time? Can we improve it with the resources we have? I think that the demand (and thus the potential rewards) is too great for a solution not to be found, if there's one out there. And we still know so little about what we do that I can't imagine that answers aren't out there somewhere. Who's going to find them? How long will it take? Where are they? I've no clue. But that looks like the way out to me.
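The arithmetic behind point (5) is simple enough to check. With a made-up cohort of 100 clinical candidates:

```python
# Checking point (5): moving from a 90% clinical failure rate to an 80%
# one doubles the number of drugs reaching the market. The candidate
# count is arbitrary, just to make the numbers concrete.

candidates = 100
success_now, success_better = 0.10, 0.20  # i.e., 90% vs. 80% failure

survivors_now = round(candidates * success_now)        # 10 approvals
survivors_better = round(candidates * success_better)  # 20 approvals

print(survivors_now, survivors_better, survivors_better / survivors_now)
```

A 10-percentage-point improvement in the failure rate, in other words, is worth a doubling of output, which is why even modest gains in predicting clinical success would be so valuable.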
Category: Business and Markets | Current Events | Drug Industry History
In an attempt to get the story out to a wider audience, I have a piece up at The Atlantic's Business site on the Pfizer layoffs, the J&J layoffs, and what's happening to the traditional expectations for the way research is done. This is going to be a long process, though, and I keep wondering if we're still just in the early parts of it. . .
Category: Business and Markets
A reader who's (unfortunately) in a position to know the details sends along some numbers on Pfizer's chemistry shakeout. According to his figures, Pfizer (pre-Wyeth merger) had about 900 chemists. The Wyeth deal brought in about 350, but no one expected the merged department to stay at 1250 - instead, the guess was that the new chemistry staff would be in the 1000 range, which is what I would have guessed, too.
But the chemistry head count is now apparently headed to about 850: smaller than it was before the merger. I have to assume that outsourced chemistry isn't included in this total, and that that's where the deficit is being made up. It is being made up, right? Pfizer isn't actually trying to become a bigger company with a smaller research staff - right? Posters and coffee mugs about working smarter and doing more with less can only take you so far, you know.
As I say, these are numbers from the inside, and I'll be glad to listen to (and post) corrections to them. But from what I'm hearing, this is accurate - and no one (especially at Wyeth) saw this coming on as hard as it has. . .
Category: Business and Markets
November 10, 2009
I mentioned the H-Cube hydrogenation machine here a couple of years ago as an early example of a commercial flow chemistry machine. As some readers may have guessed, my recent post on hydrogenations was partly inspired by a recent run of activity on this instrument, which came in quite handy.
Until the last couple of days, that is. Now there's a problem, and I'd be glad to hear from any H-Cube users who might know how to solve it. (If you haven't used one, you can probably bail out right now!) What's going on is: when I try to run a hydrogenation in "Full H2" mode, everything works fine until the H2 valve closes. The pump's fine, the flow through the instrument is fine. . .until the status switches to "Running". At that point the flow stops momentarily, then a gout of solvent runs from the outlet all at once, and then. . .nothing. Well, nothing except hydrogen gas - if I dip the outlet tube below the surface of some solvent, I can see that it's still producing that. But there's no flow. Lifting the solvent inlet from the reservoir, I can see that nothing's being taken up - an air bubble forms at the inlet, and just moves up and down.
So there's something going on when the system starts letting hydrogen into the flow, but I'm not sure what that might be. I can always call in the $250/hr folks, but I thought that throwing my problems out onto the blog was at least worth a try. Just to take care of some obvious fixes, so far I've cleaned the metal frit, replaced the Teflon membrane, sonicated the check valve, and tried changing catalyst cartridges. Anyone got any clues after that?
Category: Life in the Drug Labs
Pharmaconduct.org has another look at Pfizer's announcement yesterday, and tries to address some of the many unanswered questions left open by the company's press release. One thing that struck me (and many others) is that the company talked about "moving a number of functions" from sites like St. Louis and Collegeville, but did not come right out and say that they were closing. I understand that there's more than R&D that goes on in these places, but it still seems as if these moves will leave a lot of empty hallways, which you wouldn't think is the optimum solution.
A topic of local discussion has been the two Cambridge sites the new company has, and you can argue that one either way, too. "They do different things, and both of them should stay" goes up against "Why would you have two research sites in the same town if you didn't need to?" Yesterday's release was silent on this question, too.
Eric at Pharmaconduct has gone so far as to put together a database of Pfizer's moves over the last few years, in an attempt to figure out what they're up to. I wish him luck, and I'll follow the success of this effort with interest. I'm not sure if the company's behavior is subject to this kind of field-zoologist approach, but perhaps it is. At any rate, people with information to contribute can help him find out.
Category: Business and Markets
November 9, 2009
The company has issued a press release detailing which sites are staying, and which are going:
Pfizer will have five main research sites that will serve as central hubs for research activities in BioTherapeutics, PharmaTherapeutics and Vaccines. These sites are: Cambridge, Mass.; Groton, Conn.; Pearl River, N.Y.; La Jolla, Calif.; and Sandwich, U.K. These research-oriented laboratories will be supplemented by specialized research capabilities, such as monoclonal antibody discovery in San Francisco, regenerative medicine work in Cambridge, U.K., and research and development activities in Shanghai, China. . .
. . .As part of the consolidation of research sites, Pfizer will significantly reduce R&D activities at some of its sites. The company will move a number of functions from Collegeville, Pa.; Pearl River, N.Y.; and St. Louis to other locations and will discontinue R&D operations in Princeton, N.J.; Chazy, Rouses Point and Plattsburgh, N.Y.; Sanford and Research Triangle Park, N.C.; and Gosport, Slough/Taplow, U.K. In addition, Pfizer will consolidate R&D functions from its New London, Conn., site to its nearby research facility in Groton, Conn.
What we don't know (yet) is how many people will be let go from these sites, and how many will be offered a moving package. Of course, last time around, some people moved and were let go in yet another round, but the future is unwritten. . .more on this as more details emerge.
Category: Business and Markets
The latest issue of Nature Medicine has several short articles on patent issues, and is well worth a look if that's your sort of thing. I enjoyed this from the issue's lead editorial:
"An informal poll we conducted while preparing the focus on patents appearing in this issue (pp 1239–1243) disclosed that about two-thirds of scientists, particularly in Europe, don't know who owns the intellectual rights to the discoveries made in their labs. A similarly high proportion don't know if there are any provisions in their job contracts assigning them any rights over their discovery. And roughly half don't even know whether they are legally entitled to open a company based on their research."
As the piece goes on to explain, these turn out not to be particularly hard questions to answer. They are, in fact, answered in just the way you think they are, which makes me wonder a bit about the people who don't know these things. Let's see, even though you're not a lawyer, and don't know much about these things, and you're signing an agreement with a large entity that's very interested in this subject and can afford to pay for good legal advice about it. . .hmm, I wonder who could possibly have the advantage here?
Category: Patents and IP
There's a long, detailed article up over at Bloomberg on the recent run of huge fines for off-label promotion of drugs. Pfizer, Lilly, Bristol-Myers Squibb, and Schering-Plough all get mentioned in great detail.
And there's a key point from the whole depressing thing: the reason that marketing departments do this kind of thing is that it makes money. Even after you pay a billion dollars in fines, you can still come out ahead, and you might not even have to pay the fines. It's just being put down as a cost of doing business - it's a speeding ticket, and it's being weighed against the cost of driving under the legal limit.
But there's no way that our industry will gain - or regain - respect as long as we operate this way. Have the people involved priced that out as well?
Category: Business and Markets | The Dark Side | Why Everyone Loves Us
November 6, 2009
So what are we up to now, Day Three of Greater Merck? The merger with Schering-Plough went through earlier this week, and you won't get any more numbers by searching the stock tickers for SGP.
I find that weird, since I started my career there in the late 1980s/early 1990s. But while I was there, it seemed like there were mergers and rumors of mergers every few weeks. That's no doubt a hindsight-enhanced picture I have, but it's safe to say that I heard about S-P merging (or being purchased by) every single major player in the business during my years there. And it didn't happen (not then, at any rate).
My favorite moment came in about 1992 when a colleague came to my office one afternoon saying "It's us and Upjohn. Announced after the close of business on Friday. All of CNS is going to Kalamazoo". I hardly even looked up, uttering a one-word reply that compared this news flash to bovine waste.
"Why do you say that?", he replied. "You don't think it could happen?" "Of course I thing it could happen", I said. "But I'll bet against any specific prediction of when and who. Got any money on you?" "Why don't you think this is the real thing?" he asked again, to which I replied "Because I don't think that any deal this size, set to be announced on Friday, could be so screwed up that you and I would know about it on Tuesday afternoon".
"Well, I kind of see your point there. . .", he began. And of course that particular deal never happened. But I'm sure that there were others that nearly did. That's one of the things that goes on in the background of this industry - there are a lot of tentative discussions and what-if ideas that get looked at briefly (or sometimes not so briefly) which people outside of upper management never hear about. This stuff generally starts to leak (if it does) once it gets closer to really happening, and for every one that happens, there are several that get thought about but never quite work.
Of course, I'm using "work" in the sense of "get completed", not in the sense of "works out in the long run to the benefit of everyone involved". I'm not convinced that many drug company mergers fall into that latter category at all, and that goes for the Merck/Schering-Plough one, too. There don't seem to be any dramatic announcements coming out of the deal so far, and that probably means that the changes (which are, and have to be, coming) will just be delayed while the company takes stock of what it now has, and what it now is.
But, as someone from another company was saying to me last night, the bigger you are, the harder it is to do that. It takes longer before you feel that there's enough information to make a good decision, which is probably why Pfizer's current rearrangements are taking so agonizingly long to make themselves clear. That same decision-making extends, I think, to drug discovery and development issues, which is one reason I don't like the whole mega-company idea to start with.
There's also the groupthink problem. Pfizer, for example, was able to convince itself that inhaled insulin was going to be a big winner, even as people outside the company wondered if that could be quite right. (And not only was it not a big seller, it was an unprecedented disaster). I don't believe that people get any smarter in large groups. Quite the contrary. All that "wisdom of crowds" stuff, as I understand it, is about consulting large numbers of individual thinkers, not getting them all into one room and having them agree on something. Especially if some of the people in the room can decide the salaries and promotions of the rest of the crowd.
I wish both the Merck people and the Schering-Plough people well, and the combined company good fortune, and that's not just because I find myself a stockholder of it. But I wish it hadn't come to this, and I wish it wouldn't keep coming to this, either.
Category: Business and Markets | Drug Industry History
November 5, 2009
Resveratrol's a mighty interesting compound. It seems to extend lifespan in yeast and various lower organisms, and has a wide range of effects in mice. Famously, GlaxoSmithKline has expensively bought out Sirtris, a company whose entire research program started with resveratrol and similar compounds that modulate the SIRT1 pathway.
But does it really do that? The picture just got even more complicated. A group at Amgen has published a paper saying that when you look closely, resveratrol doesn't directly affect SIRT1 at all. Interestingly, this conclusion has been reached before (by a group at the University of Washington), and both teams conclude that the problem is the fluorescent peptide substrate commonly used in sirtuin assays. With the fluorescent group attached, everything looks fine - but when you go to the extra trouble of reading things out without the fluorescent tag, you find that resveratrol doesn't seem to make SIRT1 do anything to what are supposed to be its natural substrates.
"The claim of resvertraol being a SIRT1 activator is likely to be an experimental artifact of the SIRT1 assay that employs the Fluor de Lys-SIRT1 peptide as a substrate. However, the beneficial metabolic effects of resveratrol have been clearly demonstrated in diabetic animal models. Our data do not support the notion that these metabolic effects are mediated by direct SIRT1 activation. Rather, they could be mediated by other mechanisms. . ."
They suggest activation of AMPK (an important regulatory kinase that's tied in with SIRT1) as one such mechanism, but admit that they have no idea how resveratrol might activate it. Does that process still require SIRT1 at all? Who knows? One thing I think I do know is that this has something to do with this Amgen paper from 2008 on new high-throughput assays for sirtuin enzymes.
One wonders what assay formats Sirtris has been using to evaluate their new compounds, and one also wonders what they make of all this now at GSK. Does one not? We can be sure, though, that there are plenty of important things that we don't know yet about sirtuins and the compounds that affect them. It's going to be quite a ride as we find them out, too.
Category: Aging and Lifespan | Biological News | Drug Assays
November 4, 2009
You're supposed to disclose conflicts of interest if you're the author of a scientific paper. For the most part, everyone does, but it's those times that the system breaks down that cause all the trouble. Does this author actually earn a side income from Company X? Is that author actually about to start a new company based on the discovery that's being reported so breathlessly? And does this other author have a big stock position in company Y, whose price will be affected by this new paper? Journal editors want to know about these things, as do readers.
But how far do we go with this idea? An editorial in BioCentury (free PDF version) takes up arms against a new rule for non-financial disclosures from the International Committee of Medical Journal Editors. It requires authors to "report any personal, professional, political, institutional, religious, or other associations that a reasonable reader would want to know about in relation to the submitted work". And it's the inclusion of the words "personal", "political", and "religious" that could cause trouble.
Or maybe it's the inclusion of the word "reasonable". That's a common legal-argument adjective, but on the whole, people are not reasonable when it comes to their political or religious beliefs. (You may have noticed that there's been a debate for several centuries now about whether religious belief has anything to do with reason at all, but I think we'll try to stay out of that one). A dedicated atheist may consider it quite reasonable to want, say, any biological publication featuring Francis Collins of NIH to always feature a statement that Collins is a born-again Christian with a strong interest in reconciling his beliefs with scientific practice. An evangelical Christian reader, on the other hand, may want to have the biology papers flagged for the authors who do not see the hand of a Creator in their field of study. Which of these is "reasonable", if either?
The situation doesn't get any easier when you move towards politics. Do we really want to start listing party affiliations or the like? I realize that the journal editors have no intention of doing any such thing, but no one ever intends for the worms to get so far out of the can, either. When a really contentious issue comes up (such as global warming), plenty of reasonable readers (or perhaps I mean readers who are otherwise reasonable!) would want to see the complete political disclosure done on the authors of every paper, the better to sniff out Error, Self-Interest, and Collusion from either side of the debate.
How are we going to draw these particular lines, and how are we going to draw them in any kind of consistent fashion? Consistency is going to be very hard to achieve. The BioCentury piece points out a recent major disclosure glitch by the editors of the New England Journal of Medicine, and if we go into the full empty-out-your-pockets mode, I worry that the arguments may never cease.
And I've even made it to the last paragraph without mentioning the libertarian none-of-your-business objections to the whole idea. Your thoughts?
Category: The Scientific Literature
November 3, 2009
Johnson & Johnson says that it could be cutting up to 8,000 jobs. This has been in the wind for a while, but I haven't had any reports yet of what it's doing on the ground to the research sites. Any news from the readers affected, or is that yet to come?
Category: Business and Markets
Back in late September I wrote about a controversial paper in the Proceedings of the National Academy of Sciences. It attracted comment for its way-out-there hypothesis: that caterpillars and other larvae arose through a spectacular interspecies gene transfer rather than through conventional evolutionary processes. And it may have been the last paper to make it into the journal by the now-eliminated "Track III" route, which allowed members to essentially cherry-pick their own reviewers. This paper may well have hastened the disappearance of that system, actually - it created quite an uproar.
At the time, I wrote that the paper's hypothesis seemed very likely to be wrong, but at least the author had proposed some means to test it. Now in the latest PNAS come a letter and a full article on the subject. Both mention the testability of the original paper, and go on to point out that such tests have already been done. The paper is written in a tone of exasperation:
Williamson suggested that "many corollaries of my hypothesis are testable." We agree and note that most of the tests have already been carried out, the results of which are readily available in the recent literature and online databases. Here, we set aside (i) the complete absence of evidence offered by Williamson in support of his hypothesis, (ii) his apparent determination to ignore the enormous errors in current understanding of inheritance, gene expression, cell fate specification, morphogenesis, and other phenomena that are implied by his hypothesis, and (iii) the abundant empirical evidence for the evolution and loss of larval forms by natural selection. Instead, we focus on Williamson's molecular genetic predictions concerning genome size and content in insects, velvet worms, and several marine taxa, and we point out the readily available data that show those predictions to be easily rejected.
And you know, they really should set aside those first three points. Entertaining as it is to read this sort of thing, the real way to demolish a paper like Williamson's is to rip it up scientifically, rather than hurl insults at it (however well-justified they might be). There seems to be plenty of room to work in. For example, Williamson predicts that a class of parasitic barnacles will be found not to be barnacles at all, and to have an abnormally large genome, with material from three different sorts of organisms. Actually, though, these organisms have smaller genomes than usual, and from their genes they appear to be perfectly reasonable relatives of other barnacles.
And so on. Williamson predicts that the genomes of insects with caterpillar-like larval stages will tend to be larger than those without, but the data indicate, if anything, a trend in the opposite direction. His predictions for specific insects don't pan out, nor do his predictions about the genome size of velvet worms and many other cases. If I read the paper right, not one of Williamson's many predictions actually goes his way. In some cases, he appears to cite genome size data that line up with his hypothesis while omitting similar organisms that contradict it.
So that would appear to be that. Indeed, as the authors of the latest PNAS paper mention, one might have thought so years ago, since these very authors have shot down some of Williamson's work before. That's the real problem here. I have a lot of sympathy for people who are willing to be spectacularly wrong, but that starts to evaporate when they don't realize that they've been spectacularly wrong. Williamson appears to have had a fair hearing for his ideas, and as far as I can tell, they've come up well short. And while we need brave renegades, cranks are already in long supply.
November 2, 2009
There's a constant running battle in the drug industry between the two kinds of pharmaceutical companies: the ones who discover the drugs first, and the ones who sell the drugs cheaply after the patents have expired. It surprises me still how many people I run into (outside my work) who don't make that distinction, or who don't even realize that there is one.
But the generic industry is a very different place. Their research budgets are far smaller than the ones at the discovery companies, since they're only dealing with drugs that everyone already knows to work. Their own research is directed toward satisfying the regulatory requirement to show that they're making an equivalent substance, and toward finding ways to make it as cheaply as possible. And some of them are very good at it - some ingenious syntheses of marketed drugs have come out of the better generic shops. Of course, some real head-shaking hack work has, too, but that you can find everywhere.
The tension between the two types of company is particularly acute when a big-selling drug is nearing its patent expiration. It's very much in the interest of the generic companies to hurry that process along, so often they challenge the existing patents on whatever grounds they can come up with, figuring that the chances of success justify the legal expenses. Since the 1984 Hatch-Waxman act, there's been an even greater incentive, the so-called "Paragraph IV" challenge. A recent piece in Science now makes the case that this process has gotten out of control.
After four years of a drug's patent life, a generic company can file an Abbreviated New Drug Application (ANDA) and challenge existing patents on the grounds that they're either invalid or that the ANDA doesn't infringe them. (This, for example, is what happened when Teva broke into Merck's Fosamax patent, taking the drug generic about four years early). If the challenge is successful, which can take two or three years to be resolved, the generic company gets an extra bonus of 180 days of exclusivity. The authors of the Science piece say that this process is tipped too far toward the generic side, and it's cutting too deeply into the research-based companies. (As noted here, that's rather ironic, considering the current debate about such provisions for biologic drugs, where some parties have been citing the Hatch-Waxman regime as a wonderful success story in small molecules).
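To make the stakes of that timeline concrete, here's a back-of-the-envelope sketch of how the pieces fit together for a purely hypothetical drug. All of the specific dates and the three-year litigation estimate are made-up illustrative numbers, not any real product's history; only the four-year filing window and the 180-day exclusivity period come from the Hatch-Waxman framework described above.

```python
from datetime import date, timedelta

YEAR = 365  # rough; this back-of-the-envelope sketch ignores leap-day precision

# Hypothetical drug approved at the start of 2000, with patents running
# to 2014 -- illustrative numbers only.
approval = date(2000, 1, 1)
patent_expiry = date(2014, 1, 1)

# A Paragraph IV challenge can be filed four years after approval.
earliest_paragraph_iv_filing = approval + timedelta(days=4 * YEAR)

# Suppose the challenge succeeds after roughly three years of litigation.
litigation = timedelta(days=3 * YEAR)
generic_entry = earliest_paragraph_iv_filing + litigation

# The first successful challenger then gets 180 days of generic exclusivity.
challenger_exclusivity_end = generic_entry + timedelta(days=180)

years_lost = (patent_expiry - generic_entry).days / YEAR
print(f"Generic entry: {generic_entry}")
print(f"Patent life lost to the challenge: {years_lost:.1f} years")
```

Under these assumed numbers the innovator loses about seven years of patent-protected sales, which is the kind of arithmetic behind the Science authors' complaint.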
This all took a while to get rolling, but the big successes (such as the Fosamax example) have bred plenty of new activity. There are now five times as many Paragraph IV challenges as there were at the beginning of the decade. Teva, for example, which is one of the big hitters in the generic world, had 160 pending ANDAs in 2007, of which 92 were running under Paragraph IV. Here's a look at some recent litigation in the area, which has certainly enriched various attorneys, no matter what else it's done.
Under Hatch-Waxman, a new drug starts off with five years of "data exclusivity" during which a generic version can't be marketed. The Science authors argue that the losses from Paragraph IV now well outweigh the gains from this provision, and that the term should be extended (which would put it closer to those found in Europe, Canada, and Japan). They also bring up the possibility of selectively extending data exclusivity case-by-case or for certain therapeutic areas, but I have to say, this makes me nervous. There are too many opportunities for gamesmanship in that sort of system, and I think that one goal of a regulatory regime should be to make it resistant to that sort of thing.
But I do support the article's main point, which is that the whole generic industry depends on someone doing the work to discover new drugs in the first place, and we want to make sure that this engine continues to run. Politically, though, anything like this will be a very hard sell, since it'll be easy to paint it as a Cynical Giveaway to the Rapacious and Hugely Profitable Drug Companies. But speaking as someone working for the RHPDCs, I can tell you that we are indeed having a tougher time coming up with the new products with which to exploit the helpless masses. . .