About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship on his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
June 30, 2015
When you look at the stock charts of the major pharma companies, there's not a lot of excitement to be had. Until you get to Eli Lilly, that is. Over the last year, the S&P 500 is up about 5%, and most of the big drug stocks are actually negative (Merck -0.4%, Sanofi down 6%, J&J down 7%, AstraZeneca down 13%). Pfizer pulled away from the index in February, and has held on to that gain (up 13% from a year ago), but Lilly - those guys were doing about as well as Pfizer until the last month or two, but have just ratcheted up since then, for a 1-year gain of over 32%. Why them?
It's all Alzheimer's speculation, as this Bloomberg piece goes into. And as has been apparent recently, Alzheimer's is getting a lot of speculation these days. Biogen really revved things up with their own early-stage data a few months back, and since then, if you've got an Alzheimer's program - apparently, any Alzheimer's program whatsoever - you're worth throwing money at. Lilly, of course, has been (to their credit) pounding away at the disease for many years now, expensively and to little avail. One of their compounds (a gamma-secretase inhibitor) actually made the condition slightly worse in the treatment group (more here), while their beta-secretase inhibitor failed in the usual way. But they've also been major players in the antibody field. Their solanezumab was not impressive in the clinic, except possibly in the subgroup of early-stage patients, and Lilly (showing a great deal of resolve, and arguably some foolhardiness) has been running another Phase III trial in that population.
They also extended the existing trial in that patient group, and are due to report data on that effort very soon - thus the run-up in the company's stock. This is going to be very interesting, for sure - it would be great for Alzheimer's patients (and for Lilly) if the results are clearly positive, but that (sad to say) is the least likely outcome. (I'm not just being gloomy for the sake of being gloomy - Alzheimer's antibodies have had a very hard time showing efficacy under any circumstances, and the all-mechanisms clinical success rate against the disease is basically zero). The same goes, of course, for the new Phase III trial itself. Things could well come out clearly negative, with the possible good results from the earlier trial evaporating the way subgroup analyses tend to when you lean on them. Or - and this is the result I fear the most - there could be wispy sorta-kinda hints of efficacy, in some people, to some degree. Pretty much like the last trial, after which Lilly began beating the PR drums to make things look not so bad.
The reason I think that this would be the worst result is that there is so much demand for something, for anything that might help in Alzheimer's that there would be a lot of pressure on the FDA to approve Lilly's drug, even if it still hasn't proven to do much. And this latest trial really is its best chance. It's in exactly the population (the only population) that showed any possible efficacy last time, so if the numbers still come out all vague and shimmery under these conditions, that's a failure, as far as I can see. No one wants to be in the position of explaining statistics and clinical trial design to a bunch of desperate families who may be convinced that a real Alzheimer's drug is being held up by a bunch of penny-pinching data-chopping bureaucrats.
And this brings us to TauRx. I still get mail about them, seven years after they made big news with a methylene-blue-based Alzheimer's therapy program. When last heard from, they were in Phase III, with some unusual funding, but there were no scientific results from them for a while. The company, though, has published several papers recently (many available on their web site), talking about their program.
Here's a paper on their Phase II results. It's a bit confusing. Their 138 mg/day dose was the most effective; the higher dose was complicated by PK problems (see below). When you look at the clinical markers, it appears that the "mild" Alzheimer's patients were hardly affected at all (although the SPECT imaging results did show a significant difference on treatment). But the "moderate" Alzheimer's treatment group showed several differences in various cognitive decline scores at the 138 mg/day dose, but no difference in SPECT at all. Another paper, from JBC, talks about compound activity in various cell models of tau aggregation. And this one, from JPET, is their explanation for the PK trouble. It appears that the redox state of the methylene blue core has a big effect on dosing in vivo. There are problems with dissolution, absorption (particularly in the presence of food), and uptake of the compound in the oxidized (methylene blue) state (which they abbreviate as MTC, methylthioninium chloride), but these can be circumvented with a stable dosage form of the reduced leuco compound (abbreviated as LMTX). There's apparently a pH-dependent redox step going on in gastric fluid, so things have to be formulated carefully.
One of the other things that showed up in all this work was a dose-dependent hematological effect, apparently based on methylene blue's ability to oxidize hemoglobin. It's not known (at least in these publications) whether dosing the reduced form helps out with this, but it's potentially a dose-limiting toxicity. So here's the current state of the art:
Although we have demonstrated that MTC has potential therapeutic utility at the minimum effective dose, it is clear that MTC has significant limitations relative to LMTX, which make it an inferior candidate for further clinical development. MTC is poorly tolerated in the absence of food and is subject to dose-dependent absorption interference when administered with food. Eliminating the inadvertent delayed-release property of the MTC capsules did not protect against food interference. Therefore, as found in the phase 2 study, MTC cannot be used to explore the potential benefit of higher doses of MT. Nevertheless, the delayed-release property of the MTC capsules permitted the surprising discovery that it is possible to partially dissociate the cognitive and hematologic effects of the MT moiety. Whether the use of LMTX avoids or reduces the undesirable hematologic effects remains to be determined. . .
The Phase III trials are ongoing with the reduced form, and will clearly be a real finger-crossing exercise, both for efficacy and tox. I wish TauRx luck, though, as I wish everyone in the AD field good luck. None of us, you know, are getting any younger.
Category: Alzheimer's Disease | Clinical Trials | Drug Assays | Pharmacokinetics | Toxicology
June 24, 2015
Here's another Big Retrospective Review of drug pipeline attrition. This sort of effort goes back to the now-famous Rule-of-Five work, and readers will recall the Pfizer roundup of a few years back, followed by an AstraZeneca one (which didn't always recapitulate the Pfizer pfindings, either). This latest is a joint effort to look at the 2000-2010 pipeline performance of Pfizer, AstraZeneca, Lilly, and GSK all at the same time (using common physical descriptors provided to a third party, Thomson Reuters, to deal with the proprietary nature of the compounds involved). The authors explicitly state they've taken on board the criticisms of these papers that have been advanced in the past, so this one is meant to be the current state of the art in the area.
What does the state of the art have to teach us? 812 compounds are in the data set, with their properties, current status, and reasons for failure (if they have indeed failed, and believe me, those four companies did not put eight hundred compounds on the market in that ten-year period). The authors note that there still aren't enough Phase III compounds to draw as many conclusions as they'd like: 808 had a highest phase described, 422 of those were still preclinical, 231 were in Phase I, 145 in Phase II, 8 were in Phase III and 2 in Phase IV/postmarketing studies. These are, as the authors note, not quite representative figures, compared to industry-wide statistics, and reflect the fact that the participants clearly left some compounds (including several that went to market) out of their data sets. Considering the importance of the (relatively few) compounds in the late stages, this is enough to make a person wonder about how well conclusions from the remaining data set hold up, but at least something can be said about earlier attrition rates (where that effect is diluted).
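Those phase counts can be tallied in a couple of lines, just as a sanity check on the figures quoted here (this is my own arithmetic, not anything from the paper):

```python
# Highest development phase reached by the 808 compounds for which one
# was reported, using the counts quoted above.
phases = {
    "preclinical": 422,
    "Phase I": 231,
    "Phase II": 145,
    "Phase III": 8,
    "Phase IV": 2,
}

total = sum(phases.values())
assert total == 808  # the quoted counts are internally consistent

for phase, n in phases.items():
    print(f"{phase:>12}: {n:4d}  ({100 * n / total:5.1f}%)")
```

More than half the set never left preclinical, and only ten compounds sit in Phase III or beyond - which is exactly the thinness the authors are flagging.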
605 of the compounds in the set were listed as terminated projects, and 40% of those were chalked up to preclinical tox problems. Second highest, at 20%, was (and I quote) "rationalization of company portfolios". I divide that category, myself, into two subcategories: "We had to save money, and threw this overboard" and "We realized that we never should have been doing this at all". The two are not mutually exclusive. As the paper puts it:
. . .these results imply that substantial resources are invested in research and development across the industry into compounds that are ultimately simply not desired or cannot be progressed for other reasons (for example, agreed divestiture as part of a merger or acquisition). In addition, these results suggest that frequent strategy changes are a significant contributor to lack of research and development success.
You think? Maybe putting some numbers on this will hammer the point home to some of the remaining people who need to understand it. One can always hope. At any rate, when you analyze the compounds by their physicochemical properties, you find that pretty much all of them are within the accepted ranges. In other words, the lessons of all those earlier papers have been taken on board (and in many cases, were part of med-chem practice even before all the publications). It's very hard to draw any conclusions about progression versus physical properties from this data set, because the physical properties just don't vary all that much. The authors make a try at it, but admit that the error bars overlap, which means that I'm not even going to bother.
What if you take the set of compounds that were explicitly marked down as failing due to tox, and compare those to the others? No differences in molecular weight, no differences in cLogP, no differences in cLogD, and no differences in polar surface area. I mean no differences, really - it's just solid overlap across the board. The authors are clearly uncomfortable with that conclusion, saying that ". . .these results appear inconsistent with previous publications linking these parameters with promiscuity and with in vivo toxicological outcomes. . .", but I wonder if that's because those previous publications were wrong. (And I note that one such previous publication has already come to conclusions like these). Looking at compounds that failed in Phase I due to explicit PK reasons showed no differences at all in these parameters. Comparing compounds that made it only to Phase I (and failed for any reason) versus the ones that made it to Phase II or beyond showed, just barely, a significant effect for cLogP, but no significant effect for cLogD, molecular weight, or PSA. And even that needs to be interpreted with caution:
. . .it is not sufficiently discriminatory to suggest that further control of lipophilicity would have a significant impact on success. Examination of how the probabilities of observing clinical safety failures change with calculated logP and calculated logD7.4 by logistic regression showed that there is no useful difference over the relevant ranges. . .
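For anyone curious what that logistic-regression check looks like in practice, here's a minimal sketch. The compound set in the paper is proprietary, so the data below are synthetic: cLogP drawn from an assumed drug-like range, failures assigned at random with no cLogP dependence, which builds in the paper's "no useful difference" conclusion by construction rather than reproducing its analysis.

```python
import math
import random

# Synthetic stand-in data: drug-like cLogP spread, ~25% clinical-safety
# failure rate, and (by construction) no link between the two.
random.seed(0)
n = 500
clogp = [random.gauss(3.0, 1.2) for _ in range(n)]
failed = [1 if random.random() < 0.25 else 0 for _ in range(n)]

# Fit p(fail) = 1 / (1 + exp(-(b0 + b1*cLogP))) by plain gradient ascent
# on the log-likelihood.
b0 = b1 = 0.0
lr = 0.3
for _ in range(4000):
    g0 = g1 = 0.0
    for x, y in zip(clogp, failed):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        g0 += y - p
        g1 += (y - p) * x
    b0 += lr * g0 / n
    b1 += lr * g1 / n

# With no real relationship in the data, the fitted slope sits near zero:
# knowing cLogP tells you essentially nothing about the odds of failure.
print(f"intercept = {b0:.2f}, slope on cLogP = {b1:.2f}")
```

A slope indistinguishable from zero over the relevant cLogP range is what "no useful difference" means operationally: the predicted failure probability barely moves as lipophilicity changes.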
So, folks, if your compounds mostly fit within the envelope to start with (as these 812 did), you're not doing yourself any good by tweaking physicochemical parameters any more. To me, it looks like the gains from that approach were realized early on, by trimming the fringe compounds in each category, and there's not much left to be done. Those PowerPoint slides you have for the ongoing project, showing that you've moved a bit closer to the accepted middle ground of parameter space, and are therefore making progress? Waste of time. I mean that literally - a waste of time and effort, because the evidence is now in that things just don't work that way. I'll let the authors sum that up in their own words:
It was hoped that this substantially larger and more diverse data set (compared with previous studies of this type) could be used to identify meaningful correlations between physicochemical properties and compound attrition, particularly toxicity-based attrition. . .However, beyond reinforcing the already established general trends concerning factors such as lipophilicity (and that none too strongly - DBL), this did not prove generally to be the case.
Nope, as the data set gets larger and better curated, these conclusions start to disappear. That, to be sure, is (as mentioned above) partly because the more recent data sets tend to be made up of compounds that are already mostly within accepted ranges for these things, but we didn't need umpteen years of upheaval to tell us that compounds that weigh 910 with logP values of 8 are less likely to be successful. Did we? Too many organizations made the understandable human mistake of thinking that changing drug candidate properties was some sort of sliding scale, that the more you moved toward the good parts, the better things got. Not so.
What comes out of this paper, then, is a realization that watching cLogP and PSA values can only take you so far, and that we've already squeezed everything out of such simple approaches that can be squeezed. Toxicology and pharmacokinetics are complex fields, and aren't going to roll over so easily. It's time for something new.
Category: Drug Assays | Drug Development | Drug Industry History | Pharmacokinetics | Toxicology
June 17, 2015
Why do we test new drug candidates on animals? The simple answer is that there's nothing else like an animal. There are clearly chemical and biological features of living systems that we don't yet understand, or even realize exist - the discovery of things like siRNAs is enough proof of that. So you're not going to be able to build anything from first principles; there isn't enough information. Your only hope is to put together something that matches the real thing as closely as possible, using original cells and tissues as much as possible.
The easiest way to do that, by far, is to just give your compounds to a real animal and see what happens. But you have to think carefully. Mice aren't humans, and neither are dogs (and nor are dogs mice, for that matter). Every species is different, sometimes in ways that make little difference, and sometimes in ways that can mean life or death. Animal testing is the only way to access the complexity of a living system, and the advantages of that outweigh the difficulties of figuring out all the differences when moving on to humans. But those difficulties are very real nonetheless. (One way around this would be to make animals with as many humanized tissues and systems as possible, although that's not going to make anyone any happier about testing drugs on them.) The other way is to try to recapitulate a living system in vitro.
But the cells in a living organ are different than the cells in a culture dish, both in ways that we understand and in ways that we don't. The architecture and systematic nature of a living organ (a pancreas, a liver) is very complex, and subject to constant regulation and change by still other systems, so taking one type of cell and growing it up in a roller bottle (or whatever) is just not going to recapitulate that. Liver cells, for example, will still do some liver-y things in culture. But not all of the things, and not all of them in the same way. And the longer they're grown in culture, the further they can diverge from their roots.
There has been a huge amount of work over the years trying to improve this situation. Growing cells in a more three-dimensional culture style is one technique, although (since we don't make blood vessels in culture tubes) there's only so far you can take that. Co-cultures, where you try to recreate the various populations of cell types in the original organ, are another. But those are tricky, too, because all the types of cell can change their behaviors in different ways under lab conditions, and their interactions can diverge as well. Every organ in a living creature is a mixture of different sorts of cells, not all of whose functions are understood by a long shot.
Ideally, you'd want to have many different such systems, and give them a chance to communicate with each other. After all, the liver (for example) is getting hit with the contents of the hepatic portal vein, full of what's been absorbed from the small intestine, and is also constantly being bathed with the blood supply from the rest of the body, whose contents are being altered by the needs of the muscles and other organs. And it's getting nerve signals from the brain along with hormonal signals from the gut and elsewhere, with all these things being balanced off against each other all the time. If you're trying to recreate a liver in a dish, you're going to have to recreate these things, or (more likely) realize that you have to fall short in some areas, and figure out what differences those shortfalls make.
The latest issue of The Economist has a look at the progress being made in these areas. The idea is to use the smallest cohorts of cells possible (these being obtained from primary human tissue), with microfluidic channels to mimic blood flow. (Here's a review from last year in Nature Biotechnology). It's definitely going to take years before these techniques are ready for the world, so when you see headlines about how University of X has made a real, working "(Organ Y) On a Chip!", you should adjust your expectations accordingly. (For one thing, no one's trying to build, say, an actual working liver just yet. These studies are all aimed at useful models, not working organs). There's a lot that has to be figured out. The materials from which you make these things, the sizes and shapes of the channels and cavities, the substitute for blood (and its flow), what nutrients, hormones, growth factors, etc. you have in the mix (and how much, and when) - there are a thousand variables to be tinkered with, and (unfortunately) hardly any of them will be independent ones.
But real progress has been made, and I have no doubt that it'll continue to be made. There's no reason, a priori, why the task should be impossible; it's just really hard. Worth the effort, though - what many people outside the field don't realize is how expensive and tricky running a meaningful animal study really is. Running a meaningful human study is, naturally, far more costly, but since the animal studies are the gatekeepers to those, you want them to be as information-rich, as reproducible, and as predictive as possible. Advanced in vitro techniques could help in all those areas, and (eventually) be less expensive besides.
Category: Animal Testing | Drug Assays | Toxicology
June 10, 2015
Potential trouble: thoughts of a link between cardiac birth defects and the antidepressant Zoloft (sertraline). Pfizer recently won such a case in Missouri, but the latest trial seems to have produced some internal documents that might lead to a different verdict. Since this is all in the context of lawsuits, the signal/noise (for an outside observer) is very, very poor, on both sides. But it's worth keeping an eye on.
Category: The Central Nervous System | Toxicology
May 27, 2015
Remember back when AstraZeneca was fighting off Pfizer's ardent, tax-issue-resolving embrace a year ago? One of their weapons was a pitch to their own shareholders about what potential their own pipeline had, and how much of that would presumably go to waste should the deal go through. Even at the time, people thought that their estimates of what was to come might be a bit optimistic. But I can't really fault them, because if someone were trying to buy me, I'd probably be willing to say all kinds of things to keep it from happening, too.
Well, one of those pipeline assets has just taken a major hit. Brodalumab, targeted against the IL-17 receptor, was part of a 2012 deal between AstraZeneca and Amgen to develop inflammation therapies. Late last November, the companies announced some good clinical results in psoriasis.
But now Amgen has dropped the project, and hard.
The decision was based on events of suicidal ideation and behavior in the brodalumab program, which Amgen believes likely would necessitate restrictive labeling.
"During our preparation process for regulatory submissions, we came to believe that labeling requirements likely would limit the appropriate patient population for brodalumab," said Sean E. Harper, M.D., executive vice president of Research and Development at Amgen.
That really would be a show-stopper - psoriasis is a cruel disease, but suicide is worse. It's surprising, though, that an antibody would have this as a side effect (I'll bet it was surprising to Amgen and AZ, for sure). That's certainly a real side effect of some drugs (it was one of the big factors that scuppered rimonabant, and its competitor taranabant back when). But those were CNS agents, and that's the sort of thing you always look out for in a new CNS drug. What's an antibody to an interleukin receptor doing causing the same problem?
Well, IL-17 certainly has roles in the brain (those recent papers will lead you to others). And given how painfully little we know about what's going on up there, it's certainly possible that these pathways could lead to such a side effect - I mean, how do suicidal thoughts form, mechanistically? Right, it's a black box like all those questions are. But wouldn't brodalumab have to cross the blood-brain barrier for that to happen?
That's very unlikely for an antibody, but (as the various efforts targeting beta-amyloid show), not impossible, either. But if that's what's going on, what it is is hideous bad luck, because no one is looking for a CNS effect to stop a peripheral antibody target. And if it's somehow a peripheral mechanism, feeding back to the brain via who-knows-how, that's hideous bad luck, too. I hope that at some point we find out more about what's going on here, out of sheer scientific curiosity.
Category: Business and Markets | The Central Nervous System | Toxicology
April 1, 2015
I wish that this were some sort of April Fool's entry, but it isn't. There appears to be an outside chance that Gilead's huge-selling Sovaldi (sofosbuvir) for hepatitis C has some cardiovascular problems. There have been a few reports from the field, and the FDA has asked for a label change when the drug is used in patients who have taken amiodarone. But this commentary at Medscape is arguing that the problem might be bigger.
The amiodarone interaction could be explained by inhibition of PGP, changing the pharmacokinetics of Sovaldi in some susceptible patients. (Amiodarone has odd PK and a particularly long half-life, raising the chances that you might see something). And/or there could be something intrinsic to sofosbuvir, and that's the open question (the kind of question you can really only answer once a drug's on the market). There's a large patient population taking the drug, and getting larger all the time, so if there's something out there to be seen, the adverse events should show up. But for now, Gilead is just waiting to see what happens - if anything.
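A quick illustration of why a long half-life raises the stakes: for repeated dosing of a drug with first-order elimination, the steady-state accumulation ratio is R = 1 / (1 - 2^(-tau/t1/2)). This is textbook PK arithmetic, not anything from the post, and the half-lives below are illustrative (amiodarone's terminal half-life is on the order of weeks and varies widely from patient to patient):

```python
def accumulation_ratio(half_life_days: float, dosing_interval_days: float) -> float:
    """Steady-state accumulation ratio for repeated first-order dosing:
    R = 1 / (1 - 2**(-tau / t_half))."""
    return 1.0 / (1.0 - 2.0 ** (-dosing_interval_days / half_life_days))

# Once-daily dosing at a few illustrative half-lives
for t_half in (1, 7, 50):
    r = accumulation_ratio(t_half, 1.0)
    print(f"t1/2 = {t_half:3d} days -> steady state is about {r:.0f}x a single dose")
```

The same arithmetic runs in reverse on discontinuation: a drug with a multi-week half-life takes months to wash out, which is why an interaction like this can surface even in patients who stopped taking amiodarone some time ago.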
Category: Infectious Diseases | Toxicology
January 20, 2015
I'm always looking out for new assays that might tell us what the heck is going on inside cells, so this paper caught my eye. The authors describe a new luciferase-based complementation assay for detecting protein-protein interactions. There are several things like this in the literature already (and for sale, too), but this one has what looks like a robust way to get the split-luciferase proteins expressed, and it seems to be picking up weaker and more transient interactions than most. For example, it's shown to pick up a specific ubiquitin ligase PPI that had been demonstrated by yeast two-hybrid assays, but never in living cells. (Depending on the signal/noise, this sensitivity could either be a bug or a feature!)
They also used this system on interactions of p53 (which has a good number of them), and found something interesting. The only-a-mother-could-love-it small molecule Nutlin-3a is believed to be an inhibitor of the p53-MDM2 interaction, but (as the current paper points out), this hasn't been conclusively demonstrated in living cells. This assay, though, confirmed that ". . .small-molecule PPI antagonists such as Nutlin-3a can selectively and rapidly disrupt preformed p53-Mdm2 complexes in living cells." But another reported PPI compound, SJ-172550, failed to show activity (its mechanism had already been reported as not just straight inhibition of the protein-protein interaction). RO-5963, another compound in this space, fared a bit better, but had a noticeably different profile than Nutlin-3a, which does argue for the ability of this assay to pick up fine details.
Stapled peptides have been used to target some of these p53 interactions as well, but conflicting data exist about just how well those work in this case. And the conflict continues: this assay showed some activity for ATSP-7041, but two other stapled peptides from the literature, SAHp53-8 and sMTide-02 (from that same ACS Chem Bio paper linked above) "exhibited no detectable ability to disrupt p53-Mdm2 or p53-Mdm4 complexes in living cells."
What they found, on further study, was that these stapled peptides seem to be cytotoxic, via some mechanism that has nothing to do with p53, and that this activity is inhibited by the presence of serum in the assay conditions. A cell-free assay system was developed, which indicated that the two problematic peptides were indeed able to disrupt the p53/MDM interactions as advertised - when they can get to it, that is. Adding serum to these cell-free assays didn't change the results, though, which rules out the possibility that something in serum simply binds the peptides and keeps them from doing their thing.
So that leaves the serum as doing something else to keep the stapled species from actually entering the cells. What seems to be happening, from further experiments, is that the compounds are actually damaging cell membranes, which gives them a chance to get in and show activity under serum-free conditions. Adding 10% serum to the assay, though, seemed to protect the membranes from disruption (and thus makes the compounds show as inactive in the resulting cell assay). This effect was seen on plain old fibroblast cells as well, so it's not specific to cancer cells. And it wasn't seen with all stapled peptides, either - mutant forms of these very ones, for example, didn't have the same effect. Nutlin-3a didn't have it, either.
The authors suggest that this might be the source of some of the conflicting data in the literature on the effects of stapled peptide compounds, especially in this p53 area. People had noted some serum effects and cytotoxicity before, but much of this was explained via p53-dependent mechanisms. What this work shows is that the membrane damage is intrinsic to some of these peptides, and that this is going to have to be taken into account in future cell assays across the field. There are, of course, plenty of nonstapled peptides that are capable of causing membrane damage (some of them on purpose, as in natural antibiotic peptides), so this doesn't mean that stapled peptides are universally trouble. What it does mean, though, is that measurements of their penetration into cells have just gotten more complicated.
Category: Biological News | Chemical Biology | Drug Assays | Toxicology
December 4, 2014
Eight or ten years ago, there was a good deal of excitement about non-mammalian small animal model systems for compound screening - specifically fish and frogs. More specifically, zebrafish and Xenopus. A number of small companies started up to do this sort of thing, and large companies paid attention as well. A correspondent, though, wrote me the other day noting that few (if any) of these companies seem to have made it. Phylonix and Znomics seem to be inactive, and InDanio, while apparently still with us, has a low profile. (Are there others?)
More generally, it's worth asking what's become of the whole idea. I've read some interesting papers over the years using these systems in compound screening, mostly uncovering effects on developmental pathways. But how has it worked out overall? There are still people working in the field, of course, but have they been able to get any traction in drug discovery? From my perspective, I just know that I seem to hear a lot less about this than I did a few years back.
Is there still a zebrafish case to be made, or one for the (evolutionarily closer) Xenopus? If so, is that case best made for developmental biology/embryology, or for more open-ended high-content phenotypic screening, or for toxicology? Thoughts welcome.
Category: Animal Testing | Drug Assays | Toxicology
November 24, 2014
Here, then, is the bottom of the drug-manufacturing barrel: the recent case in India where women at a sterilization clinic were poisoned by defective ciprofloxacin tablets. They were supposed to be getting 500 mg of the antibiotic, but after several deaths, analysis has shown that there was perhaps 300 mg of the actual drug present, and some zinc phosphide rat poison as well.
This is horrifying and inexplicable. Zinc phosphide is a smelly grey-to-black powder, and ciprofloxacin is white and odorless. It goes without saying that no facility processing antibiotic tablets should be preparing rodenticide as well, and there is no way that the two could be mixed short of absolutely criminal incompetence. The companies involved are two Indian generic manufacturers, Mahawar Pharmaceutical and Kavita Pharma. There are reports in the Indian press that when authorities raided the companies for samples of the drugs, a significant amount of drug material appeared to have been recently burnt.
India has major problems with corruption in its state-run health care, and there are suspicions in this case as well. (The press is also reporting that at least one of the companies has been fined for substandard or fake drugs in the recent past, which brings up the question of why the government was dealing with them now). And overall, the top end of Indian technology and medicine is something that the country can be proud of - but the bottom is a disgrace, as Indian citizens themselves are well aware.
Category: The Dark Side | Toxicology
November 20, 2014
There's a lot of effort (and a lot of money) going into targeted nanoparticle drug delivery. And that's completely understandable, because the way we dose things now, with any luck, will eventually come to seem primitive. So you used to just have people eat the compound, did you, or just poke it into their bloodstream with a sharp stick, and let it float around wherever it would and hope that it made it to the target without doing too much else? Quaint.
The nanoparticle idea, on the other hand, is to encapsulate the drug somehow in the layers of these tiny particles which will release it only under the right conditions. The outermost layers, meanwhile, are meant to be coated in ways (recognition peptides, usually) that send the payload to only the right cell types. Imagine a drug for lung cancer where all of the dose goes to the lungs, and all of it hits only the cancerous cells. You could put in the roughest, toughest chemotherapy agents available, because you wouldn't be stuck with poisoning the rest of the patient's body at a slightly slower rate than the cancer, which is how it works too much of the time now.
But that level of control is yet to come. We just got another read on this in the clinical results from Bind Therapeutics, one of the leading companies in this field. Bind is another Bob Langer-derived company - when other parts of the US (or other countries) talk about wanting to have humming biotech hubs of their own, they'd be happy just to have Bob Langer. Bind, under CEO Scott Minick, has deals with an impressive list of big pharma companies to try to apply their nanoparticle delivery systems to existing drugs, although Amgen pulled out of an arrangement with them over the summer.
That didn't help the stock, and neither did the latest news. This was a Phase II study in non-small-cell lung cancer patients with docetaxel, a widely used chemotherapy drug that could certainly use some targeted delivery. The results were mixed. Investors were clearly hoping for something better, but it could have been much worse. As that FierceBiotech link above details, the company saw some responders when the new formulation was dosed every three weeks, but not when it was dosed every week, an interesting result that's going to take some thinking about. Inside the every-three-weeks group, the patients with two particular tumor varieties (KRAS or squamous cell carcinoma) seemed to show relatively good responses. But the sample sizes there are small.
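Subgroup readouts like these are easy to over-read. As a quick sanity check on how weak small-sample signals can be, here's a sketch of a one-sided Fisher's exact test in plain Python - the response counts below are invented for illustration, not Bind's actual trial data:

```python
from math import comb

def fisher_exact_one_sided(a: int, b: int, c: int, d: int) -> float:
    """One-sided Fisher's exact test for a 2x2 table:

                 responders   non-responders
        arm 1        a              b
        arm 2        c              d

    Returns the probability of seeing >= a responders in arm 1
    under the null hypothesis that response rate is the same in
    both arms (hypergeometric tail sum).
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    # Sum probabilities of all tables at least as extreme as the observed one
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        p += comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)
    return p

# Invented counts: 4/10 responders in one subgroup vs. 1/12 in another
p = fisher_exact_one_sided(4, 6, 1, 11)
print(f"one-sided p = {p:.3f}")
```

Even a 40% vs. 8% response split comes out around p = 0.11 at these sample sizes, which is why "relatively good responses" in a handful of patients calls for another trial rather than a celebration.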
The company is planning another round of Phase II, concentrating on those subtypes and dropping the once-a-week dose. That's exactly what you do in Phase II: the drug has hit the real world with real patients in it, and you do whatever seems to work. It would have been great if they'd seen a bigger across-the-board response, but these are the early days of targeted nanoparticles. There's a vast amount we don't know about these things; the odds are huge that no one is going to be hitting any balls over any fences for a while yet. Bind's next trial should tell them, though, if their current docetaxel particle idea is worthwhile for NSCLC.
That could go either way. The current trial may turn out to have lit up just the sorts of patients who will go on to show impressive benefits, or those effects could just flatten out and slide back into the statistical swamp. Here it is, the absolute essence of drug discovery: there is no way to know in advance. The only way to find out is to round up some more patients, round up some more drug, and round up some more money and try it. Good luck to them!
Category: Cancer | Clinical Trials | Pharmacokinetics | Toxicology
October 2, 2014
Clinical trial failure rates are killing us in this industry. I don't think there's much disagreement on that - between the drugs that just didn't work (wrong target, wrong idea) and the ones that turn out to have unexpected safety problems, we incinerate a lot of money. An earlier, cheaper read on either of those would transform drug research, and people are willing to try all sorts of things to those ends.
One theory on drug safety is that particular molecular properties are more likely to lead to trouble. Several correlations have been proposed between high logP (greasiness) and tox liabilities, between multiple aromatic rings and tox, and so on. One rule proposed in 2008 by a group at Pfizer (the "3/75 rule") flags compounds with ClogP greater than 3 and total polar surface area less than 75 square angstroms: compounds on that side of the cutoffs were about 2.5 times more likely to run into toxicity trouble. But here's a paper in MedChemComm that asks if any of this has any validity:
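As a concrete illustration (not from the paper), the 3/75 rule is nothing more than a two-property filter. The compound names and property values below are hypothetical, and in practice ClogP and TPSA would come from a cheminformatics toolkit such as RDKit:

```python
def flags_3_75_risk(clogp: float, tpsa: float) -> bool:
    """Pfizer "3/75" rule: compounds with ClogP > 3 AND TPSA < 75 A^2
    land in the quadrant flagged as higher predicted tox risk."""
    return clogp > 3.0 and tpsa < 75.0

# Hypothetical property values for illustration only
candidates = {
    "cmpd_A": (4.2, 50.0),   # greasy and low-polarity -> flagged
    "cmpd_B": (2.1, 90.0),   # on the "safe" side of both cutoffs
    "cmpd_C": (3.5, 110.0),  # greasy but polar -> not flagged
}
for name, (clogp, tpsa) in candidates.items():
    verdict = "flagged" if flags_3_75_risk(clogp, tpsa) else "ok"
    print(f"{name}: ClogP={clogp}, TPSA={tpsa} -> {verdict}")
```

The paper's point, of course, is that a hard filter like this can throw away perfectly good compounds, depending entirely on the dataset the rule was derived from.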
What is the likelihood of real success in avoiding attrition due to toxicity/safety from using such simple metrics? As mentioned in the beginning, toxicity can arise from a wide variety of reasons and through a plethora of complex mechanisms similar to some of the DMPK endpoints that we are still struggling to avoid. In addition to the issue of understanding and predicting actual toxicity, there are other hurdles to overcome when doing this type of historical analysis that are seldom discussed.
The first of these is making sure that you're looking at the right set of failed projects - that is, ones that really did fail because of unexpected compound-associated tox, and not some other reason (such as unexpected mechanism-based toxicity, which is another issue). Or perhaps a compound could have been good enough to make it on its own under other circumstances, but the competitive situation made it untenable (something else came up with a cleaner profile at about the same time). Then there's the problem of different safety cutoffs for different therapeutic areas - acceptable tox for a pancreatic cancer drug will not cut it for type II diabetes, for example.
The authors did a thorough study of 130 AstraZeneca development compounds, with enough data to work out all these complications. (This is the sort of thing that can only be done from inside a company's research effort - you're never going to have enough information working from outside). What they found, right off, was that for this set of compounds the Pfizer rule was completely inverted: the compounds on the too-greasy side had actually shown fewer problems(!). The authors looked at the data sets from several different angles, and conclude that the most likely explanation is that the rule is just not universally valid, and depends on the dataset you start with.
The same thing happens when you look at the fraction of sp3 carbons, a characteristic (from the "Escape From Flatland" paper) that's also been proposed to correlate with tox liabilities. The AZ set shows no such correlation at all. Their best hypothesis is that a real correlation with pharmacokinetics has gotten mixed in with a spurious correlation with toxicity (and indeed, the first paper on this trend was only talking about PK). And finally, they go back to an earlier properties-based model published by other workers at AstraZeneca, and find that it, too, doesn't hold up on the larger, more curated data set. Their take-home message: ". . .it is unlikely that a model of simple physico-chemical descriptors would be predictive in a practical setting."
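That mixing hypothesis is easy to illustrate with a toy simulation (all numbers invented): give a property a genuine effect on PK failure, make true toxicity independent of it, and then record only a blended "failed in development" label, the way a messy retrospective dataset might:

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient, plain-Python version."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
n = 2000
fsp3 = [random.random() for _ in range(n)]  # stand-in for the property
# PK failure really does depend on the property...
pk_fail = [random.random() < 0.5 * (1 - x) for x in fsp3]
# ...but true toxicity is independent of it.
tox_fail = [random.random() < 0.2 for _ in fsp3]
# The historical record only says "failed", mixing the two causes.
recorded_fail = [p or t for p, t in zip(pk_fail, tox_fail)]

r_tox = pearson_r(fsp3, [float(t) for t in tox_fail])
r_rec = pearson_r(fsp3, [float(f) for f in recorded_fail])
print(f"r(property, true tox)      = {r_tox:+.3f}")
print(f"r(property, recorded fail) = {r_rec:+.3f}")
```

With these made-up numbers, the property shows essentially zero correlation with true toxicity but a clear negative correlation with recorded failures - a PK signal masquerading as a tox signal, which is just the sort of artifact the authors suspect.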
Even more worrisome is what happens when you take a look at the last few years of approved drugs and apply such filters to them (emphasis added):
To investigate the potential impact of following simple metric guidelines, a set of recently approved drugs was classified using the 3/75 rule (Table 3). The set included all small molecule drugs approved during 2009–2012 as listed on the ChEMBL website. No significant biases in the distribution of these compounds can be seen from the data presented in Table 3. This pattern was unaffected if we considered only oral drugs (45) or all of the drugs (63). The highest number of drugs ends up in the high ClogP/high TPSA class and the class with the lowest number of drugs is the low ClogP/low TPSA. One could draw the conclusion that using these simplistic approaches as rules will discard the development of many interesting and relevant drugs.
One could indeed. I hadn't seen this paper myself until the other day - a colleague down the hall brought it to my attention - and I think it deserves wider attention. A lot of drug discovery organizations, particularly the larger ones, use (or are tempted to use) such criteria to rank compounds and candidates, and many of us are personally carrying such things around in our heads. But if these rules aren't valid - and this work certainly makes it look as if they aren't - then we should stop pretending as if they are. That throws us back into a world where we have trouble distinguishing troublesome compounds from the good ones, but that, it seems, is the world we've been living in all along. We'd be better off if we just admitted it.
Category: Drug Assays | Drug Development | In Silico | Toxicology
September 3, 2014
A reader sends along this query, and since I've never worked around monoclonal antibodies, I thought I'd ask the crowd: how much of a read on safety do you get with a mAb in Phase I? How much Phase I work would one need in order to feel safe going on to Phase II, from a tox/safety standpoint? Any thoughts are welcome. I suspect the answer is going to depend greatly on what the antibody in question is being raised against.
Category: Drug Development | Toxicology
August 21, 2014
So here's a question for the medicinal chemists: why don't we like bromoaromatics so much? I know I don't, but I have trouble putting my finger on just why. I know that there's a ligand efficiency argument to be made against them - all that weight, for one atom - but there are times when a bromine seems to be just the thing. There certainly are such structures in marketed drugs. Some of the bad feelings around them might linger from the sense that bromine is a sort of unnatural element, as opposed to chlorine, which in the form of chloride is everywhere in living systems.
But bromide? Well, for what it's worth, there's a report that bromine may in fact be an essential element after all. That's not enough to win any arguments about putting it into your molecules - selenium's essential, too, and you don't see people cranking out the organoselenides. But here's a thought experiment: suppose you have two drug candidate structures, one with a chlorine on an aryl ring and the other with a bromine on the same position. If they have basically identical PK, selectivity, preliminary tox, and so on, which one do you choose to go on with? And why?
If you chose the chloro derivative (and I think that most medicinal chemists instinctively would, for just the same hard-to-articulate reasons we're talking about), then what split in favor of the bromo compound would be enough to make you favor it? How much more activity, PK coverage, etc. do you need to make you willing to take a chance on it instead?
Category: Drug Development | Odd Elements in Drugs | Pharmacokinetics | Toxicology
July 18, 2014
There's a new report in the literature on the mechanism of thalidomide, so I thought I'd spend some time talking about the compound. Just mentioning the name to anyone familiar with its history is enough to bring on a shiver. The compound, administered as a sedative/morning sickness remedy to pregnant women in the 1950s and early 1960s, famously brought on a wave of severe birth defects. There's a lot of confusion about this event in the popular literature, though - some people don't even realize that the drug was never approved in the US, thanks to a famous save by the (then much smaller) FDA and especially by Frances Oldham Kelsey. And even those who know a good amount about the case can be confused by the toxicology, because it is confusing: no phenotype in rats, but big reproductive tox trouble in mice and rabbits (and humans, of course). And as I mentioned here, the compound is often used as an example of the far different effects of different enantiomers. But practically speaking, that's not the case: thalidomide has a very easily racemized chiral center, which gets scrambled in vivo. It doesn't matter if you take the racemate or a pure enantiomer; you're going to get both isomers once it's in circulation.
The compound's horrific effects led to a great deal of research on its mechanism. Along the way, thalidomide itself was found to be useful in the treatment of leprosy, and in recent years it's been approved for use in multiple myeloma and other cancers. (This led to an unusual lawsuit claiming credit for the idea). It's a potent anti-angiogenic compound, among other things, although the precise mechanism is still a matter for debate - in vivo, the compound has effects on a number of wide-ranging growth factors (and these were long thought to be the mechanism underlying its effects on embryos). Those embryonic effects complicate the drug's use immensely - Celgene, who got it through trials and approval for myeloma, have to keep a very tight patient registry, among other things, and control its distribution carefully. Experience has shown that turning thalidomide loose will always end up with someone (i.e. a pregnant woman) getting exposed to it who shouldn't be - it's gotten to the point that the WHO no longer recommends it for use in leprosy treatment, despite clear evidence of benefit, purely because of those problems of distribution and control.
But in 2010, it was reported that the drug binds to a protein called cereblon (CRBN), and this mechanism implicated the ubiquitin ligase system in the embryonic effects. That's an interesting and important pathway - ubiquitin is, as the name implies, ubiquitous, and addition of a string of ubiquitins to a protein is a universal disposal tag in cells: off to the proteasome, to be torn to bits. It gets stuck onto exposed lysine residues by the aforementioned ligase enzyme.
But less-thorough ubiquitination is part of other pathways. Other proteins can have ubiquitin recognition domains, so there are signaling events going on. Even poly-ubiquitin chains can be part of non-disposal processes - the usual oligomers are built up using a particular lysine residue on each ubiquitin in the chain, but there are other lysine possibilities, and these branch off into different functions. It's a mess, frankly, but it's an important mess, and it's been the subject of a lot of work over the years in both academia and industry.
The new paper has the crystal structure of thalidomide (and two of its analogs) bound to the ubiquitin ligase complex. It looks like they keep one set of protein-protein interactions from occurring while the ligase end of things is going after other transcription factors to tag them for degradation. Ubiquitination of various proteins could be either up- or downregulated by this route. Interestingly, the binding is indeed enantioselective, which suggests that the teratogenic effects may well be down to the (S) enantiomer, not that there's any way to test this in vivo (as mentioned above). But the effects of these compounds in myeloma appear to go through the cereblon pathway as well, so there's never going to be a thalidomide-like drug without reproductive tox. If you could take it a notch down the pathway and go for the relevant transcription factors instead, post-cereblon, you might have something, but selective targeting of transcription factors is a hard row to hoe.
Category: Analytical Chemistry | Biological News | Cancer | Chemical News | Toxicology
July 2, 2014
Yesterday's link to the comprehensive list of chemical-free products led to some smiles, but also to some accusations of preaching to the choir, both on my part and on the part of the paper's authors. A manuscript mentioned in the blog section of Nature Chemistry is certainly going to be noticed mostly by chemists, naturally, so I think that everyone responsible knows that this is mainly for some comic relief, rather than any sort of serious attempt to educate the general public. Given the constant barrage of "chemical-free" claims, and what that does to the mood of most chemists who see them, some comedy is welcome once in a while.
But the larger point stands. The commenters here who said, several times, that chemists and the public mean completely different things by the word "chemical" have a point. But let's take a closer look at this for a minute. What this implies (and implies accurately, I'd say) is that for many nonscientists, "chemical" means "something bad or poisonous". And that puts chemists in the position of sounding like they're arguing from the "No True Scotsman" fallacy. We're trying to say that everything is a chemical, and that they range from vital to harmless to poisonous (at some dose) and everything in between. But this can sound like special pleading to someone who's not a scientist, as if we're claiming all the good stuff for our side and disavowing the nasty ones as "Not the kind of chemical we were talking about". (Of course, the lay definition of chemical does this, with the sign flipped: the nasty things are "chemicals", and the non-nasty ones are. . .well, something else. Food, natural stuff, something, but not a chemical, because chemicals are nasty).
So I think it's true that approaches that start off by arguing the definition of "chemical" are doomed. It reminds me of something you see in online political arguments once in a while - someone will say something about anti-Semitism in an Arab country, and likely as not, some other genius will step in with the utterly useless point that it's definitionally impossible, you see, for an Arab to be an anti-Semite, because technically the Arabs are also a Semitic people! Ah-hah! What that's supposed to accomplish has always been a mystery to me, but I fear that attempts to redefine that word "chemical" are in the same category, no matter how teeth-grinding I find that situation to be.
The only thing I've done in this line, when discussing this sort of thing one-on-one, is to go ahead and mention that to a chemist, everything that's made out of atoms is pretty much a "chemical", and that we don't use the word to distinguish between the ones that we like and the ones that we don't. I've used that to bring up the circular nature of some of the arguments on the opposite side: someone's against a chemical ingredient because it's toxic, and they know it's toxic because it's a chemical ingredient. If it were "natural", things would be different.
That's the point to drop in the classic line about cyanide and botulism being all-natural, too. You don't do that just to score some sort of debating point, though, satisfying though that may be - I try not to introduce that one with a flourish of the sword point. No, I think you want to come in with a slightly regretful "Well, here's the problem. . ." approach. The idea, I'd say, is to introduce the concept of there being a continuum of toxicity out there, one that doesn't distinguish between man-made compounds and natural ones.
The next step after that is the fundamental toxicological idea that the dose makes the poison, but I think it's only effective to bring that up after this earlier point has been made. Otherwise, it sounds like special pleading again: "Oh, well, yeah, that's a deadly poison, but a little bit of it probably won't hurt you. Much." My favorite example in this line is selenium. It's simultaneously a vital trace nutrient and a poison, all depending on the dose, and I think a lot of people might improve their thinking on these topics if they tried to integrate that possibility into their views of the world.
Because it's clear that a lot of people don't have room for it right now. The common view is that the world is divided into two categories of stuff: the natural, made by living things, and the unnatural, made by humans (mostly chemists, dang them). You even see this scheme applied to inorganic chemistry: you can find people out there selling makeup and nutritional supplements who charge a premium for things like calcium carbonate when it's a "natural mineral", as opposed (apparently) to that nasty sludge that comes out of the vats down at the chemical plant. (This is also one of the reasons why arguing about the chemist's definition of "organic" is even more of a losing position than arguing about the word "chemical").
There's a religious (or at least quasi-religious) aspect to all this, which makes the arguments emotional and hard to win by appeals to reason. That worldview I describe is a dualist, Manichean one: there are forces of good, and there are forces of evil, and you have to choose sides, don't you? It's sort of assumed that the "natural" world is all of a piece: living creatures are always better off with natural things. They're better; they're what living creatures are meant to consume and be surrounded by. Anything else is ersatz, a defective substitute for the real thing, and quite possibly an outright work of evil by those forces on the other side.
Note that we're heading into some very deep things in many human cultures here, which is another reason that this is never an easy or simple argument to have. That split between natural and unnatural means that there was a time, before all this industrial horror, when people lived in the natural state. They never encountered anything artificial, because there was no such thing in the world. Now, a great number of cultures have a "Golden Age" myth, that distant time when everything was so much better - more pure, somehow, before things became corrupted into their present regrettable state. The Garden of Eden is the aspect this takes in the Christian religion, but you find similar things in many other traditions. (Interestingly, this often takes the form of an ancient age when humans spoke directly with the gods, in whatever form they took, which is one of the things that led Julian Jaynes to his fascinating, although probably unprovable hypotheses in The Origin of Consciousness in the Breakdown of the Bicameral Mind).
This Prelapsarian strain of thinking permeates the all-natural chemical-free worldview. There was a time when food and human health were so much better, and industrial civilization has messed it all up. We're surrounded by man-made toxins and horrible substitutes for real food, and we've lost the true path. It's no wonder that there's all this cancer and diabetes and autism and everything: no one ever used to get those things. Note the followup to this line of thought: someone did this to us. The more hard-core believers in this worldview are actually furious at what they see as the casual, deliberate poisoning of the entire population. The forces of evil, indeed.
And there are enough small reinforcing bars of truth to make all of this hold together quite well. There's no doubt that industrial poisons have sickened vast numbers of people in the past: mercury is just the first one that's come to mind. (I'm tempted to point out that mercury and its salts, by the standards of the cosmetics and supplements industries, are most certainly some of those all-natural minerals, but let that pass for now). We've learned more about waste disposal, occupational exposure, and what can go into food, but there have been horrible incidents that live on vividly in the imagination. And civilization itself didn't necessarily go about increasing health and lifespan for quite a while, as the statistics assembled in Gregory Clark's A Farewell to Alms make clear. In fact, for centuries, living in cities was associated with shorter lifespans and higher mortality. We've turned a lot of corners, but it's been comparatively recently.
And on the topic of "comparatively recently", there's one more factor at work that I'd like to bring up. The "chemical free" view of the world has the virtue of simplicity (and indeed, sees simplicity as a virtue itself). Want to stay healthy? Simple. Don't eat things with chemicals in them. Want to know if something is the right thing to eat, drink, wear, etc.? Simple: is it natural or not? This is another thing that makes some people who argue for this view so vehement - it's not hard, it's right in front of you, and why can't you see the right way of living when it's so, so. . .simple? Arguing against that, from a scientific point of view, puts a person at several disadvantages. You necessarily have to come in with all these complications and qualifying statements, trying to show how things are actually different than they look. That sounds like more special pleading, for one thing, and it's especially ineffective against a way of thinking that often leans toward thinking that the more direct, simple, and obvious something is, the more likely it is to be correct.
That's actually the default way of human thinking, when you get down to it, which is the problem. Science, and the scientific worldview, are unnatural things, and I don't mean that just in the whole-grain no-additives sense of "natural". I mean that they do not come to most people as a normal consequence of their experience and habits of thought. A bit of it does: "Hey, every time I do X, Y seems to happen". But where that line of thinking takes you starts to feel very odd very quickly. You start finding out that the physical world is a lot more complicated than it looks, that "after" does not necessarily mean "because", and that all rules of thumb break down eventually (and usually without warning). You find that math, of all things, seems to be the language that the universe is written in (or at least a very good approximation to it), and that's not exactly an obvious concept, either. You find that many of the most important things in that physical world are invisible to our senses, and not necessarily in a reassuring way, or in a way that even makes much sense at all at first. (Magical explanations of invisible forces at least follow human intuitions). It's no wonder that scientific thinking took such a long, long time to ever catch on in human history. I still sometimes think that it's only tolerated because it brings results.
So there are plenty of reasons why it's hard to effectively argue against the all-natural chemical-free worldview. You're asking your audience to accept a number of things that don't make much sense to them, and what's worse, many of these things look like rhetorical tricks at best and active (even actively evil) attempts to mislead them at worst. And all in the service of something that many of them are predisposed to regard as suspicious even from the start. It's uphill all the way.
Category: General Scientific News | Snake Oil | Toxicology
June 23, 2014
Here's one of those "Drug Discovery of. . .the. . .Future-ure-ure-ure" articles in the popular press. (I need a reverb chamber to make that work properly). At The Atlantic, they're talking with "medical futurists" and coming up with this:
The idea is to combine big data and computer simulations—the kind an engineer might use to make a virtual prototype of a new kind of airplane—to figure out not just what's wrong with you but to predict which course of treatment is best for you. That's the focus of Dassault Systèmes, a French software company that's using broad datasets to create cell-level simulations for all different kinds of patients. In other words, by modeling what has happened to patients like you in previous cases, doctors can better understand what might happen if they try certain treatments for you—taking into consideration your age, your weight, your gender, your blood type, your race, your symptoms, any number of other biomarkers. And we're talking about a level of precision that goes way beyond textbooks and case studies.
I'm very much of two minds about this sort of thing. On the one hand, the people at Dassault are not fools. They see an opportunity here, and they think that they might have a realistic chance at selling something useful. And it's absolutely true that this is, broadly, the direction in which medicine is heading. As we learn more about biomarkers and individual biochemistry, we will indeed be trying to zero in on single-patient variations.
But on that ever-present other hand, I don't think that you want to make anyone think that this is just around the corner, because it's not. It's wildly difficult to do this sort of thing, as many have discovered at great expense, and our level of ignorance about human biochemistry is a constant problem. And while tailoring individual patient's therapies with known drugs is hard enough, it gets really tricky when you talk about evaluating new drugs in the first place:
Charlès and his colleagues believe that a shift to virtual clinical trials—that is, testing new medicines and devices using computer models before or instead of trials in human patients—could make new treatments available more quickly and cheaply. "A new drug, a successful drug, takes 10 to 12 years to develop and over $1 billion in expenses," said Max Carnecchia, president of the software company Accelrys, which Dassault Systèmes recently acquired. "But when it is approved by FDA or other government bodies, typically less than 50 percent of patients respond to that therapy or drug." No treatment is one-size-fits-all, so why spend all that money on a single approach?
Carnecchia calls the shift toward algorithmic clinical trials a "revolution in drug discovery" that will allow for many quick and low-cost simulations based on an endless number of individual cellular models. "Those models start to inform and direct and focus the kinds of clinical trials that have historically been the basis for drug discovery," Carnecchia told me. "There's the benefit to drug companies from reduction of cost, but more importantly being able to get these therapies out into the market—whether that's saving lives or just improving human health—in such a way where you start to know ahead of time whether that patient will actually respond to that therapy."
Speed the day. The cost of clinical trials, coupled with their low success rate, is eating us alive in this business (and it's getting worse every year). This is just the sort of thing that could rescue us from the walls that are closing in more tightly all the time. But this talk of shifts and revolutions makes it sound as if this sort of thing is happening right now, which it isn't. No such simulated clinical trial, one that could serve as the basis for a drug approval, is anywhere near even being proposed. How long before one is, then? If things go really swimmingly, I'd say 20 to 25 years from now, personally, but I'd be glad to hear other estimates.
To be fair, the article does go on to mention something like this, but it just says that "it may be a while" before said revolution happens. And you get the impression that what's most needed is some sort of "cultural shift in medicine toward openness and resource sharing". I don't know. . .I find that when people call for big cultural shifts, they're sometimes diverting attention (even their own attention) from the harder parts of the problem under discussion. Gosh, we'd have this going in no time if people would just open up and change their old-fashioned ways! But in this case, I still don't see that as the rate-limiting step at all. Pouring on the openness and sharing probably wouldn't hurt a bit in the quest for understanding human drug responses and individual toxicology, but it's not going to suddenly open up any blocked-up floodgates, either. We don't know enough. Pooling our current ignorance can only take us so far.
Remember there are hundreds of billions of dollars waiting to be picked up off the ground by anyone who can do these things. It's not like there are no incentives to find ways to make clinical trials faster and cheaper. Anything that gives the impression that there's this one factor (lack of cooperation, too much regulation, Evil Pharma Executives, what have you) holding us back from the new era, well. . .that just might be an oversimplified view of the situation.
Category: Clinical Trials | In Silico | Regulatory Affairs | Toxicology
June 11, 2014
I noticed some links to this post showing up on my Twitter feed over the weekend, and I wanted to be sure to mention it. There's a recipe for "all-natural" herbicide that goes around Facebook, etc., where you mix salt, vinegar, and a bit of soap, so Andrew Kniss sits down and does some basic toxicology versus glyphosate. The salt-and-vinegar mix will work, it seems, especially on small weeds, but it's more persistent in the soil and its ingredients have higher mammalian toxicity (which I'm pretty sure is the opposite of what people expect).
I hope this one makes a few people think, but I always wonder. The sorts of people who need this most are the ones least likely to read it, and the ones most likely to immediately discount it as "Monsanto shill propaganda" or the like. I had email like that last time I wrote about glyphosate (the second link above) - people asking me how much Monsanto was paying me and so on. And these people are also not interested in hearing about any LD50 data (which they probably assume is all faked, anyway). They're ready to tell you about long-term cancer and everything else (not that there's any evidence for that, either).
Going after this sort of thing is a duty, but an endless chore. I was also sent a link to an interview with some actress where she talks about her all-natural beauty regimen - so pure and green and holistic, and so very expensive, from what I could see. One of the things she advocated was clay. No, not for your skin. To eat it. It has, she explained, "negative charge" so it picks up "negative isotopes". Yeah boy. And of course, it also picks up all those heavy metal toxins your body is swimming in, which is why a friend of hers told her that she tried the clay, and like, when she went to the bathroom it like, smelled like metal. I am not making any of this up. A few comments on that site, gratifyingly, wondered if there was any actual evidence for that clay stuff, but most of them were just having spasms of delight over the whole thing (and trading obscure, expensive sources for the all-natural lifestyle). So there's a lot of catching up to do.
+ TrackBacks (0) | Category: Chemical News | Snake Oil | Toxicology
March 25, 2014
Every medicinal chemist fears and respects the liver. That's where our drugs go to die, or at least to be severely tested by that organ's array of powerful metabolizing enzymes. Getting a read on a drug candidate's hepatic stability is a crucial part of drug development, but there's an even bigger prize out there: predicting outright liver toxicity. That, when it happens, is very bad news indeed, and can torpedo a clinical compound that seemed to be doing just fine - up until then.
Unfortunately, getting a handle on liver tox has been difficult, even with such strong motivation. It's a tough problem. And given that most drugs are not hepatotoxic, most of the time, any new assay that overpredicts liver tox might be even worse than no assay at all. There's a paper in the latest Nature Biotechnology, though, that looks promising.
What the authors (from Stanford and Toronto) are doing is trying to step back to the early mechanism of liver damage. One hypothesis has been that the production of reactive oxygen species (ROS) inside hepatic cells is the initial signal of trouble. ROS are known to damage biomolecules, of course. But more subtly, they're also known to be involved in a number of pathways used to sense that cellular damage (and in that capacity, seem to be key players in inducing the beneficial effects of exercise, among other things). Aerobic cells have had to deal with the downsides of oxygen for so long that they've learned to make the most of it.
This work (building on some previous studies from the same group) uses polymeric nanoparticles. They're semiconductors, and hooked up to be part of a fluorescence or chemiluminescence readout. (They use FRET for peroxynitrite and hypochlorite detection, more indicative of mitochondrial toxicity, and CRET for hydrogen peroxide, more indicative of Phase I metabolic toxicity). The particles are galactosylated to send them towards the liver cells in vivo, confirmed by necropsy and by confocal imaging. The assay system seemed to work well by itself, and in mouse serum, so they dosed it into mice and looked for what happened when the animals were given toxic doses of either acetaminophen or isoniazid (both well-known hepatotox compounds at high levels). And it seems to work pretty well - they could image both the fluorescence and the chemiluminescence across a time course, and the dose/responses make sense. It looks like they're picking up nanomolar to micromolar levels of reactive species. They could also show the expected rescue of the acetaminophen toxicity with some known agents (like GSH), but could also see differences between them, both in the magnitude of the effects and their time courses as well.
The chemiluminescent detection has been done before, as has the FRET one, but this one seems to be more convenient to dose, and having both ROS detection systems going at once is nice, too. One hopes that this sort of thing really can provide a way to get a solid in vivo read on hepatotoxicity, because we sure need one. Toxicologists tend to be a conservative bunch, with good reason, so don't look for this to revolutionize the field by the end of the year or anything. But there's a lot of promise here.
There are some things to look out for, though. For one, since these are necessarily being done in rodents, there will be differences in metabolism that will have to be taken into account, and some of those can be rather large. Not everything that injures a mouse liver will do so in humans, and vice versa. It's also worth remembering that hepatotoxicity is a major problem with marketed drugs. That's going to be a much tougher problem to deal with, because some of these cases are due to overdose, some to drug-drug interactions, some to drug-alcohol interactions, and some to factors that no one's been able to pin down. One hopes, though, that if more drugs come through showing a clean liver profile, these problems might be ameliorated a bit.
+ TrackBacks (0) | Category: Drug Assays | Drug Development | Pharmacokinetics | Toxicology
February 11, 2014
There's been a report on the toxicity of various pesticides in the literature suggesting that they're far more toxic to human cells than had been thought. My eyebrows went up a bit when I heard this, because these sorts of assays had been done many times before. Then I realized that this was another paper from the Séralini group, and unfortunately, that alone is enough to account for the variance.
Update: commenters on this post have noted that the cell culture conditions used in the paper are rather unusual. Specifically, they're serum-free during the testing period, which puts the cells under stress to begin with. There's also the general problem, which others have brought up, about what it means to dispense these things directly onto cell cultures in diluted DMSO, since that's rather far from how they're going to be presented in the real world. Cell assays get run like that in the drug industry, to be sure, but you've got to be very careful drawing toxicological or other whole-animal conclusions from them. And we already have whole-animal studies on these formulations, don't we? I mean, juiced broccoli straight from the organic farmer's market might well have similar effects under these conditions.
Here's a story from Science with more background. Séralini is the guy who made headlines a couple of years ago with another report that genetically modified corn caused tumors in rodents, but that one was so poorly run and poorly controlled that its conclusions (which have not been seen in any other study) cannot be taken seriously. That's Séralini's problem right there: from all appearances, he's a passionate advocate for his positions, and he appears to be ready to go with whatever results line up with his beliefs. This is human nature, for sure, but science is about trying to work past those parts of human nature. The key is to keep the curious, inquisitive side, and correct for the confirmation-bias I-know-I'm-right side. At this point, even if Séralini were to discover something real (and really worth taking seriously), it would have a hard time gaining acceptance, because his previous papers have been so unreliable and over-the-top.
I'm not the only person who thinks that. An editor of the journal this latest Séralini paper appeared in has actually resigned because it got published:
When Ralf Reski read the latest paper from controversial French biologist Gilles-Eric Séralini, he quickly decided he wanted nothing to do with it. Séralini’s report in BioMed Research International describes how pesticides kill cultured human cells, with the hair-raising conclusion that pesticides may be vastly more toxic than assumed by regulatory authorities. Some scientists are criticizing the findings as neither surprising nor significant—but they have touched off a firestorm, with environmental groups calling for changes in how pesticides are regulated. That was too much for Reski. Within hours of reading the paper last week, the plant scientist at the University of Freiburg in Germany resigned as an editor of the journal and asked for his name to be removed from its website. "I do not want to be connected to a journal that provides [Séralini] a forum for such kind of agitation," he wrote in his resignation e-mail to the publisher, Hindawi Publishing Corporation.
Should pesticide toxicity be a subject of investigation? Absolutely. Should people be alert to assays that should have been run but haven't been? Definitely. Are there things that we don't know about pesticide exposure that we should? I would certainly think so. But Séralini's history makes him (scientifically) one of the least effective people to be working on these questions. As a headline-grabber, though, he's pretty efficient. Which I suspect is the real point. If you're sure you're right, any weapon you can pick up is a good one.
+ TrackBacks (0) | Category: The Scientific Literature | Toxicology
January 30, 2014
This morning I heard reports of formaldehyde being found in Charleston, West Virginia water samples as a result of the recent chemical spill there. My first thought, as a chemist, was "You know, that doesn't make any sense". A closer look confirmed that view, and led me to even more dubious things about this news story. Read on - there's some chemistry for a few paragraphs, and then near the end we get to the eyebrow-raising stuff.
The compound that spilled was (4-methylcyclohexane)methanol, abbreviated as 4-MCHM. That's its structure over there.
For the nonchemists in the audience, here's a chance to show how chemical nomenclature works. Those lines represent bonds between atoms, and if the atom isn't labeled with its own letter, it's a carbon (this compound has only one labeled atom, that O for oxygen). These sorts of carbons take four bonds each, and that means that there are a number of hydrogens bonded to them that aren't shown. You'd add one, two, or three hydrogens as needed to take each one up to four bonds.
The six-membered ring in the middle is "cyclohexane" in organic chemistry lingo. You'll note two things coming off it, at opposite ends of the ring. The small branch is a methyl group (one carbon), and the other one is a methyl group substituted with an alcohol (OH). The one-carbon alcohol compound (CH3OH) is methanol, and the rules of chemical naming say that the "methanol-like" part of this structure takes priority, so it's named as a methanol molecule with a ring stuck to its carbon. And that ring has another methyl group, which means that its position needs to be specified. The ring carbon that has the "methanol" gets numbered as #1 (priority again), so the one with the methyl group, counting over, is #4. So this compound's full name is (4-methylcyclohexane)methanol.
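One molecule, in other words, not a mixture. To make the atom-counting concrete, here's a back-of-the-envelope sketch (my own illustration, not from the spill reports): reading the structure as described above gives eight carbons, sixteen hydrogens, and one oxygen, i.e. C8H16O, and standard atomic masses give the molecular weight.

```python
# Back-of-the-envelope molecular weight for (4-methylcyclohexane)methanol.
# Atom counts are read off the structure as described in the text; atomic
# masses are standard IUPAC values. Purely illustrative.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

# ring (6 C) + ring methyl (1 C) + CH2OH carbon (1 C) = 8 C;
# 10 ring H + 3 methyl H + 2 CH2 H + 1 OH = 16 H; one O.
mchm_atoms = {"C": 8, "H": 16, "O": 1}

mw = sum(ATOMIC_MASS[element] * count for element, count in mchm_atoms.items())
print(f"4-MCHM is C8H16O, MW ~ {mw:.1f} g/mol")  # roughly 128.2
```

Note that the single oxygen is locked into that CH2OH group by a carbon-carbon bond to the ring, which matters for what comes next.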
I went into that naming detail because it turns out to be important. This spill, needless to say, was a terrible thing that never should have happened. Dumping a huge load of industrial solvent into a river is a crime in both the legal and moral senses of the word. Early indications are that negligence had a role in the accident, which I can easily believe, and if so, I hope that those responsible are prosecuted, both for justice to be served and as a warning to others. Handling industrial chemicals involves a great deal of responsibility, and as a working chemist it pisses me off to see people doing it so poorly. But this accident, like any news story involving any sort of chemistry, also manages to show how little anyone outside the field understands anything about chemicals at all.
I say that because among the many lawsuits being filed, there are some that show (thanks, Chemjobber!) that the lawyers appear to believe that the chemical spill was a mixture of 4-methylcyclohexane and methanol. Not so. This is a misreading of the name, a mistake that a non-chemist might make because the rest of the English language doesn't usually build up nouns the way organic chemistry does. Chemical nomenclature is way too logical and cut-and-dried to be anything like a natural language; you really can draw a complex compound's structure just by reading its name closely enough. This error is a little like deciding that a hairdryer must be a device made partly out of hair.
I'm not exaggerating. The court filing, by the law firm of Thompson and Barney, says explicitly:
30. The combination chemical 4-MCHM is artificially created by combining methylclyclohexane (sic) with methanol.
31. Two component parts of 4-MCHM are methylcyclohexane and methanol which are both known dangerous and toxic chemicals that can cause latent dread disease such as cancer.
Sure thing, guys, just like the two component parts of dogwood trees are dogs and wood. Chemically, this makes no sense whatsoever. Now, it's reasonable to ask if 4-MCHM can chemically degrade to methanol and 4-methylcyclohexane. Without going into too much detail, the answer is "No". You don't get to break carbon-carbon bonds that way, not without a lot of energy. If you ran the chemical (at high temperature) through some sort of catalytic cracking reactor at an oil refinery, you might be able to get something like that to happen (although I'd expect other things as well, probably all at the same time), but otherwise, no. For the same sorts of reasons, you're not going to be able to get formaldehyde out of this compound, either, not without similar conditions. Air and sunlight and water aren't going to do it, and if bacteria and fungi metabolize it, I'd expect things like (4-methylcyclohexane)carboxaldehyde and (4-methylcyclohexane)carboxylic acid, among others. I would not expect them to break off that single-carbon alcohol as formaldehyde.
So where does all this talk of formaldehyde come from? Well, one way that formaldehyde shows up is from oxidation of methanol, as shown in that reaction (this time I've drawn in all the hydrogens). This is, in fact, one of the reasons that methanol is toxic. In the body, it gets oxidized to formaldehyde, and that gets oxidized right away to formic acid, which shuts down an important enzyme. Exposure to formaldehyde itself is a different problem. It's so reactive that most cancers associated with exposure to it are in the upper respiratory tract; it doesn't get any further.
As that methanol oxidation reaction pathway shows, the body actually has ways of dealing with formaldehyde exposure, up to a point. In fact, it's found at low levels (around 20 to 30 nanograms/milliliter) in things like tomatoes and oranges, so we can assume that these exposure levels are easily handled. I am not aware of any environmental regulations on human exposure to orange juice or freshly cut tomatoes. So how much formaldehyde did Dr. Scott Simonton find in his Charleston water sample? Just over 30 nanograms per milliliter. Slightly above the tomato-juice level (27 ng/mL). For reference, the lowest amount that can be detected is about 6 ng/mL. Update: and the amount of formaldehyde in normal human blood is about 1 microgram/mL, which is over thirty times the levels that Simonton says he found in his water samples. This is produced by normal human metabolism (enzymatic removal of methyl groups and other reactions). Everyone has it. And another update: the amount of formaldehyde in normal human saliva can easily be one thousand times that in Simonton's water samples, especially in people who smoke or have cavities. If you went thousands of miles away from this chemical spill, found an untouched wilderness and had one of its natives spit in a collection vial, you'd find a higher concentration of formaldehyde.
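Putting the post's numbers side by side makes the point plain (figures as cited above; the saliva value is the "one thousand times" high-end case, so this is a sanity check, not new data):

```python
# Formaldehyde levels cited in the post, all converted to ng/mL for comparison.
levels_ng_per_ml = {
    "detection limit": 6,
    "fresh tomato juice": 27,
    "Charleston water sample": 30,      # Simonton's "just over 30 ng/mL"
    "normal human blood": 1000,         # ~1 microgram/mL, ordinary metabolism
    "saliva (smoker, high end)": 30000, # ~1000x the water sample, per the post
}

water = levels_ng_per_ml["Charleston water sample"]
for name, level in levels_ng_per_ml.items():
    print(f"{name:>26}: {level:>6} ng/mL  ({level / water:6.1f}x water sample)")
```

The "alarming" water number sits a rounding error above tomato juice and a factor of thirty-plus below what's circulating in everyone's bloodstream already.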
But Simonton is a West Virginia water quality official, is he not? Well, not in this capacity. As this story shows, he is being paid in this matter by the law firm of Thompson and Barney to do water analysis. Yes, that's the same law firm that thinks that 4-MCHM is a mixture with methanol in it. And the water sample that he obtained was from the Vandalia Grille in Charleston, the owners of which are defendants in that Thompson and Barney lawsuit that Chemjobber found.
So let me state my opinion: this is a load of crap. The amounts of formaldehyde that Dr. Simonton states he found are within the range of ozonated drinking water as it is, and just above those of fresh tomato juice. These are levels that have never been shown to be harmful in humans. His statements about cancer and other harm coming to West Virginia residents seem to me to be irresponsible fear-mongering. The sort of irresponsible fear-mongering that someone might do if they're being paid by lawyers who don't understand any chemistry and are interested in whipping up as much panic as they can. Just my freely offered opinions. Do your own research and see what you think.
Update: I see that actual West Virginia public health officials agree.
Another update: I've had people point out that the mixture that spilled may have contained up to 1% methanol. But see this comment for why this probably doesn't have any bearing on the formaldehyde issue. Update, Jan 31: Here's the MSDS for the "crude MCHM" that was spilled. The other main constituent, (4-methoxymethylcyclohexane)methanol, is also unlikely to produce formaldehyde, for the same reasons given above. The fact remains that the levels reported (and sensationalized) by Dr. Simonton are negligible by any standard.
+ TrackBacks (0) | Category: Chemical News | Current Events | Press Coverage | Toxicology
December 5, 2013
I've been meaning to link to this piece by Lauren Wolf in C&E News on the connections between Parkinson's disease and environmental exposure to mitochondrial toxins. (PDF version available here). Links between environmental toxins and disease are drawn all the time, of course, sometimes with very good reason, but often when there seems to be little evidence. In this case, though, since we have the incontrovertible example of MPTP to work from, things have to be taken seriously. Wolf's article is long, detailed, and covers a lot of ground.
The conclusion seems to be that some people may well be genetically more susceptible to such exposures. A lot of people with Parkinson's have never really had much pesticide exposure, and a lot of people who've worked with pesticides never show any signs of Parkinson's. But there could well be a vulnerable population that bridges these two.
+ TrackBacks (0) | Category: The Central Nervous System | Toxicology
October 30, 2013
The topic of the various "accelerated review" options at the FDA has come up here before. Last month JAMA ran an opinion piece suggesting that the agency has gone too far. (Here's the Pharmalot take on the article). This, of course, is the bind the agency is always in. Similar to the narrow window with an anticoagulant drug (preventing clots versus encouraging hemorrhages), the FDA is constantly getting complaints that they're stifling innovation by setting regulatory barriers too high, and that they're killing patients by letting too many things through. It's an unwinnable situation - under what conditions could neither camp feel wronged?
The FDA defended its review procedures at the time, but now (according to BioCentury Extra) a more emphatic statement has been made:
FDA's Richard Pazdur, director of CDER's Office of Hematology & Oncology Products (OHOP), made it clear at an FDA briefing on personalized medicine on Monday that the agency is willing to take risks to get drugs for serious and life-threatening diseases to patients quickly. Pazdur said, "If we are taking appropriate risks in accelerated approval, some drugs will come off market, some will have restricted labeling." If that doesn't ever happen, "we probably aren't taking the appropriate risks," he said.
It reminds me of the advice that if your manuscripts are all getting accepted, then you aren't sending them to good enough journals. I agree with Pazdur on this one, and I wish that this attitude was more widely circulated and understood. Every new drug is an experimental medication. No clinical trial is ever going to tell us as much as we want to know about how a drug will perform in the real world, because there is no substitute and no model for the real world. (Anyone remember the old Steven Wright joke about how he'd just bought a map of the US - actual size? Down in the corner, it says "One mile equals one mile".)
+ TrackBacks (0) | Category: Clinical Trials | Regulatory Affairs | Toxicology
October 18, 2013
Ariad's Iclusig (ponatinib) is in even more trouble than it looked like, and that was already a lot. The company announced earlier this morning that its Phase III trial comparing the drug to Gleevec (imatinib) is not just on hold - it's been stopped, and patients are being taken off the drug. That can't be good news for the drug's current approved status, either:
Iclusig is commercially available in the U.S. and EU for patients with resistant or intolerant CML and Philadelphia-chromosome positive acute lymphoblastic leukemia. ARIAD continues to work with health authorities to make appropriate changes to the Iclusig product labeling to reflect the recently announced safety findings from the pivotal PACE trial that was the basis of its marketing approvals.
If the approval trial has now shown such unfavorable safety, is approval still warranted at all? That's what investors are wondering, and I would imagine that the oncologists who would be prescribing Iclusig are wondering the same thing. This is bad news for everyone. There are patients who very much need a drug like this for resistant CML, and Ariad (needless to say) needs to be selling it. I believe that the company is putting up a new building, not far from where I work, and you have to wonder if there are some clauses in the contract that are going to need to be invoked. Do sudden adverse events with your main commercial product count as force majeure?
+ TrackBacks (0) | Category: Cancer | Regulatory Affairs | Toxicology
October 9, 2013
Just a note, in case any investors didn't realize it: no, drugs (and a drug companies) are not out of the woods after a compound has been approved and is on the market. Take a look at what's happening to Ariad and their BCR-ABL compound Iclusig (ponatinib). This is used to treat patients that have become resistant to Gleevec, and it's a very big deal for both those patients and for Ariad as a company.
But the percentage of patients on the drug showing serious complications from blood clots has been rising, and that's prompted a number of moves: enrollment in further clinical trials is on hold, dosages are being lowered for current patients, and the product's label is being changed to add warnings of cardiovascular effects. If you're wondering how this affects Ariad as a whole, well, the stock is down 66% in premarket trading as I write. . .
+ TrackBacks (0) | Category: Regulatory Affairs | Toxicology
September 25, 2013
That didn't take long. Just a few days after Roger Perlmutter at Merck had praised the team that developed Bridion (sugammadex), the FDA turned it down for the second time. The FDA seems to be worried about hypersensitivity reactions to the drug - that was the grounds on which they rejected it in 2008. Merck ran another study to address this, but the agency apparently is now concerned about how that trial was run. What we know, according to FiercePharma, is that they "needed to assess an inspection of a clinical trial site conducting the hypersensitivity study". Frustratingly for Merck, their application was approved in the EU back in that 2008 submission period.
It's an odd compound, and it had a nomination in the "Ugliest Drug Candidate" competition I had here a while back. That's because it works by a very unusual mechanism. It's there to reverse the effects of rocuronium, a neuromuscular blockade agent used in anaesthesia. Sugammadex is a cyclodextrin derivative, a big cyclic polysaccharide of the sort that has been used to encapsulate many compounds in their central cavities. It's the mechanism behind the odor-controlling Febreze spray (interestingly, I've read that when that product was introduced, its original formulation failed in the market because it had no scent of its own, and consumers weren't ready for something with no smell that nonetheless decreased other odors). The illustration is from the Wikipedia article on sugammadex, and it shows very well how it's designed to bind rocuronium tightly in a way that it can no longer act at the acetylcholine receptor. Hats off to the Organon folks in Scotland who thought of this - pity that all of them must be long gone, isn't it?
You see, this is one of the drugs from Schering-Plough that Merck took up when they bought the company, but it was one of the compounds from Organon that Schering-Plough took up when they bought them. (How much patent life can this thing have left by now?) By the way, does anyone still remember the ridiculous setup by which Schering-Plough was supposed to be taking over Merck? Did all that maneuvering accomplish anything at all in the end? At any rate, Merck really doesn't seem to have gotten a lot out of the deal, and this latest rejection doesn't make it look any better. Not all of those problems were (or could have been) evident at the time, but enough of them were to make a person wonder. I'm willing to nominate it as "Most Pointless Big Pharma Merger", and would be glad to hear the case for other contenders.
+ TrackBacks (0) | Category: Business and Markets | Clinical Trials | Pharmacokinetics | Regulatory Affairs | Toxicology
June 24, 2013
Well, as you can see from the graphic, my blast against the "Eight Toxic Foods" stuff picked up a lot of attention over the weekend, which I'm glad to see. A lot of this came from it being handed around Facebook, but Fark, Reddit, Popular Science's website and others all brought in plenty of traffic as well.
I've had a lot of requests for more articles like that one, but they'll be an occasional feature around here. There's certainly enough material to fill a blog that whacks away at things like the original BuzzFeed piece, and there are quite a few bloggers who've made that their turf. I don't really want to make it my daily diet, though - for one thing, there is just so much craziness out there that you start to wonder - rather quickly - if you'll ever see the end of it. I'm not sure if I can stand reading it day after day, either, just as I'm not sure that I could go on day after day writing about things that drive me crazy. But I definitely plan to keep on taking shots every so often at prominent stuff that mangles chemistry and/or drug research as part of its argument. In this latest case, it was the roaring success of the BuzzFeed piece coupled with its chirpy, confident, and bizarrely wrong takes on chemistry and toxicology that set me off.
I spent the weekend, by the way, being called a paid shill for Monsanto, DuPont, and all the other evil monied interests. It made a refreshing change from being called a paid shill for Big Pharma. Going straight to that accusation, by the way (or using it as if that's all that needs to be done) does not say a lot for the people who advance it. There's not much persuasive force behind "I don't like this, therefore the only reason anyone could be advocating it is that they've been paid to do so". What's also interesting is how some of these people act as if this is some newly discovered counterattack, that no one in the history of argument has ever thought of accusing an opponent of bad faith. What else has someone like this not come across, or not bothered to notice?
There's also a strain of Manicheanism running through a lot of the more worked-up responses: Good vs. Evil, 100% one way or 100% the other. If I don't think that potassium bromate in flour is that big a deal, then I must think that chemical waste drums should be poured into lakes. If I don't think that 2 ppb arsenic in chicken is killing us, then I must want to feed spoonfuls of the pure stuff to infants. And so on.
Not so. As it turns out, the flour we use at home for baking (King Arthur) is not bromated, although I didn't pick it for that reason. Not being a professional baker, I doubt if I could notice a difference one way or another due to the bromate. And while (true to my Arkansas roots) I do drink a Mountain Dew every so often, I really do think that drinking gallons of the stuff day after day would be a very bad idea. The brominated vegetable oil would not be the first of your worries, but (as the medical literature shows) it could indeed catch up with some people.
There is such a thing as overloading the body's clearance mechanisms (as any medicinal chemist is well aware), and that level is different with every substance. Some things get blasted out of the body so quickly by the liver and the kidneys that you never even notice them, even at rather high doses. Others (acetaminophen is the classic example) are cleared out well under normal conditions, but can be real trouble if the usual mechanism is impaired by something else. And others (such as some radioactive isotopes, say) are actively accumulated in the body as well as being cleared from it, and therefore can have extremely low tolerance levels indeed. Every case is different; every case needs its own data and its own decision.
I am planning a follow-up post, though, based on one of the reasonable counterarguments that's come up: why are some of these ingredients banned in other countries? What reasons are behind those regulatory decisions, and why did the FDA come to different conclusions? That's worth going into details about, and I will.
+ TrackBacks (0) | Category: Snake Oil | Toxicology
June 21, 2013
Update: You'll notice in this post that I refer to some sites that the original BuzzFeed article I'm complaining about sends people to, often pointing out that these didn't actually support the wilder claims it's making. Well, the folks at BuzzFeed have dealt with this by taking down the links (!) The article now says: "Some studies linked in the original version of this article were concerning unrelated issues. They have been replaced with information directly from the book Rich Food, Poor Food". But as you'll see below, the studies weren't unrelated at all. So when you read about links to the American Cancer Association or NPR, well, all I can say is that they used to be there, until someone apparently realized how embarrassing they were.
Many people who read this blog are chemists. Those who aren't often come from other branches of the sciences, and if they don't, it's safe to say that they're at least interested in science (or they probably don't hang around very long!) It's difficult, if you live and work in this sort of environment, to keep in mind what people are willing to believe about chemistry.
But that's what we have the internet for. Many science-oriented bloggers have taken on what's been called "chemophobia", and they've done some great work tearing into some really uninformed stuff out there. But nonsense does not obey any conservation law. It keeps on coming. It's always been in long supply, and it looks like it always will be.
That doesn't mean that we just have to sit back and let it wash over us, though. I've been sent this link in the last few days, a popular item on BuzzFeed with the BuzzFeedy headline of "Eight Foods That We Eat in The US That Are Banned in Other Countries". When I saw that title, I found it unpromising. In a world that eats everything that can't get away fast enough, what possible foods could we have all to ourselves here in the States? A quick glance was enough: we're not talking about foods here - we're talking about (brace yourselves) chemicals.
This piece really is an education. Not about food, or about chemistry - on the contrary, reading it for those purposes will make you noticeably less intelligent than you were before, and consider that a fair warning. The educational part is in the "What a fool believes" category. Make no mistake: on the evidence of this article, its author is indeed a fool, and has apparently never yet met a claim about chemicals or nutrition that was too idiotic to swallow. If BuzzFeed's statistics are to be believed (good question, there), this crap has already racked up a million views. Someone who knows some chemistry needs to make a start at pointing out the serial stupidities in it, and this time, I'm going to answer the call. So here goes, in order.
Number One: Artificial Dyes. Here's what the article has to say about 'em:
Artificial dyes are made from chemicals derived from PETROLEUM, which is also used to make gasoline, diesel fuel, asphalt, and TAR! Artificial dyes have been linked to brain cancer, nerve-cell deterioration, and hyperactivity, just to name a few.
Emphasis is in the original, of course. How could it not lapse into all-caps? In the pre-internet days, this sort of thing was written in green ink all around the margins of crumpled shutoff notices from the power company, but these days we have to make do with HTML. Let's take this one a sentence at a time.
It is true, in fact, that many artificial dyes are made from chemicals derived from petroleum. That, folks, is because everything (edible or not) is made out of chemicals, and an awful lot of man-made chemicals are derived from petroleum. It's one of the major chemical feedstocks of the world. So why stop at artificial dyes? The ink on the flyer from the natural-foods co-op is made from chemicals derived from petroleum. The wax coating the paper wrapped around that really good croissant at that little bakery you know about is derived from petroleum.
Now, it's true that more things you don't eat can be traced back to petroleum feedstocks than can things you do eat. That's because it's almost always cheaper to grow stuff than to synthesize it. Synthesized compounds, when they're used in food, are often things that are effective in small amounts, because they're so expensive. And so it is with artificial dyes - well, outside of red velvet cake, I guess. People see the bright colors in cake icing and sugary cereals and figure that the stuff must be glopped on like paint, but paint doesn't have very much dye or pigment in it, either (watch them mix it up down at the hardware store sometime).
And as for artificial colors causing "brain cancer, nerve-cell deterioration, and hyperactivity", well, these assertions range from "unproven" all the way down to "bullshit". Hyperactivity sensitivities to food dyes are an active area of research, but after decades of work, the situation is still unclear. And brain cancer? This seems to go back to studies in the 1980s with Blue #2, where rats were fed the dye over a long period in much larger concentrations (up to 2% of their total food intake) than even the most dedicated junk-food eater could encounter. Gliomas were seen in the male rats, but with no dose-response, and at levels consistent with historical controls in the particular rat strain. No one has ever been able to find any real-world connection. Note that glioma rates increased in the 1970s and 1980s as diagnostic imaging improved, but have fallen steadily since then. The age-adjusted incidence rates of almost all forms of cancer are falling, by the way, not that you'd know that from most of the coverage on the subject.
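For a sense of scale, here's a quick back-of-the-envelope sketch of what that 2% figure would mean translated to a human-sized diet. The 2 kg/day total food intake is my own illustrative round number, not a figure from the studies:

```python
# The rat studies fed Blue #2 at up to 2% of total food intake.
# Translating that fraction to a human diet (the 2 kg/day of food
# is an assumed round number, not from the studies themselves):
dye_fraction = 0.02      # 2% of everything eaten, every day
food_kg_per_day = 2.0    # assumed total daily food intake

dye_g_per_day = dye_fraction * food_kg_per_day * 1000
print(f"Equivalent intake: {dye_g_per_day:.0f} g of pure dye per day")
# On the order of 40 g of dye daily, for a lifetime - nothing like
# what even the most colorful junk-food diet could deliver.
```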
Number Two: Olestra
This, of course, is Procter & Gamble's attempted non-calorific fat substitute. I'm not going to spend much time on this, because little or nothing is actually made with it any more. Olestra was a major flop for P&G; the only things (as far as I can tell) that still contain it are some fat-free potato chips. It does indeed interfere with the absorption of fat-soluble vitamins, but potato chips are not a very good source of vitamins to start with. And vitamin absorption can be messed with by all kinds of things, including other vitamins (folic acid supplements can interfere with B12 absorption, just to pick one). But I can agree with the plan of not eating the stuff: I think that if you're going to eat potato chips, eat a reasonable amount of the real ones.
Number Three: Brominated Vegetable Oil. Here's the article's take on it:
Bromine is a chemical used to stop CARPETS FROM CATCHING ON FIRE, so you can see why drinking it may not be the best idea. BVO is linked to major organ system damage, birth defects, growth problems, schizophrenia, and hearing loss.
Again with the caps. Now, if the author had known any chemistry, this would have looked a lot more impressive. Bromine isn't just used to keep carpets from catching on fire - bromine is a hideously toxic substance that will scar you with permanent chemical burns and whose vapors will destroy your lungs. Drinking bromine is not just a bad idea; drinking bromine is guaranteed agonizing death. There, see what a little knowledge will do for you?
But you know something? You can say the same thing for chlorine. After all, it's right next to bromine in the same column of the periodic table. And its use in World War I as a battlefield gas should be testimony enough. (They tried bromine, too, never fear). But chlorine is also the major part, by weight, of table salt. So which is it? Toxic death gas or universal table seasoning?
Knowledge again. It's both. Elemental chlorine (and elemental bromine) are very different things from their ions (chloride and bromide), and both of those are very different things again when either one is bonded to a carbon atom. That's chemistry for you in a nutshell, knowing these differences and understanding why they happen and how to use them.
Now that we've detoured around that mess, on to brominated vegetable oil. It's found in citrus-flavored sodas and sports drinks, at about 8 parts per million. The BuzzFeed article claims that it's linked to "major organ system damage, birth defects, growth problems, schizophrenia, and hearing loss", and sends readers to this WebMD article. But if you go there, you'll find that the only medical problems known from BVO come from two cases of people who had been consuming, over a long period, 4 to 8 liters of BVO-containing soda per day, and did indeed have reactions to all the excess bromine-containing compounds in their system. At 8 ppm, it's not easy to get to that point, but a determined lunatic will overcome such obstacles. Overall, drinking several liters of Mountain Dew per day is probably a bad idea, and not just because of the BVO content.
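To put that 8 ppm figure in perspective, here's a rough sketch of daily BVO intake at the consumption levels from those case reports. I'm assuming the drink's density is about 1 g/mL, so 8 ppm works out to roughly 8 mg per liter, and the 355 mL can is an assumed serving size:

```python
# BVO intake at 8 parts per million, i.e. about 8 mg per liter
# (assuming the soda is roughly the density of water).
mg_per_liter = 8

can_liters = 0.355  # a standard US can, as an assumed serving size
print(f"One can: {mg_per_liter * can_liters:.1f} mg of BVO")

for liters in (4, 8):  # the case-report consumption levels
    print(f"{liters} L/day habit: {mg_per_liter * liters} mg of BVO daily")
```

Even the 4-to-8-liter-a-day habit only gets you tens of milligrams daily, and it still took long-term consumption at that level to cause trouble.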
Number Four: Potassium Bromate. The article helpfully tells us this is "Derived from the same harmful chemical as brominated vegetable oil". But here we are again: bromate is different from bromide is different from bromine, and so on. If we're going to play the "made from the same atoms" game, well, strychnine and heroin are derived from the same harmful chemicals as the essential amino acids and B vitamins. Those harmful chemicals, in case you're wondering, are carbon, hydrogen, oxygen, and nitrogen. And to get into the BuzzFeed spirit of the thing, maybe I should mention that carbon is found in every single poisonous plant on earth, hydrogen is the harmful chemical that blew up the Hindenburg, oxygen is responsible for every death by fire around the world, and nitrogen will asphyxiate you if you try to breathe it (and is a key component of all military explosives). There, that wasn't hard - as Samuel Johnson said, a man might write such stuff forever, if only he would give over his mind to it.
Now, back to potassium bromate. The article says, "Only problem is, it’s linked to kidney damage, cancer, and nervous system damage". And you'll probably fall over when I say this, but that statement is largely correct. Sort of. But let's look at "linked to", because that's an important phrase here.
Potassium bromate was found (in a two-year rat study) to have a variety of bad effects. This occurred at the two highest doses, and the lowest observed adverse effect level (LOAEL) was 6.1 mg of bromate per kilo body weight per day. It's worth noting that a study in male mice took them up to nearly ten times that amount, though, with little or no effect, which gives you some idea of how hard it is to be a toxicologist. Whether humans are more like mice or more like rats in this situation is unknown.
I'm not going to do the whole allometric scaling thing here, because no matter how you do it, the numbers come out crazy. Bromate is used in some (but not all) bread flour at 15 to 30 parts per million, and if the bread is actually baked properly, there's none left in the finished product. But for illustration, let's have someone eating uncooked bread dough at the highest level, just to get the full bromate experience. A 75-kilo human (and many of us are more than that) would have to take in 457 mg of bromate per day to get to the first adverse level seen in rats, which would be. . .15 kilos (about 33 pounds) of bread dough per day, a level I can safely say is unlikely to be reached. Hell, eating 33 pounds of anything isn't going to work out, much as my fourteen-year-old son tries to prove me wrong. You'd need to keep that up for decades, too, since that two-year study represents a significant amount of a rat's lifespan.
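The dough arithmetic above, spelled out (all the numbers are the ones from the paragraph; nothing new is assumed):

```python
# How much 30-ppm bread dough a 75-kg person would have to eat, raw,
# every day, to match the lowest adverse-effect dose seen in rats.
loael_mg_per_kg = 6.1   # lowest observed adverse effect level, rats
body_weight_kg = 75
bromate_ppm = 30        # high end of the flour treatment range

daily_dose_mg = loael_mg_per_kg * body_weight_kg   # mg of bromate/day
dough_kg = daily_dose_mg / bromate_ppm             # ppm = mg per kg
print(f"Bromate needed: {daily_dose_mg:.1f} mg/day")
print(f"Raw dough required: {dough_kg:.2f} kg (~{dough_kg * 2.2046:.1f} lb) per day")
```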
Number Five: Azodicarbonamide. This is another bread flour additive. According to the article, "Used to bleach both flour and FOAMED PLASTIC (yoga mats and the soles of sneakers), azodicarbonamide has been known to induce asthma".
Let's clear this one up quickly: azodicarbonamide is indeed used in bread dough, and allowed up to 45 parts per million. It is not stable to heat, though, and it falls apart quickly to another compound, biurea, on baking. It's not used to "bleach foamed plastic", though. Actually, in higher concentrations, it's used to foam foamed plastics. I realize that this doesn't sound much better, but the conditions inside hot plastic, you will be glad to hear, are quite different from those inside warm bread dough. In that environment, azodicarbonamide doesn't react to make biurea - it turns into several gaseous products, which are what blow up the bubbles of the foam. This is not its purpose in bread dough - it's carbon dioxide from the yeast (or baking powder) that's doing the inflating there, and 45 parts per million would not inflate much of anything.
How about the asthma, though? If you look at the toxicology of azodicarbonamide, you find that "Azodicarbonamide is of low acute toxicity, but repeated or prolonged contact may cause asthma and skin sensitization." That, one should note, is for the pure chemical, not 45 parts per million in uncooked flour (much less zero parts per million in the final product). If you're handling drums of the stuff at the plastics plant, you should be wearing protective gear. If you're eating a roll, no.
Number Six: BHA and BHT. We're on the home stretch now, and this one is a two-fer. BHA and BHT are butylated hydroxyanisole and butylated hydroxytoluene, and according to the article, they are "known to cause cancer in rats. And we’re next!"
Well, of course we are! Whatever you say! But the cancer is taking its time. These compounds have been added to cereals, etc., for decades now, while the incidence rates of cancer have been going down. And what BuzzFeed doesn't mention is that while some studies have shown an increase in cancer in rodent models with these compounds, others have shown a measurable decrease. Both of these compounds are efficient free radical scavengers, and have actually been used in animal studies that attempt to unravel the effects of free radicals on aging and metabolism. Animal studies notwithstanding, attempts to correlate human exposure to these compounds with any types of cancer have always come up negative. Contrary to what the BuzzFeed article says, by the way, BHT is indeed approved by the EU.
Weirdly, you can buy BHT in some health food stores, where anti-aging and anti-viral claims are made for it. How does a health food store sell butylated hydroxytoluene with a straight face? Well, it's also known to be produced by plankton, so you can always refer to it as a natural product, if that makes you feel better. That doesn't do much for me - as an organic chemist, I know that the compounds found in plankton range from essential components of the human diet all the way down to some of the most toxic molecules found in nature.
Number Seven: Synthetic Growth Hormones. These are the ones given to cattle, not the ones athletes give to themselves. The article says that they can "give humans breast, colon, and prostate cancer", which, given what's actually known about these substances, is a wildly irresponsible claim.
The article sends you to a perfectly reasonable site at the American Cancer Society, which is the sort of link that might make a BuzzFeed reader think that it must then be about, well, what kinds of cancer these things give you. But have a look. What you find (first off) is that this is not an issue for eating beef. Bovine growth hormone (BGH) is given to dairy cattle to increase milk production. OK, so what about drinking milk?
Here you go: for one, BGH levels in the milk of treated cows are not higher than in untreated ones. Secondly, BGH is not active as a growth hormone in humans - it's selective for the cow receptor, not the human one. The controversy in this area comes from the way that growth hormone treatment in cows tends to increase levels of another hormone, IGF-1, in the milk. That increase still seems to be within the natural range of variability for IGF-1 in regular cows, but there is a slight change.
The links between IGF-1 and cancer have indeed been the subject of a lot of work. Higher levels of circulating IGF-1 in the bloodstream have (in some studies) been linked to increased risk of cancer, but I should add that other studies have failed to find this effect, so it's still unclear what's going on. I can also add, from my own experiences in drug discovery, that all of the multiple attempts to treat cancer by blocking IGF-1 signaling have been complete failures, and that might also cause one to question the overall linkage a bit.
But does drinking milk from BGH-treated cows increase the levels of circulating IGF-1 at all? No head-to-head study has been run, but adults who drink milk in general seem to have slightly higher levels. The same effect, though, was seen in people who drink soymilk, which (needless to say) does not have recombinant cow hormones in it. No one knows to what extent ingested IGF-1 might be absorbed into the bloodstream - you'd expect it to be digested like any other protein, but exceptions are known.
But look at the numbers. According to that ACS web summary, even if the protein were not degraded at all, and if it were completely absorbed (both of which are extremely unrealistic top-of-the-range assumptions), and even if the person drinking it were an infant, and taking in 1.6 quarts a day of BGH-derived cow milk with the maximum elevated levels of IGF-1 that have been seen, the milk would still contribute less than 1% of the IGF-1 in the bloodstream compared to what's being made in the human body naturally.
Number Eight, Arsenic. Arsenic? It seems like an unlikely food additive, but the article says "Used as chicken feed to make meat appear pinker and fresher, arsenic is POISON, which will kill you if you ingest enough."
Ay. I think that first off, we should make clear that arsenic is not "used as chicken feed". That brings to mind someone pitching powdered arsenic out for the hens, and that's not part of any long-term chicken-farming plan. If you go to the very NPR link that the BuzzFeed article offers, you find that a compound called roxarsone is added to chicken feed to keep down Coccidia parasites in the gut. It is not just added for some cosmetic reason, as the silly wording above would have you believe.
In 2011, a study found that chicken meat with detectable levels of roxarsone had 2.3 parts per billion (note the "b") of inorganic arsenic, which is the kind that is truly toxic. Chicken meat with no detectable roxarsone had 0.8 ppb inorganic arsenic, threefold less, and the correlation seems to be real. (Half of the factory-raised chickens sampled had detectable roxarsone, by the way). This led to the compound being (voluntarily) withdrawn from the market, under the assumption that this is an avoidable exposure to arsenic that could be eliminated.
And so it is. There are other (non-arsenic) compounds that can be given to keep parasite infestations down in poultry, although they're not as effective, and they'll probably show up on the next edition of lists like this one. But let's get things on scale: it's worth comparing these arsenic levels to those found in other foods. White rice, for example, comes in at about 100 parts per billion of inorganic arsenic (and brown rice at 170 ppb). These, by the way, are all-natural arsenic levels, produced by the plant's own uptake from the soil. But even those amounts are not expected to pose a human health risk (say both the FDA and the Canadian authorities), so the forty-to-seventy-fold lower concentrations in chicken would, one thinks, be even less to worry about. If you're having chicken and rice and you want to worry about arsenic, worry about the rice.
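Putting the arsenic figures quoted above side by side (all the levels are the ones from this post):

```python
# Inorganic arsenic levels cited above, in parts per billion (ppb).
levels_ppb = {
    "chicken, roxarsone detected": 2.3,
    "chicken, no roxarsone": 0.8,
    "white rice": 100,
    "brown rice": 170,
}
baseline = levels_ppb["chicken, roxarsone detected"]
for food, ppb in levels_ppb.items():
    print(f"{food}: {ppb} ppb ({ppb / baseline:.1f}x the treated-chicken level)")
# White rice comes out around 40-fold higher than the
# roxarsone-positive chicken; brown rice around 70-fold.
```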
This brings me to the grand wrap-up, and some of the language in that last item is a good starting point for it. I'm talking about the "POISON, which will kill you if you ingest enough" part. This whole article is soaking in several assumptions about food, about chemistry, and about toxicology, and that's one of the big ones. In my experience, people who write things like this have divided the world into two categories: wholesome, natural, healthy stuff and toxic chemical poisons. But this is grievously simple-minded. As I've emphasized in passing above, there are plenty of natural substances, made by healthy creatures in beautiful, unpolluted environments, that will nonetheless kill you in agony. Plants, fungi, bacteria, and animals produce poisons, wide varieties of intricate poisons, and they're not doing it for fun.
And on the other side of the imaginary fence, there are plenty of man-made substances that really won't do much of anything to people at all. You cannot assume anything about the effects of a chemical compound based on whether it came from a lovely rainforest orchid or out of a crusty Erlenmeyer flask. The world is not set up that way. Here's a corollary to this: if I isolate a beneficial chemical compound from some natural source (vitamin C from oranges, for example, although sauerkraut would be a good source, too), that molecule is identical to a copy of it I make in my lab. There is no essence, no vital spirit. A compound is what it is, no matter where it came from.
Another assumption that seems common to this mindset is that when something is poisonous at some concentration, it is therefore poisonous at all concentrations. It has some poisonous character to it that cannot be expunged nor diluted. This, though, is more often false than true. Paracelsus was right: the dose makes the poison. You can illustrate that in both directions: a beneficial substance, taken to excess, can kill you. A poisonous one, taken in very small amounts, can be harmless. And you have cases like selenium, which is simultaneously an essential trace element in the human diet and an inarguable poison. It depends on the dose.
Finally, I want to return to something I was saying way back at the beginning of this piece. The author of the BuzzFeed article knows painfully little about chemistry and biology. But that apparently wasn't a barrier: righteous conviction (and the worldview mentioned in the above three paragraphs) are enough, right? Wrong. Ten minutes of unbiased reading would have served to poke holes all through most of the article's main points. I've spent more than ten minutes (as you can probably tell), and there's hardly one stone left standing on another. As a scientist, I find sloppiness at this level not only stupid, not only time-wasting, but downright offensive. Couldn't anyone be bothered to look anything up? There are facts in this world, you know. Learn a few.
Category: Current Events | Snake Oil | Toxicology
May 29, 2013
You'd think that by now we'd know all there is to know about the side effects of sulfa drugs, wouldn't you? These were the top-flight antibiotics about 80 years ago, remember, and they've been in use (in one form or another) ever since. But some people have had pronounced CNS side effects from their use, and it's never been clear why.
Until now, that is. Here's a new paper in Science that shows that this class of drugs inhibits the synthesis of tetrahydrobiopterin, an essential cofactor for a number of hydroxylase and reductase enzymes. And that in turn interferes with neurotransmitter levels, specifically dopamine and serotonin. The specific culprit here seems to be sepiapterin reductase (SPR). Here's a summary at C&E News.
This just goes to show you how much there is to know, even about things that have been around forever (by drug industry standards). And every time something like this comes up, I wonder what else there is that we haven't uncovered yet. . .
Category: Infectious Diseases | Toxicology
May 6, 2013
Here's the latest "medical periodic table", courtesy of this useful review in Chemical Communications. Element symbols in white are known to be essential in man. The ones with a blue background are found in the structures of known drugs, the orange ones are used in diagnostics, and the green ones are medically useful radioisotopes. (The paper notes that titanium and tantalum are colored blue due to their use in implants).
I'm trying to figure out a couple of these. Xenon I've heard of as a diagnostic (hyperpolarized and used in MRI of lung capacity), but argon? (The supplementary material for the paper says that argon plasma has been used locally to control bleeding in the GI tract). And aren't there marketed drugs with a bromine atom in them somewhere? At any rate, the greyed-out elements end up that way through four routes, I think. Some of them (francium, and other high-atomic-number examples) are just too unstable (and thus impossible to obtain) for anything useful to be done with them. Others (uranium) are radioactive, but have not found a use that other radioisotopes haven't filled already. Then you have the "radioactive and toxic" category, the poster child of which is plutonium. (That said, I'm pretty sure that popular reports of its toxicity are exaggerated, but it still ain't vanilla pudding). Then you have the nonradioactive but toxic crowd - cadmium, mercury, beryllium and so on. (There's another question - aren't topical mercury-based antiseptics still used in some parts of the world? And if tantalum gets on the list for metal implants, what about mercury amalgam tooth fillings?) Finally, you have elements that are neither hot nor poisonous, but that no one has been able to find any medical use for (scandium, niobium, hafnium). Scandium and beryllium, in fact, are my nominees for "lowest atomic-numbered elements that many people have never heard of", and because of nonsparking beryllium wrenches and the like, I think scandium might win out. I've never found a use for it myself, either. I have used a beryllium-copper wrench (they're not cheap) in a hydrogenation room.
The review goes on to detail the various classes of metal-containing drugs, most prominent of them being, naturally, the platinum anticancer agents. There are ruthenium complexes in the clinic in oncology, and some work has been done with osmium and iridium compounds. Ferrocenyl compounds have been tried several times over the years, often put in place of a phenyl ring, but none of them (as far as I know) have made it into the general pharmacopeia. What I didn't know was that titanocene dichloride has been into the clinic (but with disappointing results). And arsenic compounds have a long (though narrow) history in medicinal chemistry, but have recently made something of a comeback. The thioredoxin pathway seems to be a good fit for exotic elements - there's a gadolinium compound in development, and probably a dozen other metals have shown activity of one kind or another, both in oncology and against things like malaria parasites.
Many of these compounds, though, are in sort of a "weirdo metal" category in the minds of most medicinal chemists, and that might not reflect reality very well. There's no reason why metal complexes wouldn't be able to inhibit more traditional drug targets as well, but that brings up another concern. For example, there have been several reports of rhodium, iridium, ruthenium, and osmium compounds as kinase inhibitors, but I've never quite been able to see the point of them, since you can generally get some sort of kinase inhibitor profile without getting that exotic. But what about the targets where we don't have a lot of chemical matter - protein/protein interactions, for example? Who's to say that metal-containing compounds wouldn't work there? But I doubt if that's been investigated to any extent at all - not many companies have such things in their compound collections, and it still might turn out to be a wild metallic goose chase to even look. No one knows, and I wonder how long it might be before anyone finds out.
In general, I don't think anyone has a feel for how such compounds behave in PK and tox. Actually "in general" might not even be an applicable term, since the number and types of metal complexes are so numerous. Generalization would probably be dangerous, even if our base of knowledge weren't so sparse, which sends you right back into the case-by-case wilderness. That's why a metal-containing compound, at almost any biopharma company, would be met with the sort of raised eyebrow that Mr. Spock used to give Captain Kirk. What shots these things have at becoming drugs will be in nothing-else-works areas (like oncology, or perhaps gram-negative antibiotics), or against exotic mechanisms in other diseases. And that second category, as mentioned above, will be hard to get off the ground, since almost no one tests such compounds, and you don't find what you don't test.
Category: Cancer | Odd Elements in Drugs | Toxicology
April 30, 2013
I've had a few people send along this article, on the possible toxicological effects of the herbicide glyphosate, wondering what I make of it as a medicinal chemist. It's getting a lot of play in some venues, particularly the news-from-Mother-Nature outlets. After spending some time reading this paper over, and looking through the literature, I've come to a conclusion: it is, unfortunately, a load of crap.
The authors believe that glyphosate is responsible for pretty much every chronic illness in humans, and a list of such is recited several times during the course of the long, rambling manuscript. Their thesis is that the compound is an inhibitor of the metabolizing CYP enzymes, of the biosynthesis of aromatic amino acids by gut bacteria, and of sulfate transport. But the evidence given for these assertions, and their connection with disease, while it might look alarming and convincing to someone who has never done research or read a scientific paper, is a spiderweb of "might", "could", "is possibly", "associated with", and so on. The minute you look at the actual evidence, things disappear.
Here's an example - let's go right to the central thesis that glyphosate inhibits CYP enzymes in the liver. Here's a quote from the paper itself:
A study conducted in 1998 demonstrated that glyphosate inhibits cytochrome P450 enzymes in plants. CYP71s are a class of CYP enzymes which play a role in detoxification of benzene compounds. An inhibitory effect on CYP71B1 extracted from the plant, Thlaspi arvensae, was demonstrated through an experiment involving a reconstituted system containing E. coli bacterial membranes expressing a fusion protein of CYP71B fused with a cytochrome P450 reductase. The fusion protein was assayed for activity level in hydrolyzing a benzo(a)pyrene, in the presence of various concentrations of glyphosate. At 15 microM concentration of glyphosate, enzyme activity was reduced by a factor of four, and by 35 microM concentration enzyme activity was completely eliminated. The mechanism of inhibition involved binding of the nitrogen group in glyphosate to the haem pocket in the enzyme.
A more compelling study demonstrating an effect in mammals as well as in plants involved giving rats glyphosate intragastrically for two weeks. A decrease in the hepatic level of cytochrome P450 activity was observed. As we will see later, CYP enzymes play many important roles in the liver. It is plausible that glyphosate could serve as a source for carcinogenic nitrosamine exposure in humans, leading to hepatic carcinoma. N-nitrosylation of glyphosate occurs in soils treated with sodium nitrite, and plant uptake of the nitrosylated product has been demonstrated. Preneoplastic and neoplastic lesions in the liver of female Wistar rats exposed to carcinogenic nitrosamines showed reduced levels of several CYP enzymes involved with detoxification of xenobiotics, including NADPH-cytochrome P450 reductase and various glutathione transferases. Hence this becomes a plausible mechanism by which glyphosate might reduce the bioavailability of CYP enzymes in the liver.
Glyphosate is an organophosphate. Inhibition of CYP enzyme activity in human hepatic cells is a well-established property of organophosphates commonly used as pesticides. In , it was demonstrated that organophosphates upregulate the nuclear receptor, constitutive androstane receptor (CAR), a key regulator of CYP activity. This resulted in increased synthesis of CYP2 mRNA, which they proposed may be a compensation for inhibition of CYP enzyme activity by the toxin. CYP2 plays an important role in detoxifying xenobiotics.
Now, that presumably sounds extremely detailed and impressive if you don't know any toxicology. What you wouldn't know from reading through all of it is that their reference 121 actually tested glyphosate against human CYP enzymes. In fact, you wouldn't know that anyone has ever actually done such an experiment, because all the evidence adduced in the paper is indirect - this species does that, so humans might do this, and this might be that, because this other thing over here has been shown that it could be something else. But the direct evidence is available, and is not cited - in fact, it's explicitly ignored. Reference 121 showed that glyphosate was inactive against all human CYP isoforms except 2C9, where it had an IC50 of 3.7 micromolar. You would also not know from this new paper that there is no way that ingested glyphosate could possibly reach levels in humans to inhibit CYP2C9 at that potency.
I'm not going to spend more time demolishing every point this way; this one is representative. This paper is a tissue of assertions and allegations, a tendentious brief for the prosecution that never should have been published in such a form in any scientific journal. Ah, but it's published in the online journal Entropy, from the MDPI people. And what on earth does this subject have to do with entropy, you may well ask? The authors managed to work that into the abstract, saying that glyphosate's alleged effects are an example of "exogenous semiotic entropy". And what the hell is that, you may well ask? Why, it's a made-up phrase making its first appearance, that's what it is.
But really, all you need to know is that MDPI is the same family of "journals" that published the (in)famous Andrulis "Gyres are the key to everything!" paper. And then made all kinds of implausible noises about layers of peer review afterwards. No, this is one of the real problems with sleazy "open-access" journals. They give the whole idea of open-access publishing a black eye, and they open the floodgates to whatever ridiculous crap comes in, which then gets "peer reviewed" and "published" in an "actual scientific journal", where it can fool the credulous and mislead the uninformed.
Category: The Scientific Literature | Toxicology
March 22, 2013
The FDA has been turning its attention to some potential problems with therapies that target the incretin pathways. That includes the DPP-IV inhibitors, such as Januvia (sitagliptin), and the GLP-1 peptide drugs like Byetta and Victoza.
There had been reports (and FDA mentions) of elevated risks with GLP-1 drugs, but this latest concern is prompted by a recent paper in JAMA Internal Medicine that uses insurance company data to nail down the effect. Interestingly, the Endocrine Society has come out with a not-so-fast press release of its own, expressing doubts about the statistics of the new paper. I'm not quite sure why they're taking that side of the issue, but there it is.
For what it's worth, this looks to me like one of those low-but-real incidence effects, with consequences that are serious enough to make physicians (and patients) think twice. At the very least, you'd expect diabetic patients on these drugs to stay very alert to early signs of pancreatitis (which is really one of the last things you need to experience, and in fact, may be one of the last things you experience should the case arise). And this just points out how hard the diabetes field really is - there are already major cardiovascular concerns that have to be checked out with any new drug, and now we have pancreatitis cropping up with one of the large mechanistic classes. In general, diabetic patients can have a great deal wrong with their metabolic functions, and they have to take your drugs forever. While that last part might sound appealing from a business point of view, you're also giving every kind of trouble all the time it needs to appear. Worth thinking about. . .
Category: Diabetes and Obesity | Toxicology
March 19, 2013
Affymax has had a long history, and it's rarely been dull. The company was founded in 1988, back in the very earliest flush of the Combichem era, and in its early years it (along with Pharmacopeia) was what people thought of when they thought of that whole approach. Huge compound libraries produced (as much as possible) by robotics, equally huge screening efforts to deal with all those compounds - this stuff is familiar to us now (all too familiar, in many cases), but it was new then. If you weren't around for it, you'll have to take the word of those who were that it could all be rather exciting and scary at first: what if the answer really was to crank out huge piles of amides, sulfonamides, substituted piperazines, aminotriazines, oligopeptides, and all the other "build-that-compound-count-now!" classes? No one could say for sure that it wasn't. Not yet.
Glaxo bought Affymax back in 1995, about the time they were buying Wellcome, which makes it seem like a long time ago, and perhaps it was. At any rate, they kept the combichem/screening technology and spun a new version of Affymax back out in 2001 to a syndicate of investors. For the past twelve years, that Affymax has been in the drug discovery and development business on its own.
And as this page shows, the story through most of those years has been peginesatide (brand name Omontys, although it was known as Hematide for a while as well). This is a synthetic peptide (with some unnatural amino acids in it, and a polyethylene glycol tail) that mimics erythropoietin. What with its cyclic nature (a couple of disulfide bonds), the unnatural residues, and the PEGylation, it's a perfect example of what you often have to do to make an oligopeptide into a drug.
But for quite a while there, no one was sure whether this one was going to be a drug or not. Affymax had partnered with Takeda along the way, and in 2010 the companies announced some disturbing clinical data in kidney patients. While Omontys did seem to help with anemia, it also seemed to have a worse safety profile than Amgen's EPO, the existing competition. The big worry was cardiovascular trouble (which had also been a problem with EPO itself and all the other attempted competition in that field). A period of wrangling ensued, with a lot of work on the clinical data and a lot of back-and-forthing with the FDA. In the end, the drug was actually approved one year ago, albeit with a black-box warning about cardiovascular safety.
But over the last year, about 25,000 patients got the drug, and unfortunately, 19 of them had serious anaphylactic reactions to it within the first half hour of exposure. Three patients died as a result, and some others nearly did. That is also exactly what one worries about with a synthetic peptide derivative: it's close enough to the real protein to do its job, but it's different enough to set off the occasional immune response, and the immune system can be very serious business indeed. Allergic responses had been noted in the clinical trials, but I think that if you'd taken bets last March, people would have picked the cardiovascular effects as the likely nemesis, not anaphylaxis. But that's not how it's worked out.
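Just as a sketch of how one thinks about a rare-event rate like this: here's a 95% Wilson score interval on 19 events in roughly 25,000 patients. The 19 and 25,000 are the figures above; the rest is textbook statistics, nothing from Takeda or Affymax.

```python
# 95% Wilson score interval for the observed anaphylaxis rate:
# 19 events in ~25,000 exposed patients.
import math

events, n = 19, 25000
z = 1.96  # ~95% confidence

p_hat = events / n
denom = 1 + z**2 / n
center = (p_hat + z**2 / (2 * n)) / denom
half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
lower, upper = center - half, center + half

print(f"Observed rate: {p_hat * 1e4:.1f} per 10,000 patients")
print(f"95% CI: {lower * 1e4:.1f} to {upper * 1e4:.1f} per 10,000")
```

Even the lower bound works out to around five per 10,000 patients - a low rate, but solidly nonzero, and for anaphylaxis with fatalities, that's plenty.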
Takeda and Affymax voluntarily recalled the drug last month. And that looked like it might be all for the company, because this has been their main chance for some years now. Sure enough, the announcement has come that most of the employees are being let go. And it includes this language, which is the financial correlate of Cheyne-Stokes breathing:
The company also announced that it will retain a bank to evaluate strategic alternatives for the organization, including the sale of the company or its assets, or a corporate merger. The company is considering all possible alternatives, including further restructuring activities, wind-down of operations or even bankruptcy proceedings.
I'm sorry to hear it. Drug development is very hard indeed.
Category: Business and Markets | Cardiovascular Disease | Drug Development | Drug Industry History | Toxicology
March 14, 2013
I agree with something Chemjobber said about this case - there's clearly a lot more to it than we know. Last fall, a student at the University of Southampton in the UK was poisoned with arsenic and thallium. According to this article in Chemistry World, it was more than the usual lethal dose, and both accidental exposure and suicide have been ruled out. The student himself is making a slow recovery; I wish him the best, and hope to eventually report good news.
So, not an accident, and not suicide. . .well, that doesn't leave much but intentional poisoning, does it? As this post details, though, thallium is the murder weapon of idiots who think that they're being high-tech. The Dunning-Kruger effect in action, in other words.
Category: The Dark Side | Toxicology
March 11, 2013
The great majority of prescriptions in this country are for generic drugs. And generic drugs are cheaper in the US than they are in Europe or many other areas - they're a large and important part of health care. And as time goes on, and more and more medicines move into that category, that importance, you'd think, can only increase.
So a case that's coming before the Supreme Court later this month could have some major effects. It concerns a woman in New Hampshire, Karen Bartlett, who was prescribed sulindac, one of the non-steroidal anti-inflammatory drugs that have been around for decades. All the NSAIDs (and other drugs besides) carry a small risk of Stevens-Johnson Syndrome, which is an immune system complication that ranges from mild (erythema multiforme) to very severe (toxic epidermal necrolysis, and I'm not linking anyone to pictures of that). Most unfortunately, Mrs. Bartlett came down with severe TEN, which has left her permanently injured. She spent months in burn units and under intensive medical care.
But now we come to the question that always comes up in modern life: whose fault is this? She sued the generic drug company (Mutual Pharmaceutical), and won a $21 million judgment, which was upheld on appeal. But the Supreme Court has agreed to hear the next level of appeal, and a lot of people are going to be watching this one very closely. Mutual's defense is that the original manufacturer of the drug (Merck) and the FDA are responsible for these sorts of things (if anyone is), and that they are merely making (under regulatory permission) a drug that others discovered and that others have regulated over the decades.
A case with some similarities came before the court in 2010, Pliva v. Mensing. That one, though, turned on the labeling language, and how much control a generic company had over the label warnings. "Not much", said the court, which limited patients' ability to sue on those grounds. That seems proper, but, as that New York Times article shows, it also has the perverse effect of giving people more potential recourse if they take a drug as made by the original manufacturer as opposed to the exact same substance as made by a generic company, which doesn't make much sense.
This latest case does not argue label warnings; it argues that the drug itself is defective. Now, it does not seem fair that a generic company should have to pay for the bad effects of a drug it did not discover, did not take through the clinic, and did not reap the benefits of during its patent lifetime (when any bad effects in the real world should have become clear). On the other hand, there are problems with going the other way and sending all lawsuits back to the original developers of the drug. After all, sulindac has been on the market since the early 1980s, under the regulatory authority of the FDA, which could have pulled it from the market at any time and has not. The agency has also authorized several generic manufacturers to produce it since that time. From a regulatory standpoint, how defective can it be? Allowing the originating company to be sued for all the generic versions, after such a long interval, would seem to open up a "find the deep pockets" strategy for everyone who comes along. (And as that older post argues, if this is made the law of the land, it will add to the costs of current drugs, whose prices will surely then be adjusted to deal with decades of future liability concerns).
And if I had to guess, I would think that the Supreme Court is going to find a way out of coming down firmly on one side of the issue or the other. A 2008 decision, Riegel v. Medtronic, said that medical device makers were, in some cases, shielded from state-level tort claims because of regulatory pre-emption. (But note that this isn't always the case; nothing is always the case in law, which is so close to a perpetual motion machine that you start to wonder about the laws of thermodynamics). But an earlier attempt to use these arguments in a pharmaceutical case (Wyeth v. Levine) got no traction at all in the court. But to avoid having either of those outcomes in the paragraph above, I still think that the justices are going to find some way to make this more of a federal regulatory pre-emption case, and to distinguish it from Wyeth v. Levine.
And if that happens, it will mean what for Karen Bartlett? Well, it would mean that she has no recourse. Something terrible has happened to her, but terrible things happen sometimes. That's a rather cold way of looking at it, and I would probably not be disposed to look at it that way were it me, or a member of my family. But that might end up being the right call. We'll see.
Update: as detailed over at Pharmalot, the Obama administration has reversed course on this issue, and is now directing the Solicitor General to argue in favor of federal pre-emption in this case. But two former FDA commissioners (David Kessler and Donald Kennedy) have filed briefs in support of Bartlett, arguing that to assume pre-emption would be to assume too much ability of the FDA to police all these issues on its own (without the threat of lawsuits to keep manufacturers on their toes). So there's a lot of arguing to be done here. . .
Category: Regulatory Affairs | Toxicology
February 15, 2013
Abbott - whoops, pardon me, I mean AbbVie, damn that name - has been developing ABT-199, a selective Bcl-2-targeted oncology compound for CLL. Unlike some earlier shots in this area (ABT-263, navitoclax), it appeared to spare platelet function, and was considered a promising drug candidate in the mid-stage clinical pipeline.
Not any more, perhaps. Clinical work has been suspended after a patient death due to tumor lysis syndrome. This is a group of effects caused by sudden breakdown of the excess cells associated with leukemia. You get too much potassium, too much phosphate, too much uric acid (and too little calcium), all sorts of things at once, which lead to many nasty downstream events, among them irreversible kidney damage and death. So yes, this can be caused by a drug candidate working too well and too suddenly.
The problem is, as the Biotech Strategy Blog says in that link above, that this would be more understandable in some sort of acute leukemia, as opposed to CLL, which is the form that ABT-199 is being tested against. So there's going to be some difficulty figuring out how to proceed. My guess is that they'll be able to restart testing, but that they'll be creeping up on the dosages, with a lot of blood monitoring along the way, until they get a better handle on this problem - if a better handle is available, that is. ABT-199 looks too promising to abandon, and after all, we're talking about a fatal disease. But this is going to slow things down, for sure.
Update: I've had email from the company, clarifying things a bit: "While AbbVie has voluntarily suspended enrollment in Phase 1 trials evaluating ABT-199 as a single agent and in combination with other agents such as rituximab, dosing of active patients in ABT-199 trials is continuing. Previous and current trials have shown that dose escalation methods can control tumor lysis syndrome and we have every expectation that the trials will come off of clinical hold and that we will be able to initiate Phase 3 trials in 2013, as planned."
Category: Cancer | Clinical Trials | Toxicology
January 23, 2013
Reader Andy Breuninger, from completely outside the biopharma business, sends along what I think is an interesting question, and one that bears on a number of issues:
A question has been bugging me that I hope you might answer.
My understanding is that a lot of your work comes down to taking a seed molecule and exploring a range of derived molecules using various metrics and tests to estimate how likely they are to be useful drugs.
My question is this: if you took a normal seed molecule and a standard set of modifications, generated a set of derived molecules at random, and ate a reasonable dose of each, what would happen? Would 99% be horribly toxic? Would 99% have no effect? Would their effects be roughly the same or would one give you the hives, another nausea, and a third make your big toe hurt?
His impression of drug discovery is pretty accurate. It very often is just that: taking one or more lead compounds and running variations on them, trying to optimize potency, specificity, blood levels/absorption/clearance, toxicology, and so on. So, what do most of these compounds do in vivo?
My first thought is "Depends on where you start". There are several issues: (1) We tend to have a defined target in mind when we pick a lead compound, or (if it's a phenotypic assay that got us there), we have a defined activity that we've already seen. So things are biased right from the start; we're already looking at a higher chance of biological activity than you'd have by randomly picking something out of a catalog or drawing something on a board.
And the sort of target can make a big difference. There are an awful lot of kinase enzymes, for example, and compounds tend to cross-react with them, at least in the nearby families, unless you take a lot of care to keep that from happening. Compounds for the G-protein coupled biogenic amine receptors tend to do that, too. On the other hand, you have enzymes like the cytochromes and binding sites like the aryl hydrocarbon receptor - these things are evolved to recognize all sorts of structurally disparate stuff. So against the right (or wrong!) sort of targets, you could expect to see a wide range of potential side activities, even before hitting the random ones.
(2) Some structural classes have a lot more biological activity than others. A lot of small-molecule drugs, for example, have some sort of basic amine in them. That's an important recognition element for naturally occurring substances, and we've found similar patterns in our own compounds. So something without nitrogens at all, I'd say, has a lower chance of being active in a living organism. (Barry Sharpless seems to agree with this). That's not to say that there aren't plenty of CHO compounds that can do you harm, just that there are proportionally more CHON ones that can.
Past that rough distinction, there are pharmacophores that tend to hit a lot, sometimes to the point that they're better avoided. Others are just the starting points for a lot of interesting and active compounds - piperazines and imidazoles are two cores that come to mind. I'd be willing to bet that a thousand random piperazines would hit more things than a thousand random morpholines (other things being roughly equal, like molecular weight and polarity), and either of them would hit a lot more than a thousand random cyclohexanes.
(3) Properties can make a big difference. The Lipinski Rule-of-Five criteria come in for a lot of bashing around here, but if I were forced to eat a thousand random compounds that fit those cutoffs, versus having the option to eat a thousand random ones that didn't, I sure know which ones I'd dig my spoon into.
And finally, (4): the dose makes the poison. If you go up enough in dose, it's safe to say that you're going to see an in vivo response to almost anything, including plenty of stuff at the supermarket. Similarly, I could almost certainly eat a microgram of any compound we have in our company's files with no ill effect, although I am not motivated to put that idea to the test. Same goes for the time that you're exposed. A lot of compounds are tolerated for single-dose tox but fail at two weeks. Compounds that make it through two weeks don't always make it to six months, and so on.
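As an aside, the Rule-of-Five cutoffs mentioned in point (3) are simple enough to sketch as a property filter. The property values here are invented for illustration; in practice they'd come from a cheminformatics toolkit.

```python
# Lipinski's Rule-of-Five as a simple filter over precomputed properties.
# A compound is flagged as likely to have poor oral absorption if it
# violates MORE than one of the four cutoffs.

def passes_rule_of_five(mw, clogp, h_donors, h_acceptors):
    """Return True if the compound has at most one Lipinski violation."""
    violations = sum([
        mw > 500,          # molecular weight
        clogp > 5,         # calculated logP
        h_donors > 5,      # H-bond donors (OH + NH count)
        h_acceptors > 10,  # H-bond acceptors (N + O count)
    ])
    return violations <= 1

# Hypothetical example compounds:
print(passes_rule_of_five(mw=350, clogp=2.5, h_donors=2, h_acceptors=5))  # True
print(passes_rule_of_five(mw=610, clogp=7.2, h_donors=1, h_acceptors=8))  # False
```

Crude as these cutoffs are, they're a reasonable proxy for "drug-like" when you're forced to bet blind - which was exactly the spirit of the thousand-compound thought experiment above.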
How closely you look makes the poison, too. We find that out all the time when we do animal studies - a compound that seems to cause no overt effects might be seen, on necropsy, to have affected some internal organs. And one that doesn't seem to have any visible signs on the tissues can still show effects in a full histopathology workup. The same goes for blood work and other analyses; the more you look, the more you'll see. If you get down to gene-chip analysis, looking at expression levels of thousands of proteins, then you'd find that most things at the supermarket would light up. Broccoli, horseradish, grapefruit, garlic and any number of other things would kick a full expression-profiling assay all over the place.
So, back to the question at hand. My thinking is that if you took a typical lead compound and dosed it at a reasonable level, along with a large set of analogs, then you'd probably find that any of them with overt effects would have a similar profile (for good or bad) to whatever the most active compound was, just less of it. The others wouldn't be as potent at the target, or wouldn't reach the same blood levels. The chances of finding some noticeable but completely different activity would be lower, but very definitely non-zero, and would be wildly variable depending on the compound class. These effects might well cluster into the usual sorts of reactions that the body has to foreign substances - nausea, dizziness, headache, and the like. Overall, odds are that most of the compounds wouldn't show much, not being potent enough at any given target, or getting high enough blood levels to show something, but that's also highly variable. And if you looked closely enough, you'd probably find that they all did something, at some level.
Just in my own experience, I've seen one compound out of a series of dopamine receptor ligands suddenly turn up as a vasodilator, noticeable because of the "Rudolph the Red-Nosed Rodent" effect (red ears and tail, too). I've also seen compound series where they started crossing the blood-brain barrier much more effectively at some point, which led to a sharp demarcation in the tolerability studies. And I've seen many cases, when we've started looking at broader counterscreens, where the change of one particular functional group completely knocked a compound out of (or into) activity in some side assay. So you can never be sure. . .
Category: Drug Assays | Drug Development | Pharma 101 | Pharmacokinetics | Toxicology
December 21, 2012
Merck's Tredaptive (formerly Cordaptive) has had a long and troubled history. It's a combination of niacin and laropiprant, which is there to try to reduce the cardiovascular (flushing) side effects of large niacin doses, which otherwise seem to do a good job improving lipid profiles. (Mind you, we don't seem to know how that works, and there's a lot of reason to wonder how well it works in combination with statins, but still).
The combination was rejected by the FDA back in 2008, but approved in Europe. Merck has been trying to shore up the drug ever since, and since the FDA told them that they would not approve without more data, the company has been running a 25,000-patient trial (oh, cardiovascular disease. . .) combining Tredaptive with statin therapy. In light of the last link in the paragraph above, one might have wondered how that was going to work out, since the NIH had to stop a large niacin-plus-statin study of their own. Well. . .
The European Medicines Agency has started a review of the safety and efficacy of Tredaptive, Pelzont and Trevaclyn, identical medicines that are used to treat adults with dyslipidaemia (abnormally high levels of fat in the blood), particularly combined mixed dyslipidaemia and primary hypercholesterolaemia.
The review was triggered because the Agency was informed by the pharmaceutical company Merck, Sharp & Dohme of the preliminary results of a large, long-term study comparing the clinical effects of adding these medicines to statins (standard medicines used to reduce cholesterol) with statin treatment alone. The study raises questions about the efficacy of the medicine when added to statins, as this did not reduce the risk of major vascular events (serious problems with the heart and blood vessels, including heart attack and stroke) compared with statin therapy alone. In addition, in the preliminary results a higher frequency of non-fatal but serious side effects was seen in patients taking the medicines than in patients only taking statins.
So much for Tredaptive, and (I'd say) so much for the idea of taking niacin and statins together. And it also looks like the FDA was on target here when they asked for more evidence from Merck. Human lipid biology, as we get reminded over and over, is very complicated indeed. The statin drugs, for all their faults, do seem to be effective, but (to repeat myself!) they also seem, more and more, to be outliers in that regard.
Category: Cardiovascular Disease | Clinical Trials | Toxicology
November 2, 2012
That title should bring in the hits. But don't get your hopes up! This is medicinal chemistry, after all.
"Can't you just put the group in your molecule that does such-and-such?" Medicinal chemists sometimes hear variations of that question from people outside of chemistry - hopeful sorts who believe that we might have some effective and instantly applicable techniques for fixing selectivity, brain penetration, toxicity, and all those other properties we're always trying to align.
Mostly, though, we just have general guidelines - not so big, not so greasy (maybe not so polar, either, depending on what you're after), and avoid a few of the weirder functional groups. After that, it's art and science and hard work. A recent J. Med. Chem. paper illustrates just that point - the authors are looking at the phenomenon of molecular promiscuity. That shows up sometimes when one compound is reasonably selective, but a seemingly closely related one hits several other targets. Is there any way to predict this sort of thing?
"Probably not", is the answer. The authors looked at a range of matched molecular pairs (MMPs), structures that were mostly identical but varied only in one region. Their data set is the list of compounds in this paper from the Broad Institute, which I blogged about here. There are over 15,000 compounds from three sources - vendors, natural product collections, and Schreiber-style diversity-oriented synthesis. The MMPs are things like chloro-for-methoxy on an aryl ring, or thiophene-for-pyridyl with other substituents the same. That is, they're just the sort of combinations that show up when medicinal chemists work out a series of analogs.
The Broad data set yielded 30,954 matched pairs, involving over 8000 compounds and over seven thousand different transformations. Comparing these compounds and their reported selectivity over 100 different targets (also in the original paper) showed that most of these behaved "normally" - over half of them were active against the same targets that their partners were active against. But at the other end of the scale, 829 compounds showed different activity against at least ten targets, and 126 of those compounds differed in activity at fifty targets or more. 33 of them differed by over ninety targets! So there really are some sudden changes out there waiting to be tripped over; they're not frequent, but they're dramatic.
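The bookkeeping in that paragraph is straightforward to sketch. The compound names and target hits below are invented for illustration; the real analysis used the Broad set's activity annotations across 100 targets.

```python
# Toy version of the matched-pair promiscuity count: for each matched
# molecular pair, count the targets where one partner hits and the other
# does not, and flag "promiscuity cliffs" past a threshold.

# Hypothetical per-compound sets of hit targets:
hits = {
    "cpd_A": {"T1", "T2", "T3"},
    "cpd_B": {"T1", "T2", "T3"},              # well-behaved partner of A
    "cpd_C": {"T1"},
    "cpd_D": {f"T{i}" for i in range(1, 60)}, # a promiscuity cliff vs. C
}

matched_pairs = [("cpd_A", "cpd_B"), ("cpd_C", "cpd_D")]

def activity_difference(a, b):
    """Number of targets at which the two partners' activity differs."""
    return len(hits[a] ^ hits[b])  # symmetric difference of hit sets

CLIFF_THRESHOLD = 10  # the paper's "at least ten targets" cutoff
cliffs = [(a, b) for a, b in matched_pairs
          if activity_difference(a, b) >= CLIFF_THRESHOLD]

print(cliffs)  # -> [('cpd_C', 'cpd_D')]
```

Run over ~31,000 pairs instead of two, this kind of tally is all it takes to surface the 829 ten-target cases and the handful of ninety-target monsters.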
How about correlations between these "promiscuity cliff" compounds and physical properties, such as molecular weight, logP, donor/acceptor count, and so on? I'd have guessed that a change to higher logP would have accompanied this sort of thing over a broad data set, but the matched pairs don't really show that (nor a shift in molecular weight). On the other hand, most of the highly promiscuous compounds are in the high cLogP range, which is reassuring from the standpoint of Received Med-Chem Wisdom. There are still plenty of selective high-logP compounds, but the ones that hit dozens of targets are almost invariably logP > 6.
Structurally, though, no particular substructure (or transformation of substructures) was found to be associated with sudden onset of promiscuity, so to this approximation, there's no actionable "avoid sticking this thing on" rule to be drawn. (Note that this does not, to me at least, say that there are no such things as frequent-hitting structures - we're talking about changes within some larger structure, not the hits you'd get when screening 500 small rhodanine phenols or the like). In fact, I don't think the Broad data set even included many functional groups of that sort to start with.
On the basis of the data available to us, it is not possible to conclude with certainty to what extent highly promiscuous compounds engage in specific and/or nonspecific interactions with targets. It is of course unlikely that a compound might form specific interactions with 90 or more diverse targets, even if the interactions were clearly detectable under the given experimental conditions. . .
. . .it has remained largely unclear from a medicinal chemistry perspective thus far whether certain molecular frameworks carry an intrinsic likelihood of promiscuity and/or might have frequent hitter character. After all, promiscuity is determined for compounds, not their frameworks. Importantly, the findings presented herein do not promote a framework-centric view of promiscuity. Thus, for the evaluation and prioritization of compound series for medicinal chemistry, frameworks should not primarily be considered as an intrinsic source of promiscuity and potential lack of compound specificity. Rather, we demonstrate that small chemical modifications can trigger large-magnitude promiscuity effects. Importantly, these effects depend on the specific structural environment in which these modifications occur. On the basis of our analysis, substitutions that induce promiscuity in any structural environment were not identified. Thus, in medicinal chemistry, it is important to evaluate promiscuity for individual compounds in series that are preferred from an SAR perspective; observed specificity of certain analogs within a series does not guarantee that others are not highly promiscuous.
Point taken. I continue to think, though, that some structures should trigger those evaluations with more urgency than others, although it's important never to take anything for granted with molecules you really care about.
Category: Chemical News | Drug Assays | Natural Products | Toxicology
September 6, 2012
This may sound a little odd coming from someone in the drug industry, but I have a lot of sympathy for the FDA. I'm not saying that I always agree with them, or that I think that they're doing exactly what we need them to do all the time. But I would hate to be the person that would have to decide how they should do things differently. And I think that no matter what, the agency is going to have a lot of people with reasons to complain.
These thoughts are prompted by this article in JAMA on whether or not drug safety is being compromised by the growing number of "Priority Review" drug approvals. There are three examples set out in detail: Caprelsa (vandetanib) for thyroid cancer, Gilenya (fingolimod) for multiple sclerosis, and the anticoagulant Pradaxa (dabigatran). In each of these accelerated cases, safety has turned out to be more of a concern than some people expected, and the authors of this paper are asking if the benefits have been worth the risks.
Pharmalot has a good summary of the paper, along with a reply from the FDA. Their position is that various forms of accelerated approval have been around for quite a few years now, and that the agency is committed to post-approval monitoring in these cases. What they don't say - but it is, I think, true - is that there is no way to have accelerated approvals without occasional compromises in drug safety. Can't be done. You have to try to balance these things on a drug-by-drug basis: how much the new medication might benefit people without other good options, versus how many people it might hurt instead. And those are very hard calls, which are made with less data than you would have under non-accelerated conditions. If these three examples are indeed problematic drugs that made it through the system, no one should be surprised at all. Given the number of accelerated reviews over the years, there have to be some like this. In fact, this goes to show you that the accelerated review process is not, in fact, a sham. If everything that passed through it turned out to be just as clean as things that went through the normal approval process, that would be convincing evidence that the whole thing was just window dressing.
If that's true - and as I said, I certainly believe it is - then the question is "Should there be such a thing as accelerated approval at all?" If you decide that the answer to that is "Yes", then the follow-up is "Is the risk-reward slider set to the right place, or are we letting a few too many things through?" This is the point the authors are making, I'd say, that the answer to that question is "Yes", and we need to move the settings back a bit. But here comes an even trickier question: if you do that, how far back do you go before the whole accelerated approval process is not worth the effort any more? (If you try to make it so that nothing problematic makes it through at all, you've certainly crossed into that territory, to my way of thinking). So if three recent examples like these represent an unacceptable number (and it may be), what is acceptable? Two? One? Those numbers, but over a longer period of time?
And if so, how are you going to do that without tugging on the other end of the process, helping patients who are waiting for new medications? No, these are very, very hard questions, and no matter how you answer them, someone will be angry with you. I have, as I say, a lot of sympathy for the FDA.
Category: Drug Development | Regulatory Affairs | Toxicology
June 29, 2012
Has there ever been a less structurally appealing class of drugs than the cholesteryl ester transfer protein (CETP) inhibitors? Just look at that bunch. From left to right, that's Pfizer's torcetrapib (which famously was the first to crash and burn back in 2006), Roche's dalcetrapib (which was pulled earlier this year from the clinic, a contributing factor to the company's huge recent site closure), Merck's anacetrapib (which is forging on in Phase III), Lilly's evacetrapib (which when last heard from was also on track to go into Phase III), and a compound from Bristol-Myers Squibb, recently published, which must be at least close to their clinical candidate BMS-795311.
Man, is that ever an ugly-looking group of compounds. They look like fire retardants, or something you'd put in marine paint formulations to keep barnacles from sticking to the hull. Every one of them is wildly hydrophobic, most are heavy on aromatic rings, and on what other occasion did you ever see nine or ten fluorines on one drug molecule? But, as you would figure, this is what the binding site of CETP likes, and this is what the combined medicinal chemistry talents of some of the biggest drug companies in the world have been driven to. You can be sure that they didn't like it, but the nice-looking compounds don't inhibit CETP.
Will any of these fancy fluorocarbon nanoparticles make it through to the market, just on properties/idiosyncratic toxicity concerns alone? How do their inhibitory mechanisms differ, and what will that mean? Is inhibiting CETP even a good idea in the first place, or are we finding out yet more fascinating details about human lipoprotein handling? Money is being spent, even as you read this, to find out. And how.
Category: Cardiovascular Disease | Clinical Trials | Toxicology
June 25, 2012
Here's another reminder that we don't know what a lot of existing drugs are doing on the side. This paper reports that the kinase inhibitor Nexavar (sorafenib) is actually a pretty good ligand at 5-HT (serotonergic) receptors, which is not something that you'd have guessed at all.
The authors worked up a binding model for the 5-HT2a receptor and ran through lists of known drugs. Sorafenib was flagged, and was (experimentally) a 2 micromolar antagonist. As it turns out, though, it's an even stronger ligand for 5-HT2b (57 nM!) and 5-HT2c (417 nM), with weaker activity on a few other subtypes. This makes a person wonder about the other aminergic GPCRs, since there's often some cross-reactivity with small molecule ligands. (Those, though, often have good basic tertiary amines in them, carrying a positive charge under in vivo conditions. Sorafenib lacks any such thing, so it'll be interesting to see the results of further testing). It's also worth wondering if these serotonergic activities help or hurt the drug in oncology indications. In case you're wondering, the compound does get into the brain, although it's significantly effluxed by the BCRP transporter.
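To put those affinities on a common scale, here's a quick back-of-the-envelope conversion to pKi. (The binding values are the ones quoted above; the conversion itself is just standard arithmetic, not anything from the paper.)

```python
import math

def pki(ki_molar):
    """Negative log10 of the binding constant, in molar units."""
    return -math.log10(ki_molar)

# Affinities reported above, converted to a common pKi scale
affinities = {"5-HT2a": 2e-6, "5-HT2b": 57e-9, "5-HT2c": 417e-9}
for receptor, ki in sorted(affinities.items(), key=lambda kv: kv[1]):
    print(f"{receptor}: pKi = {pki(ki):.1f}")
# 5-HT2b comes out around 7.2, 5-HT2c around 6.4, 5-HT2a around 5.7
```

Seen that way, the 5-HT2b activity is a full log and a half stronger than the originally flagged 5-HT2a number.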
What I also find interesting is that this doesn't seem to have been picked up by some of the recent reports on attempts to predict and data-mine potential side effects. We still have a lot to learn, in case anyone had any doubts.
Category: Cancer | Drug Assays | The Central Nervous System | Toxicology
June 13, 2012
I wanted to highlight a couple of recent examples from the literature to show what happens (all too often) when you start to optimize med-chem compounds. The earlier phases of a project tend to drive on potency and selectivity, and the usual way to get these things is to add more stuff to your structures. Then as you start to produce compounds that make it past those important cutoffs, your focus turns more to pharmacokinetics and metabolism, and sometimes you find you've made your life rather difficult. It's an old trap, and a well-known one, but that doesn't stop people from sticking a leg into it.
Take a look at these two structures from ACS Chemical Biology. The starting structure is a pretty generic-looking kinase inhibitor, and as the graphic to its left shows, it does indeed hit a whole slew of kinases. These authors extended the structure out to another loop of their desired target, c-Src, and as you can see, they now have a much more selective compound.
But at such a price! Four more aromatic rings, including the dread biphenyl, and only one sp3 carbon in the lot. The compound now tips the scales at MW 555, and looks about as soluble as the Chrysler building. To be fair, this is an academic group, which means that they're presumably after a tool compound. That's a phrase that's used to excuse a lot of sins, but in this case they do have cellular assay data, which means that despite this compound's properties, it's managing to do something. Update: see this comment from the author on this very point. Be warned, though, if you're in drug discovery and you follow this strategy. Adding four flat rings and running up the molecular weight might work for you, but most of the time it will only lead to trouble - pharmacokinetics, metabolic clearance, toxicity, formulation.
My second example is from a drug discovery group (Janssen). They report work on a series of gamma-secretase modulators (GSMs) for Alzheimer's. You can tell from the paper that they had quite a wild ride with these things - for one, the activity in their mouse model didn't seem to correlate at all with the concentration of the compounds in the brain. Looking at those structures, though, you have to think that trouble is lurking, and so it is.
"In all chemical classes, the high potency was accompanied by high lipophilicity (in general, cLogP >5) and a TPSA [topological polar surface area] below 75 Å², explaining the good brain penetration. However, the majority of compounds also suffered from hERG binding with IC50s below 1 μM, CyP inhibition and low solubility, particularly at pH = 7.4 (data not shown). These unfavorable ADME properties can likely be attributed to the combination of high lipophilicity and low TPSA."
That they can. By the time they got to that compound 44, some of these problems had been solved (hERG, CyP). But it's still a very hard-to-dose compound (they seem to have gone with a pretty aggressive suspension formulation) and it's still a greasy brick, despite its impressive in vivo activity. And that's my point. Working this way exposes you to one thing after another. Making greasy bricks often leads to potent in vitro assay numbers, but they're harder to get going in vivo. And if you get them to work in the animals, you often face PK and metabolic problems. And if you manage to work your way around those, you run a much higher risk of nonspecific toxicity. So guess what happened here? You have to go to the very end of the paper to find out:
As many of the GSMs described to date, the series detailed in this paper, including 44a, is suffering from suboptimal physicochemical properties: low solubility, high lipophilicity, and high aromaticity. For 44a, this has translated into signs of liver toxicity after dosing in dog at 20 mg/kg. Further optimization of the drug-like properties of this series is ongoing.
Back to the drawing board, in other words. I wish them luck, but I wonder how much of this structure is going to have to be ripped up and redone in order to get something cleaner?
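The property cutoffs quoted from the paper (cLogP > 5, TPSA < 75 Å², hERG IC50 < 1 μM) are easy to turn into a crude triage filter. A minimal sketch, assuming the descriptor values have already been computed elsewhere (in practice they'd come from a cheminformatics toolkit, which is not shown here):

```python
# Thresholds taken from the quoted passage; descriptor values are
# hypothetical inputs, not computed here.
def flag_properties(clogp, tpsa, herg_ic50_uM):
    """Return the red flags raised by the cutoffs discussed above."""
    flags = []
    if clogp > 5:
        flags.append("high lipophilicity (cLogP > 5)")
    if tpsa < 75:
        flags.append("low polar surface area (TPSA < 75 A^2)")
    if herg_ic50_uM < 1:
        flags.append("hERG liability (IC50 < 1 uM)")
    return flags

# A made-up greasy-brick compound trips all three
print(flag_properties(clogp=5.8, tpsa=60, herg_ic50_uM=0.4))
```

None of these cutoffs are infallible, of course, but as the Janssen quote shows, compounds that trip all of them tend to pay for it downstream.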
Category: Alzheimer's Disease | Cancer | Drug Development | Pharmacokinetics | Toxicology
June 12, 2012
One of the major worries during a clinical trial is toxicity, naturally. There are thousands of reasons a compound might cause problems, and you can be sure that we don't have a good handle on most of them. We screen for what we know about (such as hERG channels for cardiovascular trouble), and we watch closely for signs of everything else. But when slow-building low-incidence toxicity takes your compound out late in the clinic, it's always very painful indeed.
Anything that helps to clarify that part of the business is big news, and potentially worth a lot. But advances in clinical toxicology come very slowly, because the only thing worse than not knowing what you'll find is thinking that you know and being wrong. A new paper in Nature highlights just this problem. The authors have a structural-similarity algorithm to try to test new compounds against known toxicities in previously tested compounds, which (as you can imagine) is an approach that's been tried in many different forms over the years. So how does this one fare?
To test their computational approach, Lounkine et al. used it to estimate the binding affinities of a comprehensive set of 656 approved drugs for 73 biological targets. They identified 1,644 possible drug–target interactions, of which 403 were already recorded in ChEMBL, a publicly available database of biologically active compounds. However, because the authors had used this database as a training set for their model, these predictions were not really indicative of the model's effectiveness, and so were not considered further.
A further 348 of the remaining 1,241 predictions were found in other databases (which the authors hadn't used as training sets), leaving 893 predictions, 694 of which were then tested experimentally. The authors found that 151 of these predicted drug–target interactions were genuine. So, of the 1,241 predictions not in ChEMBL, 499 were true; 543 were false; and 199 remain to be tested. Many of the newly discovered drug–target interactions would not have been predicted using conventional computational methods that calculate the strength of drug–target binding interactions based on the structures of the ligand and of the target's binding site.
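Those tallies do hang together, as a quick back-of-the-envelope check confirms (the numbers are the ones quoted above from the paper):

```python
total_predictions = 1644
in_chembl = 403                        # training-set hits, set aside
novel = total_predictions - in_chembl  # 1241 predictions left to judge
in_other_dbs = 348                     # already confirmed in other databases
remaining = novel - in_other_dbs       # 893
tested = 694                           # experimentally tested
confirmed = 151                        # of those, genuine

true_hits = in_other_dbs + confirmed   # 499
false_hits = tested - confirmed        # 543
untested = remaining - tested          # 199

assert true_hits + false_hits + untested == novel
print(true_hits, false_hits, untested)  # 499 543 199
```

So roughly half of the checkable novel predictions turn out to be real, which is exactly the hit rate the discussion below turns on.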
Now, some of their predictions have turned out to be surprising and accurate. Their technique identified chlorotrianisene, for example, as a COX-1 inhibitor, and tests show that it seems to be, which wasn't known at all. The classic antihistamine diphenhydramine turns out to be active at the serotonin transporter. It's also interesting to see what known drugs light up the side effect assays the worst. Looking at their figures, it would seem that the topical antiseptic chlorhexidine (a membrane disruptor) is active all over the place. Another guanidine-containing compound, tegaserod, is also high up the list. Other promiscuous compounds are the old antipsychotic fluspirilene and the semisynthetic antibiotic rifaximin. (That last one illustrates one of the problems with this approach, which the authors take care to point out: toxicity depends on exposure. The dose makes the poison, and all that. Rifaximin is very poorly absorbed, and it would take very unusual dosing, like with a power drill, to get it to hit targets in a place like the central nervous system, even if this technique flags them).
The biggest problem with this whole approach is also highlighted by the authors, to their credit. You can see from those figures above that about half of the potentially toxic interactions it finds aren't real, and you can be sure that there are plenty of false negatives, too. So this is nowhere near ready to replace real-world testing; nothing is. But where it could be useful is in pointing out things to test with real-world assays, activities that you probably hadn't considered at all.
But the downside of that is that you could end up chasing meaningless stuff that does nothing but put the fear into you and delay your compound's development, too. That split, "stupid delay versus crucial red flag", is at the heart of clinical toxicology, and is the reason it's so hard to make solid progress in this area. So much is riding on these decisions: you could walk away from a compound, never developing one that would go on to clear billions of dollars and help untold numbers of patients. Or you could green-light something that would go on to chew up hundreds of millions of dollars of development costs (and even more in opportunity costs, considering what you could have been working on instead), or even worse, one that makes it onto the market and has to be withdrawn in a blizzard of lawsuits. It brings on a cautious attitude.
Category: Drug Development | In Silico | Toxicology
May 16, 2012
How much do we really know about what small drug molecules do when they get into cells? Everyone involved in this sort of research wonders about this question, especially when it comes to toxicology. There's a new paper out in PLoS One that will cause you to think even harder.
The researchers (from Princeton) looked at the effects of the antidepressant sertraline, a serotonin reuptake inhibitor. They did a careful study in yeast cells on its effects, and that may have some of you raising your eyebrows already. That's because yeast doesn't even have a serotonin transporter. In a perfect pharmacological world, sertraline would do nothing at all in this system.
We don't live in that world. The group found that the drug does enter yeast cells, mostly by diffusion, with a bit of acceleration due to proton motive force and some reverse transport by efflux pumps. (This is worth considering in light of those discussions we were having here the other day about transport into cells). At equilibrium, most (85 to 90%) of the sertraline that makes it into a yeast cell is stuck to various membranes, mostly ones involved in vesicle formation, either through electrostatic forces or buried in the lipid bilayer. It's not setting off any receptors - there aren't any - so what happens when it's just hanging around in there?
More than you'd think, apparently. There's enough drug in there to make some of the membranes curve abnormally, which triggers a local autophagic response. (The paper has electron micrographs of funny-looking Golgi membranes and other organelles). This apparently accounts for the odd fact, noticed several years ago, that some serotonin reuptake inhibitors have antifungal activity. This probably applies to the whole class of cationic amphiphilic/amphipathic drug structures.
The big question is what happens in mammalian cells at normal doses of such compounds. These may well not be enough to cause membrane trouble, but there's already evidence to the contrary. A second big question is: does this effect account for some of the actual neurological effects of these drugs? And a third one is, how many other compounds are doing something similar? The more you look, the more you find. . .
Category: Drug Assays | Pharmacokinetics | The Central Nervous System | Toxicology
April 4, 2012
Now here's something that might be about to remake the economy, or (on the other robotic hand) it might not be ready to just yet. And it might be able to help us out in drug R&D, or it might turn out to be mostly beside the point. What the heck am I talking about, you ask? The so-called "Artificial Intelligence Economy". As Adam Ozimek says, things are looking a little more futuristic lately.
He's talking about things like driverless cars and quadrotors, and Tyler Cowen adds the examples of things like Apple's Siri and IBM's Watson, as part of a wider point about American exports:
First, artificial intelligence and computing power are the future, or even the present, for much of manufacturing. It’s not just the robots; look at the hundreds of computers and software-driven devices embedded in a new car. Factory floors these days are nearly empty of people because software-driven machines are doing most of the work. The factory has been reinvented as a quiet place. There is now a joke that “a modern textile mill employs only a man and a dog—the man to feed the dog, and the dog to keep the man away from the machines.”
The next steps in the artificial intelligence revolution, as manifested most publicly through systems like Deep Blue, Watson and Siri, will revolutionize production in one sector after another. Computing power solves more problems each year, including manufacturing problems.
Two MIT professors have written a book called Race Against the Machine about all this, and it appears to be sort of a response to Cowen's earlier book The Great Stagnation. (Here's an article of theirs in The Atlantic making their case).
One of the export-economy factors that it (and Cowen) bring up is that automation makes a country's wages (and labor costs in general) less of a factor in exports, once you get past the capital expenditure. And as the size of that expenditure comes down, it becomes easier to make that leap. One thing that means, of course, is that less-skilled workers find it harder to fit in. Here's another Atlantic article, from the print magazine, which looked at an auto-parts manufacturer with a factory in South Carolina (the whole thing is well worth reading):
Before the rise of computer-run machines, factories needed people at every step of production, from the most routine to the most complex. The Gildemeister (machine), for example, automatically performs a series of operations that previously would have required several machines—each with its own operator. It’s relatively easy to train a newcomer to run a simple, single-step machine. Newcomers with no training could start out working the simplest and then gradually learn others. Eventually, with that on-the-job training, some workers could become higher-paid supervisors, overseeing the entire operation. This kind of knowledge could be acquired only on the job; few people went to school to learn how to work in a factory.
Today, the Gildemeisters and their ilk eliminate the need for many of those machines and, therefore, the workers who ran them. Skilled workers now are required only to do what computers can’t do (at least not yet): use their human judgment.
But as that article shows, more than half the workers in that particular factory are, in fact, rather unskilled, and they make a lot more than their Chinese counterparts do. What keeps them employed? That calculation on what it would take to replace them with a machine. The article focuses on one of those workers in particular, named Maddie:
It feels cruel to point out all the Level-2 concepts Maddie doesn’t know, although Maddie is quite open about these shortcomings. She doesn’t know the computer-programming language that runs the machines she operates; in fact, she was surprised to learn they are run by a specialized computer language. She doesn’t know trigonometry or calculus, and she’s never studied the properties of cutting tools or metals. She doesn’t know how to maintain a tolerance of 0.25 microns, or what tolerance means in this context, or what a micron is.
Tony explains that Maddie has a job for two reasons. First, when it comes to making fuel injectors, the company saves money and minimizes product damage by having both the precision and non-precision work done in the same place. Even if Mexican or Chinese workers could do Maddie’s job more cheaply, shipping fragile, half-finished parts to another country for processing would make no sense. Second, Maddie is cheaper than a machine. It would be easy to buy a robotic arm that could take injector bodies and caps from a tray and place them precisely in a laser welder. Yet Standard would have to invest about $100,000 on the arm and a conveyance machine to bring parts to the welder and send them on to the next station. As is common in factories, Standard invests only in machinery that will earn back its cost within two years. For Tony, it’s simple: Maddie makes less in two years than the machine would cost, so her job is safe—for now. If the robotic machines become a little cheaper, or if demand for fuel injectors goes up and Standard starts running three shifts, then investing in those robots might make sense.
At this point, some similarities to the drug discovery business will be occurring to readers of this blog, along with some differences. The automation angle isn't as important, or not yet. While pharma most definitely has a manufacturing component (and how), the research end of the business doesn't resemble it very much, despite numerous attempts by earnest consultants and managers to make it so. From an auto-parts standpoint, there's little or no standardization at all in drug R&D. Every new drug is like a completely new part that no one's ever built before; we're not turning out fuel injectors or alternators. Everyone knows how a car works. Making a fundamental change in that plan is a monumental challenge, so the auto-parts business is mostly about making small variations on known components to the standards of a given customer. But in pharma - discovery pharma, not the generic companies - we're wrenching new stuff right out of thin air, or trying to.
So you'd think that we wouldn't be feeling the low-wage competitive pressure so much, but as the last ten years have shown, we certainly are. Outsourcing has come up many a time around here, and the very fact that it exists shows that not all of drug research is quite as bespoke as we might think. (Remember, the first wave of outsourcing, which is still very much a part of the business, was the move to send the routine methyl-ethyl-butyl-futile analoging out somewhere cheaper). And this takes us, eventually, to the Pfizer-style split between drug designers (high-wage folks over here) and the drug synthesizers (low-wage folks over there). Unfortunately, I think that you have to go the full reductio ad absurdum route to get that far, but Pfizer's going to find out for us if that's an accurate reading.
What these economists are also talking about is, I'd say, the next step beyond Moore's Law: once we have all this processing power, how do we use it? The first wave of computation-driven change happened because of the easy answers to that question: we had a lot of number-crunching that was being done by hand, or very slowly by some route, and we now had machines that could do what we wanted to do more quickly. This newer wave, if wave it is, will be driven more by software taking advantage of the hardware power that we've been able to produce.
The first wave didn't revolutionize drug discovery in the way that some people were hoping for. Sheer brute force computational ability is of limited use in drug discovery, unfortunately, but that's not always going to be the case, especially as we slowly learn how to apply it. If we really are starting to get better at computational pattern recognition and decision-making algorithms, where could that have an impact?
It's important to avoid what I've termed the "Andy Grove fallacy" in thinking about all this. I think that it is a result of applying first-computational-wave thinking too indiscriminately to drug discovery, which means treating it too much like a well-worked-out human-designed engineering process. Which it certainly isn't. But this second-wave stuff might be more useful.
I can think of a few areas: in early drug discovery, we could use help teasing patterns out of large piles of structure-activity relationship data. I know that there are (and have been) several attempts at doing this, but it's going to be interesting to see if we can do it better. I would love to be able to dump a big pile of structures and assay data points into a program and have it say the equivalent of "Hey, it looks like an electron-withdrawing group in the piperidine series might be really good, because of its conformational similarity to the initial lead series, but no one's ever gotten back around to making one of those because everyone got side-tracked by the potency of the chiral amides".
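The kind of pattern the hypothetical program above would surface can be sketched with a toy SAR table. Everything here is invented for illustration - the series names, substituents, and pIC50 values stand in for a real project's assay archive - but the mechanics (group by structural pattern, rank by average potency) are the general idea:

```python
from collections import defaultdict
from statistics import mean

# Toy SAR table: (series, substituent, pIC50). Entirely made-up data.
sar = [
    ("piperidine", "CF3", 7.9), ("piperidine", "OMe", 6.1),
    ("piperidine", "CN",  8.1), ("morpholine", "CF3", 6.0),
    ("morpholine", "OMe", 5.8), ("morpholine", "CN",  6.2),
]

# Average potency for each (series, substituent) pattern
by_pattern = defaultdict(list)
for series, sub, pic50 in sar:
    by_pattern[(series, sub)].append(pic50)

ranked = sorted(by_pattern.items(), key=lambda kv: -mean(kv[1]))
for (series, sub), values in ranked[:3]:
    print(f"{series} + {sub}: mean pIC50 {mean(values):.1f}")
```

In this made-up dataset, the electron-withdrawing groups (CN, CF3) on the piperidine series float to the top - exactly the sort of buried trend you'd want the software to shout about before everyone gets side-tracked by the chiral amides.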
Software that chews through stacks of PK and metabolic stability data would be worth having, too, because there sure is a lot of it. There are correlations in there that we really need to know about, that could have direct relevance to clinical trials, but I worry that we're still missing some of them. And clinical trial data itself is the most obvious place for software that can dig through huge piles of numbers, because those are the biggest we've got. From my perspective, though, it's almost too late for insights at that point; you've already been spending the big money just to get the numbers themselves. But insights into human toxicology from all that clinical data, that stuff could be gold. I worry that it's been like the concentration of gold in seawater, though: really there, but not practical to extract. Could we change that?
All this makes me actually a bit hopeful about experiments like this one that I described here recently. Our ignorance about medicine and human biochemistry is truly spectacular, and we need all the help we can get in understanding it. There have to be a lot of important things out there that we just don't understand, or haven't even realized the existence of. That lack of knowledge is what gives me hope, actually. If we'd already learned what there is to know about discovering drugs, and were already doing the best job that could be done, well, we'd be in a hell of a fix, wouldn't we? But we don't know much, we're not doing it as well as we could, and that provides us with a possible way out of the fix we're in.
So I want to see as much progress as possible in the current pattern-recognition and data-correlation driven artificial intelligence field. We discovery scientists are not going to automate ourselves out of business so quickly as factory workers, because our work is still so hypothesis-driven and hard to define. (For a dissenting view, with relevance to this whole discussion, see here). It's the expense of applying the scientific method to human health that's squeezing us all, instead, and if there's some help available in that department, then let's have it as soon as possible.
Category: Drug Assays | Drug Development | Drug Industry History | In Silico | Pharmacokinetics | Toxicology
March 15, 2012
Here's a study that suggests that there are a lot more drug-drug interactions than we've ever recognized. (If you don't have access to Science Translational Medicine, here's a summary from Nature News).
Postmarketing surveillance yields huge piles of data that could potentially be mined for such, but it's a messy and heterogeneous pile. This study tries to correct for some of the confounding variables, by attempting to match each patient with a non-treated control patient with as many similarities as possible. They do look to have fished some useful correlations out that no one had ever observed before. For example, selective serotonin reuptake inhibitors (SSRIs) given with thiazide diuretics seem to be associated with a notably greater risk of the cardiac side effect of QT prolongation, which is a new one.
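The matching idea is worth a sketch. This is only the general flavor of matched-control selection - nearest untreated neighbor on a few covariates, matched without replacement - and not the study's actual (much more elaborate) procedure; the patient values are synthetic:

```python
# Minimal matched-control sketch. Covariates: (age, baseline QT in ms).
# For simplicity, covariates are compared unscaled; a real analysis
# would standardize them first.
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

treated   = [(64, 410), (52, 395)]
untreated = [(63, 412), (70, 430), (51, 396), (40, 380)]

matches = []
pool = list(untreated)
for patient in treated:
    best = min(pool, key=lambda c: distance(patient, c))
    matches.append((patient, best))
    pool.remove(best)  # match without replacement

for t, c in matches:
    print(f"treated {t} matched to control {c}")
```

The payoff of matching like this is that any excess of QT prolongation in the treated group is harder to blame on age or baseline differences, which is what lets correlations like the SSRI-thiazide one emerge from such a messy pile.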
And while that's good news, you can't help but bury your head in your hands for at least a bit. It turns out that the average number of side effects listed on a full drug label is 69. That might seem to be quite enough, thanks, but this study suggests that about 329 different adverse effects per drug might be more accurate.
Category: Regulatory Affairs | Toxicology
February 29, 2012
You'll have seen the news about the FDA safety warning on statins. The agency is warning that instances of hyperglycemia have occurred with statin use, as well as memory loss and confusion.
I'm really not sure what to make of this. On the one hand, these drugs have been through many, many large clinical trials under controlled conditions, and they've been taken by a huge number of patients out in the real world. So you might think that if these effects were robust, they would have been noticed before now. But there are side effects that are below the threshold of even the largest clinical trials, and a patient population the size of the one taking these drugs is just where you might be able to see such things.
I lean towards the latter, and if that's true, then the agency's statement is appropriate. If these could be real effects in some patients, then it's worth keeping an eye out for them. One problem, though, is that hyperglycemia is the sturdier endpoint of the two: you can measure it, and people don't really feel it when they have it. Memory loss and confusion are fuzzier, but they're immediately felt, so they're subject to more post hoc ergo propter hoc judgments. It's possible that enough people will stop taking statins because of that part of the warning to cancel out the public health good it might otherwise do.
Category: Cardiovascular Disease | Regulatory Affairs | Toxicology
January 19, 2012
To no one's surprise, the FDA has rejected dapagliflozin, an SGLT2 inhibitor for diabetes. The advisory panel voted it down back during the summer, and the agency has asked AstraZeneca and Bristol-Myers Squibb to provide more safety data. As it stands, the increased risk of bladder and breast cancer (small but significant) that was seen in the clinic just outweighs the drug's benefits.
That's the sodium-glucose cotransporter 2, and what it does normally is reabsorb glucose in the kidney to keep it from going on into the urine and being lost. It's been the subject of quite a bit of drug development over the last few years, with the thought being that spilling glucose out of the bloodstream, as an adjunct to other diabetes therapy, might be more of a feature than a bug.
Not with that safety profile, though. And since this compound has been through nearly a dozen different advanced trials in the clinic, I really don't see how anyone's going to be able to provide any safety data at this point to change anyone's mind about it. Type II diabetes is an area with a lot of treatment options, and while all of them have their advantages and disadvantages, taken together, there's quite a bit that can be done. So if you're going to enter a crowded field like this, a new mechanism is a good idea (thus SGLT2). But you're also up against a lot of things that have proven themselves in the real world, some of them for a long time now, so your safety profile has to be above reproach.
Canagliflozin, from J&J, is still out there in the clinic, and you can bet that the folks there will be digging through the data from every direction. Are dapagliflozin's problems mechanism-related, or not? Would you care to spend nine figures to find out? That's how we do it around here. . .
Category: Diabetes and Obesity | Toxicology
January 6, 2012
Some of the discussions that come up here around clinical attrition rates and compound properties prompt me to see how much we can agree on. So, are these propositions controversial, or not?
1. Too many drugs fail in clinical trials. We cannot keep absorbing failure rates like these, given the expense involved.
2. A significant number of these failures are due to lack of efficacy - either none at all, or not enough.
2a. Fixing efficacy failures is hard, since it seems to require deeper knowledge, case-by-case, of disease mechanisms. As it stands, we get a significant amount of this knowledge from our drug failures themselves.
2b. Better target selection without such detailed knowledge is hard to come by. Good phenotypic assays are perhaps the only shortcut, but good phenotypic assays are not easy to develop and validate.
3. Outside of efficacy, a significant number of clinical failures are also due to side effects/toxicity. These two factors (efficacy and tox) account for the great majority of compounds that drop out of the clinic.
3a. Fixing tox/side effect failures through detailed knowledge is perhaps hardest of all, since there are a huge number of possible mechanisms. There are far more ways for things to go wrong than there are for them to work correctly.
3b. But there are broad correlations between molecular structures and properties and the likelihood of toxicity. While not infallible, these correlations are strong enough to be useful, and we should be grateful for anything we can get that might diminish the possibility of later failure.
Examples of such structural features are redox-active groups like nitros and quinones, which really are associated with trouble - not invariably, but often enough to make you very cautious. More broadly, high logP values are also associated with trouble in development - not as strongly, but strongly enough to be worth considering.
So, is everyone pretty much in agreement with these things? What I'm saying is that if you take a hundred aryl nitro compounds into development, versus a hundred that don't have such a group, the latter cohort of compounds will surely have a higher success rate. And if you take a hundred compounds with logP values of 1 to 3 into development, these will have a higher success rate than a hundred compounds, against the same targets, with logP of 4 to 6. Do we believe this, or not?
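To make that proposition concrete, here's a toy sketch of the kind of up-front flagging being described. Fair warning: this is just string matching on SMILES, purely for illustration - real filtering would use a cheminformatics toolkit (RDKit's SMARTS substructure searches and a calculated logP, for instance), and the 3.0 cutoff is an arbitrary placeholder, not a validated threshold.

```python
# Toy illustration (not a validated filter): flag development candidates
# carrying a nitro structural alert or a high calculated logP. Real work
# would use proper substructure searching, not substring checks on SMILES.

NITRO_PATTERNS = ("[N+](=O)[O-]", "N(=O)=O")  # common SMILES spellings of a nitro group

def flag_candidate(smiles: str, clogp: float) -> list:
    """Return a list of (purely illustrative) risk flags for a compound."""
    flags = []
    if any(pattern in smiles for pattern in NITRO_PATTERNS):
        flags.append("nitro group: redox-active structural alert")
    if clogp > 3.0:  # arbitrary illustrative cutoff
        flags.append("clogP %.1f > 3: higher attrition risk in development" % clogp)
    return flags

# Nitrobenzene (charged nitro spelling), modest clogP - one flag:
print(flag_candidate("c1ccccc1[N+](=O)[O-]", 1.9))
# A greasy but alert-free compound - flagged on logP alone:
print(flag_candidate("CCCCCCCCc1ccccc1", 4.5))
```

The point of even a crude filter like this is the cohort-level bet in the paragraph above: run a hundred compounds through it, and the ones that come out clean should, on average, fare better downstream.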
Category: Drug Assays | Drug Development | Toxicology
November 18, 2011
Remember torcetrapib? Pfizer always will. The late Phase III failure of that CETP inhibitor wiped out their chances for an even bigger HDL-raising follow-up to LDL-lowering Lipitor, the world's biggest drug, and changed the future of the company in ways that are still being played out.
But CETP inhibition still makes sense, biochemically. And the market for increasing HDL levels is just as huge as it ever was, since there's still no good way to do it. Merck is pressing ahead with anacetrapib, Roche with dalcetrapib, and Lilly is out with recent data on evacetrapib. All three companies have tried to learn as much as they could from Pfizer's disaster, and are keeping a close eye on the best guesses for why it happened (a small rise in blood pressure and changes in aldosterone levels). So far, so good - but that only takes you so far. Those toxicological changes are reasonable, but they're only hypotheses for why torcetrapib showed a higher death rate in the drug treatment group than it did in the controls. And even that only takes you up to the big questions.
Which are: will raising HDL really make a difference in cardiovascular morbidity and mortality? And if so, is inhibiting CETP the right way to do it? Human lipidology is not nearly as well worked out as some people might think it is, and these are both still very open questions. But such drugs, and such trials, are the only way that we're going to find out the answers. All three companies are risking hundreds of millions of dollars (in an area that's already had one catastrophe) in an effort to find out, and (to be sure) in the hope of making billions of dollars if they're correct.
Will anyone make it through? Will they fail for tox like Pfizer did, telling us that we don't understand CETP inhibitors? Or will they make it past that problem, but not help patients as much as expected, telling us that we don't understand CETP itself, or HDL? Or will all three work as hoped, and arrive in time to split up the market ferociously, making none of them as profitable as the companies might have wanted? If you want to see what big-time drug development is like, I can't think of a better field to illustrate it.
Category: Cardiovascular Disease | Drug Development | Toxicology
October 18, 2011
Under the "Who'da thought?" category, put this news about cyclodextrin. For those outside the field, that's a ring of glucose molecules, strung end to end like a necklace. (Three-dimensionally, it's a lot more like a thick-cut onion ring - see that link for a picture). The most common form, beta-cyclodextrin, has seven glucoses. That structure gives it some interesting properties - the polar hydroxy groups are mostly around the edges and outside surface, while the inside is more friendly to less water-soluble molecules. It's a longtime additive in drug formulations for just that purpose - there are many, many examples known of molecules that fit into the middle of a cyclodextrin in aqueous solution.
But as this story at the Wall Street Journal shows, it's not inert. A group studying possible therapies for Niemann-Pick C disease (a defect in cholesterol storage and handling) was going about this the usual way - one group of animals was getting the proposed therapy, while the other was just getting the drug vehicle. But this time, the vehicle group showed equivalent improvement to the drug-treatment group.
Now, most of the time that happens when neither of them worked; that'll give you equivalence all right. But in this case, both groups showed real improvement. Further study showed that the cyclodextrin derivative used in the dosing vehicle was the active agent. And that's doubly surprising, since one of the big effects seen was on cholesterol accumulation in the central neurons of the rodents. It's hard to imagine that a molecule as big (and as polar-surfaced) as cyclodextrin could cross into the brain, but it's also hard to see how you could have these effects without that happening. It's still an open question - see that PLoS One paper link for a series of hypotheses. One way or another, this will provide a lot of leads and new understanding in this field:
Although the means by which CD exerts its beneficial effects in NPC disease are not understood, the outcome of CD treatment is clearly remarkable. It leads to delay in onset of clinical signs, a significant increase in lifespan, a reduction in cholesterol and ganglioside accumulation in neurons, reduced neurodegeneration, and normalization of markers for both autophagy and neuro-inflammation. Understanding the mechanism of action for CD will not only provide key insights into the cholesterol and GSL dysregulatory events in NPC disease and related disorders, but may also lead to a better understanding of homeostatic regulation of these molecules within normal neurons. Furthermore, elucidating the role of CD in amelioration of NPC disease will likely assist in development of new therapeutic options for this and other fatal lysosomal disorders.
Meanwhile, the key role of cholesterol in the envelope of HIV has led to the use of cyclodextrin as a possible antiretroviral. This looks like a very fortunate intersection of a wide-ranging, important biomolecule (cholesterol) with a widely studied, well-tolerated complexing agent for it (cyclodextrin). It'll be fun to watch how all this plays out. . .
Category: Biological News | Infectious Diseases | The Central Nervous System | Toxicology
October 12, 2011
siRNA technology has famously been the subject of a huge amount of work (and a huge amount of hype) and, more recently, a huge amount of uncertainty. Now a new report will add to that last pile. A group at the University of Kentucky says that they've identified a toxic effect in the retina for a wide range of siRNAs, one that seems to be triggered independent of sequence:
"We now show a new undesirable effect of siRNAs that are 21 nucleotides or longer in length: these siRNAs, regardless of their sequence or target, can cause retinal toxicity. By activating a new immune pathway consisting of the molecules TLR3 and IRF3, these siRNAs damage a critical layer of the retina called the retinal pigmented epithelium (RPE). Damage to the RPE cells by siRNAs can also lead to secondary damage to the rods and cones, which are light-sensing cells in the retina. . ."
That's especially worrisome news, since several siRNA efforts have targeted eye diseases in particular. The eye is a privileged compartment, metabolically, and exotica like small RNA molecules have a better chance of surviving there. But if you're trying to help out with macular degeneration or diabetic retinopathy, affecting the retinal epithelium isn't what you need, is it?
As a side note, this effect seems to be mediated, in part, by TLR3. Its family, the toll-like receptors, were part of this year's Nobel in Physiology/Medicine.
Category: Toxicology
September 26, 2011
Predicting toxic drug effects in humans - now, that's something we could use more of. Plenty of otherwise viable clinical candidates go down because of unexpected tox, sometimes in terribly expensive and time-wasting ways. But predictive toxicology has proven extremely hard to realize, and it's not hard to see why: there must be a million things that can go wrong, and how many of them have we even heard of? And of the ones we have some clue about, how many of them do we have tests for?
According to Science, the folks at DARPA are soliciting proposals for another crack at the idea. The plan is to grow a variety of human cell lines in small, three-dimensional cultures, all on the same chip or platform, and test drug candidates across them. Here are the details. In keeping with many other DARPA initiatives, the goals are rather ambitious:
DARPA is soliciting innovative research proposals to develop an in vitro platform of engineered tissue constructs that reproduces the interactions that drugs or vaccines have with human physiological systems. The tissue constructs must be of human origin and engineered in such a way as to reproduce the functions of specific organs and physiological systems. All of the following physiological systems must be functionally represented on the platform by the end of the program: circulatory, endocrine, gastrointestinal, immune, integumentary, musculoskeletal, nervous, reproductive, respiratory, and urinary.
The request goes on to specify that these cell cultures need to be able to interact with each other in a physiologically relevant manner, that distribution and membrane barrier effects should be taken into account and reproduced as much as possible, and that the goal is to have a system that can run for up to four weeks during a given test. And they're asking for the right kinds of validation:
Proposers should present a detailed plan for validating integrated platform performance. At the end of each period of performance, performers are expected to estimate the efficacy, toxicity, and pharmacokinetics of one or more drugs/vaccines that have already been administered to humans. Proposers should choose test compounds from each of the four categories listed below based on published clinical studies. These choices should also be relevant to the physiological systems resident on the platform at the time of testing and should include at least one test compound that was thought to be safe on the basis of preclinical testing but later found to be toxic in humans.
i. Drugs/vaccines known to be safe and effective
ii. Drugs/vaccines known to be safe and ineffective
iii. Drugs/vaccines known to be unsafe, but effective
iv. Drugs/vaccines known to be unsafe and ineffective
Now, that project is going to keep some people off the streets and out of trouble, for sure. It's a serious engineering challenge, right off the bat, and there are a lot of very tricky questions to get past even once you've got those issues worked out. One of the biggest is which cells to use. You can't just say "Well, some kidney cells, sure, and some liver, yeah, can't do without those, and then some. . ." That's not how it works. Primary cells from tissue can just die off on you when you try to culture them like this, and if they survive, they (almost invariably) lose many of the features that made them special in their native environment. Immortalized cell lines are a lot more robust, but they've been altered a lot more, too, and can't really be taken as representative of real tissue, either. One possibility that's gotten a lot of attention is the use of induced stem cell lines, and I'd bet that a lot of the DARPA proposals will be in this area.
So, let's stipulate that it's possible - that's not a small assumption, but it's not completely out of the question. How large a test set would be appropriate before anyone puts such a system to serious use? Honestly, I'd recommend pretty much the entire pharmacopeia. Why not? Putting in things that are known to be trouble is a key step, but it's just as crucial that we know the tendency of such an assay to kill compounds that should actually get through. Given our failure rates, we don't need to lose any more drug candidates without a good reason.
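That validation scheme, with its four categories of known drugs, boils down to the familiar trade-off between sensitivity (catching the compounds that really are toxic) and the false-alarm rate (killing compounds that should have gotten through). A minimal sketch of that arithmetic, with invented placeholder results rather than anything from a real platform:

```python
# A minimal sketch of the validation arithmetic: run known drugs through
# the hypothetical platform, then compare its toxicity calls against the
# clinical record. All the "results" below are invented placeholders.

def tox_assay_metrics(results):
    """results: list of (known_toxic_in_humans, platform_flagged_toxic) pairs."""
    tp = sum(1 for known, flagged in results if known and flagged)
    fn = sum(1 for known, flagged in results if known and not flagged)
    fp = sum(1 for known, flagged in results if not known and flagged)
    tn = sum(1 for known, flagged in results if not known and not flagged)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # toxic drugs caught
    false_alarm = fp / (fp + tn) if fp + tn else 0.0  # safe drugs wrongly killed
    return sensitivity, false_alarm

# Hypothetical test set: (truly toxic in humans?, platform flagged it?)
test_set = [(True, True), (True, True), (True, False),
            (False, False), (False, False), (False, True)]
sens, fa = tox_assay_metrics(test_set)
print("sensitivity %.2f, false-alarm rate %.2f" % (sens, fa))
```

This is why testing the whole pharmacopeia matters: with only a handful of compounds, both of these numbers come with error bars wide enough to drive a truck through.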
We're not going to have to worry about that for a while, though. DARPA is asking for people to submit proposals for up to five years of funding, contingent on milestones, and I still cannot imagine that anyone will be able to get the whole thing working in that short a period. And I think that there's still no way that any system like this will catch everything, of course (and no one seems to be promising that, fortunately). A system sufficient to do that would be like building your own in vitro human, which is a bit out of our reach. No, I'd definitely settle for just an improved look into possible tox problems - every little bit will definitely help - but only if it doesn't set off too many false alarms in the process.
Category: Toxicology
July 26, 2011
Is there something going on with patients in Alzheimer's trials that we didn't expect? There have been reports of an unexpected side effect (vasogenic edema) in several trials, for drugs that work through completely different mechanisms.
It makes some sense in the case of antibody-based therapies like bapineuzumab (where this problem first got attention) and solanezumab. After all, the immune system is pretty powerful stuff, and you could certainly imagine these sorts of side effects (either directly or from some effect of clearing out amyloid debris). As those reports indicate, the problem may lessen with time, and may be more severe in patients with the APOE4 allele, a known (but not understood) risk factor for Alzheimer's.
But this latest report is for the Bristol-Myers Squibb gamma-secretase inhibitor avagacestat (BMS-708163). That shouldn't be involved with any inflammatory/immune mechanisms, nor, really, with amyloid clearance. A secretase inhibitor should just keep new amyloid from being formed and deposited, which should be beneficial if the beta-amyloid theory of Alzheimer's is correct, which is what we're all still in the middle of deciding these days. Expensively and excruciatingly deciding.
Meanwhile, the most recent big clinical failure in this area continues to reverberate. Lilly's gamma-secretase inhibitor semagacestat, the first that went deep into the clinic, imploded when the company found that patients in the treatment group were deteriorating faster than those in the control group. Seven months on, they're still worse. What does this mean for the BMS compound targeting the same mechanism? That is the big, important, unanswerable question - well, unanswerable except by taking the thing deep into big clinical trials, which is what BMS is still committed to doing.
For more on Alzheimer's drug development - and it hasn't been pretty - scroll back in this category.
Category: Alzheimer's Disease | Toxicology
June 24, 2011
There are plenty of headlines about the recent Supreme Court decision (PDF) on suing generic drug manufacturers. But this is not so much about generic drugs, or suing people, as it is about the boundaries between state and federal law. That, actually, is why the case made it this far - that's just the sort of issue the Supreme Court is supposed to untangle. Readers may decide for themselves whether such disentangling has actually occurred.
Reglan (metoclopramide) is the drug involved here. It's been generic for many years, and for many years it's also been known to be associated with a severe CNS side effect, tardive dyskinesia. This is the same involuntary-movement condition brought on by many earlier antipsychotic medications, and it's bad news indeed. The labeling for the product has been revised several times by the FDA over the years.
In this case, the plaintiffs were prescribed metoclopramide in 2001 and 2002, and their claim was that the generic manufacturers are at fault under state tort law (in these cases, Minnesota and Louisiana). It should be noted at this point that the package insert for the drug warned at the time that tardive dyskinesia could develop, and that treatment for more than 12 weeks had not been evaluated. In 2004 and 2009, the label was strengthened to warn that treatment beyond twelve weeks should only be undertaken in rare cases. The plaintiffs both took metoclopramide for years, although this was not at issue in this case as it was brought.
What's at issue is the drug label and how it's regulated. The plaintiffs claimed that state law required a stronger safety warning than did federal law at the time, and that they thus have standing to sue. On the other hand, you have the whole process of generic drug approval. A generic company has to show that its product is equivalent to the original drug, and it then uses the exact same label information. Under federal law, the generic companies claim, they have no authority to independently change the labeling of their products.
The plaintiffs (and their lawyers) countered this argument by claiming that there were still mechanisms (the CBE, or "changes-being-effected", process, and "Dear Doctor" letters) by which the manufacturers could have changed the safety warnings on their own. The FDA, however, disputes that, and the Supreme Court deferred to the agency, saying that this is not an obviously mistaken position and there is no reason to doubt that it represents the FDA's best judgment in the matter.
That disposed of, the question comes back to federal law versus state. And in direct conflicts of that sort, state law has to yield, according to Justice Thomas for the majority:
The Court finds impossibility here. If the Manufacturers had independently changed their labels to satisfy their state-law duty to attach a safer label to their generic metoclopramide, they would have violated the federal requirement that generic drug labels be the same as the corresponding brand-name drug labels. Thus, it was impossible for them to comply with both state and federal law. And even if they had fulfilled their federal duty to ask for FDA help in strengthening the corresponding brand-name label, assuming such a duty exists, they would not have satisfied their state tort-law duty. State law demanded a safer label; it did not require communication with the FDA about the possibility of a safer label.
And that last sentence is where Justice Sotomayor's dissent breaks in. The minority holds that the generic manufacturers only showed that they might have been unable to comply with both federal and state requirements, and that this isn't enough for an impossibility defense. Sotomayor's dissent agrees, though, that the FDA does not allow the generic companies to unilaterally change their labels. But she says that this does not mean that they just have to sit there. Instead of just making sure that their labels match the brand-name labeling, she says, they likely have a responsibility to ask the FDA to consider label changes when necessary, and this wasn't done in this case. And even if you take the position that they don't have to do so, they still can do so, making the impossibility defense invalid.
This is explicitly addressed in the majority opinion - saying, in so many words, that this is a fair argument, but that they reject it. On what grounds? That it would actually
". . .render conflict pre-emption largely meaningless because it would make most conflicts between state and federal law illusory. We can often imagine that a third party or the Federal Government might do something that makes it lawful for a private party to accomplish under federal law what state law requires of it. In these cases, it is certainly possible that, had the Manufacturers asked the FDA for help, they might have eventually been able to strengthen their warning label. Of course, it is also possible that the Manufacturers could have convinced the FDA to reinterpret its regulations in a manner that would have opened the CBE process to them. Following Mensing and Demahy’s argument to its logical conclusion, it is also possible that, by asking, the Manufacturers could have persuaded the FDA to rewrite its generic drug regulations entirely or talked Congress into amending the Hatch-Waxman Amendments."
The "supremacy clause" in the Constitution, the majority says, clearly treats pre-emption conflicts as real problems, and therefore any line of argument that just makes them go away is invalid. At about this point in the majority opinion, Justice Kennedy bails out, though. Thomas and the remaining three justices have a point to make about non obstante provisions that he does not join in - and since this is not exactly a legal blog, nor am I a lawyer (although that's easier for me to forget on mornings like this one), I'm going to bypass this part of the dispute.
For those of you who are still with me, there's one more feature of interest in this case. Metoclopramide has already been the subject of an important lawsuit - in this case, going back to Wyeth, the original brand manufacturer. That's Conte v. Wyeth, which I wrote about here. The dispute in that case was not about labeling, it was over who was liable for the tardive dyskinesia in the first place. A court in California held that the originator of the drug was on the hook for that, no matter how long the compound had been generic, and the California Supreme Court refused to hear an appeal. That issue is not yet laid to rest, though, and we'll be hearing about it again.
Given these cases, though, let's say that someone takes metoclopramide and is affected by tardive dyskinesia. Who can they sue? Well, the way the labeling is now, if you take it for more than a few weeks, you're doing so at your own risk, and in the face of explicit warnings not to do so. If your physician told you to do so, you could presumably sue for malpractice.
And what about the whole labeling dispute? Well, the language of the majority decision, it seems to me, is basically a message to the FDA and the legislative branch. If you don't like this decision, it says, if it doesn't seem to make any sense, well, you have the power to do something about it. We've shown you what the law says now, and you know where to start working on it if you want it to say something else.
One more point: on the train in to work this morning, I heard the argument advanced that, because of these cases, brand-name manufacturers will probably want to consider simply exiting the market once a drug with significant warnings in its label goes generic. That would put the whole pharmacovigilance burden on the generic companies - which they won't like, but someone's going to have to soak it up. We'll see if that happens. . .
Category: Regulatory Affairs | Toxicology
March 25, 2011
The Supreme Court came down with a decision the other day (Matrixx Initiatives v. Siracusano) that the headlines say will have an impact on the drug industry. Looking at it, though, I don't see how anything's changed.
The silly-named Matrixx is the company that made Zicam, the zinc-based over-the-counter cold remedy that was such a big seller a few years back. You may or may not remember what brought it down - reports that some people suffered irreversible loss of their sense of smell after using the product. That's a steep price to pay for what may or may not have been any benefit at all (I never found the zinc-for-colds data very convincing, not that there were a lot of hard numbers to begin with).
This case grew out of a shareholder lawsuit, which alleged (as shareholder lawsuits do) that the company knew that there was trouble coming and had insufficiently informed its investors in time to keep them from losing buckets of their money. To get a little more specific about it, the suit claimed that Matrixx had received at least a dozen reports of anosmia between 1999 and 2003, but had said nothing about them - and more to the point, had continued to make positive statements about Zicam the whole way. The suit alleges that these statements were, therefore, false and misleading.
And that's what sent this case up the legal ladder, eventually to the big leagues of the Supreme Court. At what point does a company have an obligation to report such adverse events to the public and to its shareholders? Matrixx contended that the bar was statistical significance, and that anything short of that was not a "material event" that had to be addressed, but the Court explicitly shut that down in their decision:
"Matrixx’s premise that statistical significance is the only reliable indication of causation is flawed. Both medical experts and the Food and Drug Administration rely on evidence other than statistically significant data to establish an inference of causation. It thus stands to reason that reasonable investors would act on such evidence. Because adverse reports can take many forms, assessing their materiality is a fact-specific inquiry, requiring consideration of their source, content, and context. . .
Assuming the complaint’s allegations to be true, Matrixx received reports from medical experts and researchers that plausibly indicated a reliable causal link between Zicam and anosmia. Consumers likely would have viewed Zicam’s risk as substantially outweighing its benefit. Viewing the complaint’s allegations as a whole, the complaint alleges facts suggesting a significant risk to the commercial viability of Matrixx’s leading product. It is substantially likely that a reasonable investor would have viewed this information “ ‘as having significantly altered the “total mix” of information made available.’ "
I think that's a completely reasonable way of looking at the situation. (Note: that "total mix" language is from an earlier decision, Basic, Inc. v. Levinson, that also dealt with disclosure of material information). The other issue in this case is what the law calls scienter, broadly defined as "intent to deceive". As the decision explains, this can be assumed to hold when a reasonable person would find it as good an explanation of a defendant's actions as any other that could be drawn. And in this case, since Zicam was Matrixx's entire reason to exist, and since a link with permanent damage to a customer's sense of smell would surely damage sales immensely (which is exactly what happened), a reasonable person would indeed find that the company had a willingness to keep such information quiet.
But here's the puzzling part - not the Court's decision, which is short, clear, and unanimous, but the press coverage. This is being headlined as a defeat for Big Pharma, but I don't see it. We'll leave aside the fact that Matrixx is not exactly Big Pharma, although I'm sure that they were, for a while, making the Big Money selling Zicam. No, the thing is, this decision leaves things exactly as they were before. (Nature's "Great Beyond" blog has it exactly right).
It's not like statistical significance was the cutoff for press-releasing adverse events before, and now the Supreme Court has yanked that away. No, Matrixx was trying to raise the bar up to that point, and the Court wasn't having it. "The materiality of adverse event reports cannot be reduced to a bright-line rule", the decision says, and there was no such rule before. The Court, in fact, had explicitly refused another attempt to make such a rule in that Basic case mentioned above. No, Matrixx really had a very slim chance of prevailing in this one; current practice and legal precedent were both against them. As far as I can tell, the Court granted certiorari in this case just to nail that down one more time, which should (one hopes) keep this line of argument from popping up again any time soon.
By the way, if you've never looked at a Supreme Court decision, let me recommend them as interesting material for your idle hours. They can make very good reading, and are often (though not invariably!) well-written and enjoyable, even for non-lawyers. I don't exactly have them on my RSS feed (do they have one?), but when there's an interesting topic being decided, I've never regretted going to the actual text of the decision rather than only letting someone else tell me what it means.
Category: Business and Markets | Regulatory Affairs | Toxicology
March 16, 2011
Well, the nuclear crisis in Japan seems to be causing a run on potassium iodide (KI), and not just in Japan. If news reports are to be believed, people in many other regions (such as the west coast of the US and Canada) are stocking up, and some of these people may have already started dosing themselves.
Don't do that. Don't do it, for several reasons. First, as the chemists and biologists in this site's readership can tell you, it's not like KI is some sort of broad-spectrum anti-radiation pill. It can protect people against the effects of radioactive iodine-131, which is a major fission product from uranium. It does that by basically swamping out the radioactive iodine a person might have been exposed to, keeping it from being taken up into the body. Iodine tends to localize in the thyroid gland, and that uptake and local concentration is the real problem. I-131 is a beta/gamma emitter, and external exposure to it is one thing - but there's no shielding yourself from an isotope that's decaying away inside your own thyroid.
And this is why potassium iodide won't do a thing to help with the other radioactive isotopes found in nuclear reactors. That includes both the uranium and/or plutonium fuel, as well as the fission products like strontium-90 and radioactive cesium. Strontium-90 is a real problem, since it tends to concentrate in the bones (and teeth), and it has a much longer half-life than I-131. Unfortunately, calcium is so ubiquitous in the body that it's not feasible to do that uptake-blocking trick the way you can with iodide. The only effective way to deal with strontium-90 is to not get exposed to it.
Another good reason not to take KI pills is that unless you're actually being exposed to radioactive iodine, it's not going to do any good at all, and can actually do you harm. Pregnant women and people with thyroid problems, especially, should not go around gulping potassium iodide. Nothing radioactive is reaching North America yet - there's the Pacific Ocean to dilute things out along the way - which makes it very likely that more people on this side are in the process of injuring themselves by taking large unnecessary doses of iodide. This is like watching people swerve their cars off the road into the trees because they've heard that there's an accident fifty miles ahead.
Now, if I were in Japan and downwind of the Fukushima reactors, I would indeed be taking potassium iodide pills, and doing so while getting the hell out of the area. (That last part, when feasible, is the absolute best protection against radioactive exposure). But here in North America, we're already the hell out of the area. The only time to take KI pills is when a plume of radioactive iodine is on the way, and that's not the case over here. We'll have plenty of notice if anything like that happens, believe me - any event that dumps enough radioactivity to make it to California will be very noticeable indeed. Let's hope we don't see anything of the kind - and in the meantime, spare a thought for those reactor technicians who are trying to keep such things from happening. Those people, I hope, will eventually have statues raised to them.
Category: Current Events | Toxicology
February 9, 2011
Thallium poisoning? Now someone in the lab has really lost it. But that seems to be what happened in New Jersey, with a chemist from Bristol-Myers Squibb accused of doing in her husband.
A note to the Newark Star-Ledger and some other newspapers: even though a hot isotope of it is occasionally used in medicine, the thallium in this case was not radioactive. It doesn't have to be; it's a good old-fashioned chemical poison. The element enters cells readily, being taken up as if it were potassium, but once it's there it starts disrupting all kinds of processes by latching on to sulfur atoms. It was good enough for Agatha Christie to build one of her plots around, which (interestingly) seems to have publicized the method enough that several later thallium poisonings were detected or foiled thanks to her novel.
As even Wikipedia points out, thallium was "once an effective murder weapon", but the emphasis is on "once". That time is long past. Forensically, it's not the first thing that you think of, certainly, but it got picked up at autopsy in this New Jersey case. There's no innocent way for a person to end up with a high level of the element in their tissues, and with modern analytical techniques, it can't be mistaken for anything else. Honestly, anyone who believes that they have a good chance of getting away with a thallium murder is just not thinking the whole business through.
There are no details about how the crime was done, but we can assume that some kind of soluble thallium salt was put into the victim's food. Thallium chloride is the cheapest source (as usual - Primo Levi was right when he said "chlorides are rabble"), but I'm not sure how cost-conscious the accused was. She very likely got the compound from work - and even there, it wouldn't surprise me if she had to order it up on some pretext, which will certainly make the investigation easier. Thallium's not a very common metal in organic chemistry - I've seen some uses for it, but nothing compelling enough to make me want to try it.
It's odorless and tasteless stuff, by all accounts. But it's a stupid poison. I'm not going to speculate on better methods - I haven't put that much thought into the topic, really - but there have to be some, possibly with obscure and nasty natural product toxins. Not that it's so easy to get ahold of those, but the Engineer's Triangle still applies, to murder as to everything else: Good, Fast, Cheap: Pick Any Two.
So in the end, we have what looks like a vindictive (but not very competent) poisoner, a dead victim, and all kinds of trouble and fallout for the innocent bystanders in all the families concerned. A sordid business.
Category: The Dark Side | Toxicology
January 21, 2011
Well, this is a question that (I must admit) had not crossed my mind. Courtesy of Slate, though, we can now ask how we can make pharmaceuticals more environmentally friendly. No, not the manufacturing processes: this article's worried about the drugs that are excreted into the water supply.
It's worth keeping an eye on this issue, but I haven't been able, so far, to get very worked up about it. It's true that there have been many studies that show detectable amounts of prescription drugs in the waste water stream. The possible environmental effects mentioned in the article, though, are seen at much higher concentrations. I think that much of the attention given to this issue comes from the power of modern analytical techniques - if you look for things at the parts-per-billion level (or below), you'll find them. Of course, you'll also find a huge number of naturally occurring substances that are also physiologically active: can the synthetic estrogen ligands out there really compete against the huge number of phytoestrogens? I have to wonder. To me, the sanest paragraph of the article is this one:
Developing "benign-by-design" drugs poses a series of vexing challenges. In general, the qualities that make drugs effective and stable—bioactivity and resistance to degradation—are the same ones that cause them to persist disturbingly after they've done their job. And presumably even hard-core eco-martyrs (the ones who keep the thermostat at 60 all winter and renounce air travel) would hesitate to sacrifice medical efficacy for the sake of aquatic wildlife. What's more, the molecular structures of pharmaceuticals are, in the words of Carnegie Mellon chemist Terry Collins, "exquisitely specific." Typically, you can't just tack on a feature like greenness to a drug without affecting its entire design, including important medical properties.
And even that one has its problems. That "persist disturbingly" phrase makes it sound like pharmaceuticals are like little polyethylene bags fluttering around the landscape and never wearing down. But it's worth remembering that most drugs taken by humans are metabolized on their way out of the body, and most of these metabolites don't maintain the activity of the parent compound. Other organisms have similar metabolic powers - as living creatures, we've evolved a pretty robust ability to deal with constant low levels of unknown chemicals. (Here's a good chance to point out this article by Bruce Ames and Lois Swirsky Gold on that topic as it relates to cancer; many of the same points apply here).
No one can guarantee, though, that pharmaceutical residue will always be benign. As I say, it's worth keeping an eye on the possibility. But it will indeed be hard to do something about it, for just the reasons quoted above. As it is, getting a drug molecule that hits its target, does something useful when that happens, doesn't hit a lot of other things, works in enough patients to be marketable, has blood levels sufficient for a convenient dose, doesn't cause toxic effects on the side, and can be manufactured reproducibly in bulk and formulated into a stable pill. . .well, that's enough of a challenge right there. We don't actually seem to be able to do that well enough as it stands. Making the molecules completely eco-friendly at the same time. . .
Category: Drug Development | Toxicology
January 7, 2011
I wrote here about a Wall Street Journal article covering illegal street-drug labs in Europe. Well, maybe that should be not-quite-illegal, because the people involved were deliberately making compounds that the law hadn't caught up with yet.
The article mentioned David Nichols at Purdue as someone whose published work on CNS compounds had been followed/ripped off/repurposed by the street drug folks. Now Nature News has a follow-up piece by him, and he's not happy at all with the way things have been turning out:
We never test the safety of the molecules we study, because that is not a concern for us. So it really disturbs me that 'laboratory-adept European entrepreneurs' and their ilk appear to have so little regard for human safety and human life that the scant information we publish is used by them to push ahead and market a product designed for human consumption. Although the testing procedure for 'safety' that these people use apparently determines only whether the substance will immediately kill them, there are many different types of toxicity, not all of which are readily detectable. For example, what if a substance that seems innocuous is marketed and becomes wildly popular on the dance scene, but then millions of users develop an unusual type of kidney damage that proves irreversible and difficult to treat, or even life-threatening or fatal? That would be a disaster of immense proportions. This question, which was never part of my research focus, now haunts me.
Well, that's absolutely right, and it's not terribly implausible, either. The MPTP story is as good an example as you could want of what happens when you just dose whoever shows up on the street corner with that cool stuff you made in your basement lab. All we need is a side effect like that, which comes on a bit more slowly, and there you'd have it. That's one of the reasons I have such disgust for the people who are making and selling these things - they show a horrifying and stupid disregard for human life, all for the purpose of making a few bucks.
At the same time, I don't think Nichols should blame himself. His article comes across rather anguished, and I have a lot of sympathy for him. But the actions of other people, especially scum, are outside of his control, and he's taking every reasonable precaution on his end while doing some valuable work.
Homo homini lupus: the sorts of people who see basement drugs as a fun business opportunity would likely be doing something equally stupid and destructive otherwise. Dr. Nichols, you have nothing to be ashamed of, nothing to apologize for - and, honestly, nothing to keep you up at night. You're the responsible member of the human race in this story.
Category: The Central Nervous System | The Dark Side | Toxicology
November 19, 2010
Four years after the torcetrapib disaster, Merck has released some new clinical trial data on their own CETP inhibitor, anacetrapib. It's doing what it's supposed to, when added to a statin regimen: decrease LDL even more, and strongly raise HDL.
So that's good news. . .but it would actually have been quite surprising if these numbers hadn't come out that way. Pfizer's torcetrapib had already validated the CETP mechanism; it showed the same lipid effects at this stage of the game. The problems came later, and how. And that's where the worrying kicks in.
As far as I know, no one is yet quite sure why torcetrapib actually raised the death rate slightly in its Phase III treatment group. One possible mechanism was elevated blood pressure (part of a general off-target effect on the adrenals), and Merck saw no sign of that. But no matter what, we're going to have to wait for a big Phase III trial, measuring real-world cardiovascular outcomes, to know if this drug is going to fly, and we're not going to see that until 2015 at the earliest. Well, unless there's unexpected bad news at the interim - that, we'll see.
I hope it doesn't happen. If the whole LDL-bad HDL-good hypothesis is correct, you'd think that a CETP inhibitor would show a strong beneficial effect. This compound is either going to help a lot of people, or it's going to tell us something really significant that we didn't know about human lipid handling (and/or CETP). Problem is, telling us something new is almost certainly going to be the same as telling us something bad. It's still going to be a long road in this area, and good luck to everyone involved. . .
Category: Cardiovascular Disease | Clinical Trials | Toxicology
July 23, 2010
One big story from the last week was the FDA advisory panel's "No" decision on Qnexa, the drug-combo obesity therapy developed by Vivus. This is the one that's a combination of phentermine and topiramate, both of which have been around for a long time. And clinical trials showed that patients could indeed lose weight on the drug (with the required diet and exercise) - but also raised a lot of questions about safety.
And it's safety that's going to always be a worry with any obesity drug, even once you get past the (rather large) hurdle of showing efficacy. That's what took the Fen-Phen combination off the market, and what torpedoed Acomplia (rimonabant) and the other CB-1 compounds before they'd even been properly launched. The FDA panel basically agreed that Qnexa helps with weight loss, but couldn't decide how bad the side effects might be in a wider patient population, and whether they'd be worth it:
But the drug has side effects, both known and theoretical. It may cause birth defects, it may increase suicide risk, it can cause a condition called metabolic acidosis that speeds bone loss, it increases risk of kidney stones, and may have other serious effects.
"It is difficult if not impossible to weigh these issues as the clinical trials went on only for a year, and patients will use this drug for lifetime," (panel chair Kenneth) Burman said. "It is impossible to extrapolate the trial data to the wider population."
That's a problem, all right, and it's not just Vivus that has to worry about it. When the potential number of patients is so large, well, any nasty side effects that are out there will show up eventually. How do you balance all these factors? Is it possible at all? As that WebMD article correctly points out, a new obesity drug will come on the market with all kinds of labeling about how it's only for people over some nasty BMI number, the morbidly obese, people with other life-threatening complications, and so on. But that's not how it's going to be prescribed. Not after a little while. Not with all the pent-up demand for an obesity drug.
Although that's probably the worst situation, this problem isn't confined to obesity therapies - any other drug that requires long-term dosing has this hanging over it (think diabetes, for one prominent example). That brings up the question that anyone looking over clinical trial data inevitably has to face: how much are the trials telling us about the real world? After all, the only way to be sure about how a drug will perform in millions of people for ten years is to dose millions of people for ten years. No one's going to want to pay for any drugs that have been through that sort of testing, I can tell you, so that puts us right where we are today, making judgment calls based on the best numbers we can get.
The FDA itself still has that call to make on Qnexa, and they could still approve it with all kinds of restrictive labeling and follow-up requirements. What about the other obesity compound coming along, then? A lot of people are watching Arena's lorcaserin (which I wrote about negatively here and followed up on here). Arena's stock seems to have climbed on the bad news for Vivus, but I have to say that I think that's fairly stupid. Lorcaserin may well show a friendlier side-effect profile than Qnexa, but if the FDA is going to play this tight, they could just let no one through at all - or send everyone back to the clinic for ruinously expensive new trials.
As the first 5-HT2C compound to make it through, lorcaserin still worries me. A lot of people have tried that area out and failed, for one thing. And being first-to-market with a new CNS mechanism, in an area where huge masses of people are waiting to try out your drug. . .well, I don't see how you can not be nervous. I said the same thing about rimonabant, for the same reasons, and I haven't changed my opinion.
Since I got a lot of mail the last time I wrote about Arena, I should mention again that I have no position in the stock - or in any of the other companies in this space. But I could change my mind about that. If Arena runs up in advance of their FDA advisory panel in the absence of any new information, I'd consider going short (with money I could afford to lose). If I do that, I'll say so immediately.
Category: Clinical Trials | Diabetes and Obesity | Regulatory Affairs | The Central Nervous System | Toxicology
July 13, 2010
The New York Times has added to the arguments over Avandia (rosiglitazone) this morning, with an above-the-fold front page item on when its cardiovascular risks were first discovered. According to leaked documents, that may have been as early as the end of 1999 - just a few months after the drug had been approved by the FDA.
According to Gardiner Harris's article, SmithKline (as it was at the time) began a study that fall, and "disastrous" results were in by the end of the year that showed "clear risk" of cardiovascular effects. (They must have been disastrous indeed to show up in that short a time, I have to say). He quotes a memo from an executive at the company:
“This was done for the U.S. business, way under the radar,” Dr. Martin I. Freed, a SmithKline executive, wrote in an e-mail message dated March 29, 2001, about the study results that was obtained by The Times. “Per Sr. Mgmt request, these data should not see the light of day to anyone outside of GSK,” the corporate successor to SmithKline.
The only possible way I can see this being taken out of context would be if the rest of the memo talked about how poorly run the study was and how unreliable its data were - in which case, someone was an idiot for generating such numbers. But that puts the company in the situation of "idiots" being the most benign (and least legally actionable) explanation. Which is not where you want to be.
Without seeing the actual material, it's hard to comment further. But what's out there looks very, very bad.
Category: Cardiovascular Disease | Clinical Trials | Diabetes and Obesity | The Dark Side | Toxicology
July 12, 2010
Stuart Schreiber and Paul Clemons of the Broad Institute have a provocative paper out in JACS on natural products and their use in drug discovery. As many know, a good part of the current pharmacopeia is derived from natural product lead structures, and in many other cases a natural product was essential for identifying a target or pathway for a completely synthetic compound.
But are there as many of these cases as we think - or as there should be? This latest paper takes a large set of interaction data and tries to map natural product activities on to it. It's already known that there are genes all up and down the "interactome" spectrum, as you'd expect, with some that seem to be at the crossroads of dozens (or hundreds) of pathways, and others that are way out on the edges. And it's been found that disease targets tend to fall in the middle of this range, and not so much in the too-isolated or too-essential zones on either side.
That seems reasonable. But then comes the natural product activity overlay, and there the arguing can start. Natural products, the paper claims, tend to target the high-interaction essential targets at the expense of more specific disease targets. They're under-represented in the few-interaction group, and very much over-represented in the higher ones. Actually, that seems reasonable, too - most natural products are produced by organisms essentially as chemical warfare, and the harder they can hit, the better. Looking at subsets of the natural product list (only the most potent compounds, for example) did not make this effect vanish. Meanwhile, if you look at the list of approved drugs (minus the natural products on it), that group fits the middle-range interactivity group much more closely.
But what does that mean for natural products as drug leads? There would appear to be a mismatch here, with a higher likelihood of off-target effects and toxicity among a pure natural-product set. (The mismatch, to be more accurate, is between what we want exogenous chemicals to do versus what evolution has selected them to do). The paper ends up pointing out that additional sources of small molecules look to be needed outside of natural products themselves.
I'll agree with that. But I suspect that I don't agree with the implications. Schreiber has long been a proponent of "diversity-oriented synthesis" (DOS), and would seem to be making a case for it here without ever mentioning it by name. DOS is the idea of making large collections of very structurally diverse molecules, with an eye to covering as much chemical space as possible. My worries (expressed in that link above) are that the space it covers doesn't necessarily overlap very well with the space occupied by potential drugs, and that chemical space is too humungously roomy in any event to be attacked very well by brute force.
Schreiber made a pitch a few years ago for the technique, that time at the expense of small-molecule compound collections. He said that these were too simple to hit many useful targets, and now he's taking care of the natural product end of the spectrum by pointing out that they hit too many. DOS libraries, then, must be just in the right range? I wish he'd included data on some of them in this latest paper; it would be worthwhile to see where they fell in the interaction list.
Category: Drug Assays | Drug Industry History | Toxicology
May 5, 2010
You don't often get to see so direct an exchange of blows as this: Steve Nissen, of cardiology and drug-safety fame, published an editorial about GlaxoSmithKline and Avandia (rosiglitazone) earlier this year in the European Heart Journal. And GSK took exception to it - enough so that the company's head of R&D, Moncef Slaoui, wrote to the editors with a request:
". . .(the editorial) is rife with inaccurate representations and speculation that fall well outside the realm of accepted scientific debate. We strongly disagree with several key points within the editorial, most importantly those which imply misconduct on the part of GSK and have identified some of these issues below. On this basis, GSK believes that it is necessary for the journal to withdraw this editorial from the website and refrain from publishing it in hard copy, until the journal has investigated these inaccuracies and unsubstantiated allegations.
Instead of doing that, the EHJ invited Nissen to rebut GSK's views, and ended up publishing both Slaoui's letter and Nissen's reply, while leaving the original editorial up as well. (Links are PDFs, and are courtesy of Pharmalot). Looking over the exchange, I think each of the parties scores some points - but I have to give the decision to Nissen, because the parts that he wins are, to my mind, more important - both for a discussion of Avandia's safety and of GSK's conduct.
For example, Slaoui disagreed strongly with Nissen's characterization of the company's relations with a coauthor of his, Dr. John Buse. Nissen referred to him as a prominent diabetes expert who had been pressured into signing an agreement barring him from publicly expressing his safety concerns, but Slaoui countered by saying:
The document that Dr Buse signed was not an agreement barring him from speaking but was a factual correction regarding data, which did not bar him from speaking at all. In fact, Dr Buse subsequently communicated his views regarding the safety of rosiglitazone to FDA.
Nissen's reply is considerably more detailed:
The intimidation of Dr John Buse by GSK was fully described in a report issued by US Senate Committee on Finance.3 The Senate Report quotes an e-mail message from Dr Buse to me dated 23 October 2005 following publication of our manuscript describing the risks of the diabetes drug muraglitazar. In that e-mail, Buse stated: ‘Steve: Wow! Great job on the muraglitazar article. I did a similar analysis of the data at rosiglitazone’s initial FDA approval based on the slides that were presented at the FDA hearings and found a similar association of increased severe CVD events. I presented it at the Endocrine Society and ADA meetings that summer. Immediately the company’s leadership contact (sic) my chairman and a short and ugly set of interchanges occurred over a period of about a week ending in my having to sign some legal document in which I agreed not to discuss this issue further in public. I was certainly intimidated by them but frankly did not have the granularity of data that you had and decided that it was not worth it’. In an e-mail to GSK, Dr Buse wrote: ‘Please call off the dogs. I cannot remain civilized much longer under this kind of heat’
This, to me, looks like a contrast between legal language and reality, and in this case, I'd say reality wins. The same sort of thing occurs when the discussion turns to the incident where a copy of Nissen's original meta-analysis of Avandia trials was faxed to GSK while it was under review at the NEJM. Nissen characterizes this as GSK subverting the editorial process by stealing a copy of the manuscript, and Slaoui strongly disagrees, pointing out that the reviewer faxed it to them on his own. And that appears to be true - but how far does that go? GSK knew immediately, of course, that this was a manuscript that they weren't supposed to have, but it was then circulated to at least forty people at the company, where it was used to prepare the public relations strategy for the eventual NEJM publication. I don't think that GSK committed the initial act of removing the manuscript from the journal's editorial process - but once it had been, they took it and ran with it, which doesn't give them much ethical high ground on which to stand.
Many other issues between the two letters are matters of opinion. Did enough attention get paid to the LDL changes seen in Avandia patients? Did the lack of hepatotoxicity (as seen in the withdrawn first drug in this class) keep people from looking closely enough at cardiac effects? Those questions can be argued endlessly. But some of GSK's conduct during this whole affair is (unfortunately for them) probably beyond argument.
Category: Cardiovascular Disease | Clinical Trials | Diabetes and Obesity | Toxicology | Why Everyone Loves Us
March 10, 2010
The Supreme Court has agreed to hear a vaccine-liability case, in an attempt to untangle conflicting lower court rulings. This all turns on the 1986 act that shields manufacturers from liability suits and a followup law that establishes a separate compensation system for injuries. A Georgia Supreme Court ruling has recently held that such suits can go on in state court, which seems to contradict other court decisions (and the intent of the 1986 law as well, you'd think).
I agree with Jim Edwards of BNET that although this particular case involves the DPT vaccine, the vaccines-cause-autism crowd will be watching this one very closely. Lawsuits will no doubt be ready to fly later this year in case the Supreme Court breaks that way - which seems to me unlikely, but I'm no judge. . .
Category: Autism | Regulatory Affairs | Toxicology
February 24, 2010
Well, this is interesting. Back when Steve Nissen was about to publish his meta-analysis on the safety of Avandia (rosiglitazone), he met with several GlaxoSmithKline executives before the paper came out. At the time, GSK was waiting on data from the RECORD study, which was trying to address the same problem (unconvincingly, for most observers, in the end). Nissen had not, of course, shown his manuscript to anyone at GSK, and for their part, the execs had not seen the RECORD data, since it hadn't been worked up yet.
Well, not quite, perhaps on both counts. As it happens, a reviewer had (most inappropriately) faxed a copy of Nissen's paper-in-progress to the company. And GSK's chief medical officer managed to refer to the RECORD study in such a way that it sounds as if he knew how it was coming out. How do we know this? Because Nissen secretly taped the meeting - legal in Ohio, as long as one party knows the taping is going on. At no point does anyone from GSK give any hint that they knew exactly what was in Nissen's paper. Here's some of it:
Dr. Krall asked Dr. Nissen if his opinion of Avandia would change if the Record trial — a large study then under way to assess Avandia’s risks to the heart — showed little risk. Dr. Krall said he did not know the results of Record.
“Let’s suppose Record was done tomorrow and the hazard ratio was 1.12. What does...?” Dr. Krall said.
“I’d pull the drug,” Dr. Nissen answered quickly.
The interim results of Record were hastily published in The New England Journal of Medicine two months later and showed that patients given Avandia experienced 11 percent more heart problems than those given other treatments, for a hazard ratio of 1.11. But the trial was so poorly designed and conducted that investigators could not rule out the possibility that the differences between the groups were a result of chance.
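As an aside, the reason a hazard ratio of 1.11 couldn't rule out chance comes down to the width of its confidence interval. Here's a minimal sketch of the standard log-scale normal approximation for a ratio of event rates - the event counts below are invented for illustration and are not the actual RECORD numbers:

```python
import math

def rate_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Ratio of two event rates with an approximate 95% confidence
    interval, using the usual normal approximation on the log scale."""
    ratio = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a + 1 / events_b)  # SE of log(ratio)
    lower = math.exp(math.log(ratio) - z * se)
    upper = math.exp(math.log(ratio) + z * se)
    return ratio, lower, upper

# Hypothetical counts chosen only to give a ratio near 1.11:
ratio, lower, upper = rate_ratio_ci(200, 2200, 180, 2200)
print(f"ratio {ratio:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
```

With counts like these, the interval runs from well below 1.0 to well above it - exactly the "can't rule out chance" situation, and a poorly conducted trial only widens the uncertainty further.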
Somehow, I don't think that many pharma executives are going to agree to meetings with Nissen in his office in Cleveland after this. But I certainly don't blame him for making the tape, either.
Category: Cardiovascular Disease | Clinical Trials | Diabetes and Obesity | The Dark Side | Toxicology
February 22, 2010
The Senate report that leaked on Avandia (rosiglitazone) over the weekend has made plenty of headlines. It quotes an internal FDA report that recommends flatly that the drug be removed from the market, since its beneficial effects can be achieved by use of the competing PPAR drug Actos (pioglitazone), which doesn't seem to have the same cardiovascular risks. The two drugs have been compared (retrospectively) head to head, and Avandia definitely seems to have come out as inferior due to safety concerns.
There had been worries for several years about side effects, but the red flag went up for good in 2007, and the arguing has not ceased since then. According to another FDA document in the Senate report, there are "multiple conflicting opinions" inside the agency about what to do. The agency ordered GSK to set up a prospective head-to-head trial of Avandia and Actos, but other staffers insist that the whole idea is unethical. If the cardiovascular risks are real, they argue, then you can't expose people to Avandia just to find out how much worse it is. The trial is enrolling patients, but will take years to generate data, and Avandia will be generic by the time it reports, anyway. (Presumably, the only reason GSK is running it is because the drug would be taken off the market for sure if they didn't).
The FDA's internal debate is one issue here (as is the follow-up question about whether the agency should be restructured to handle these questions differently). But another one is GlaxoSmithKline's response to all the safety problems. Says that New York Times article:
In 1999, for instance, Dr. John Buse, a professor of medicine at the University of North Carolina, gave presentations at scientific meetings suggesting that Avandia had heart risks. GlaxoSmithKline executives complained to his supervisor and hinted of legal action against him, according to the Senate inquiry. Dr. Buse eventually signed a document provided by GlaxoSmithKline agreeing not to discuss his worries about Avandia publicly. The report cites a separate episode of intimidation of investigators at the University of Pennsylvania.
GlaxoSmithKline said that it “does not condone any effort to silence” scientific debate, and that it disagrees with allegations that it tried to silence Dr. Buse. Still, it said the situation “could have been handled differently.”
Well, yeah, I should think so. I don't know what the state of the evidence was as early as 1999, but subsequent events appear to have vindicated Buse and his concerns. And while you can't just sit back and let everyone take shots at your new drug, you also have to be alert to the possibility that some of the nay-sayers might be right. We honestly don't know enough about human toxicology to predict what's going to happen in a large patient population very well, and companies need to be honest with the public (and themselves) about that.
Category: Diabetes and Obesity | Regulatory Affairs | Toxicology
January 20, 2010
There's probably a lot of undiscovered information sitting out there in clinical trial data sets. And while I was just worrying the other day about people with no statistical background digging through such things, I have to give equal time to the flip side: having many different competent observers taking a crack at these numbers would, in fact, be a good thing.
Here's one effort of that sort, as detailed in Molecular Systems Biology. The authors have set up a database of all the side-effect information released through package inserts of approved drugs, which was much more of a pain than it sounds like, since the format of this information isn't standardized.
Looking over their data, the drugs with the highest number of side effects are the central nervous system agents, which makes sense. Many of these are polypharmacological; I'm almost surprised they aren't even worse by a wider margin. Antiparasitics have the fewest side effects (possibly because some of these don't even have to be absorbed?), followed by "systemic hormonal preparations". To be fair, the CNS category has the largest number of drugs in it, and those other two have the fewest, so this may be just a sampling problem. At a glance, one category that seems to have a disproportionate number of side effects, compared to its number of approved drugs, is the "genitourinary/sex hormone" class, with musculoskeletal agents also making a stronger showing than their numbers might indicate.
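That sampling caveat is easy to illustrate: dividing raw side-effect counts by the number of drugs in each category can reshuffle the ranking entirely. The numbers below are invented for the sake of the example, not taken from the paper:

```python
# Hypothetical illustration: raw side-effect totals can mislead when
# categories differ greatly in size, so normalize per approved drug.
categories = {
    # category: (total reported side effects, number of approved drugs)
    "CNS agents": (4800, 240),
    "Antiparasitics": (300, 25),
    "Genitourinary/sex hormones": (1400, 45),
}

# Rank by side effects per drug rather than by raw totals.
for name, (effects, n_drugs) in sorted(
        categories.items(),
        key=lambda kv: kv[1][0] / kv[1][1],
        reverse=True):
    print(f"{name}: {effects / n_drugs:.1f} side effects per drug")
```

With these made-up figures, the CNS class tops the raw count, but the smaller genitourinary/sex-hormone class comes out ahead once you normalize - the same kind of shift the paper's per-category comparison has to contend with.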
+ TrackBacks (0) | Category: Clinical Trials | Toxicology
December 23, 2009
Another interesting approach to Alzheimer's therapy has just taken a severe jolt in the clinic. Elan and Transition Therapeutics were investigating ELND005, also known as AZD-103, which was targeted at breaking down amyloid fibrils and allowing the protein to be cleared from the brain.
Unfortunately, the two highest-dose patient groups experienced a much greater number of serious adverse events - including nine deaths, which is about as severe as things get - and those doses have been dropped from the study. I'm actually rather surprised that the trial is going on at all, but the safety data for the lowest dose (250mg twice daily) appear to justify continuing. The higher doses were 1g and 2g b.i.d., and the fact that they were going up that high makes me think that the chances of success at the lowest dose may not be very good.
So what is this drug? Oddly enough, it's one of the inositols, the scyllo isomer. Several animal studies had shown improvements with this compound, and there were promising results for Parkinson's as well. At the same time, scyllo-inositol has been implicated as a marker of CNS pathology when it's found naturally, so it's clearly hard to say just what's going on. As it always is with the brain. . .
+ TrackBacks (0) | Category: Alzheimer's Disease | Clinical Trials | The Central Nervous System | Toxicology
December 22, 2009
Courtesy of Pharmalot (and my mail!), I note this alarming story from London. GE Healthcare makes a medical NMR contrast agent, a gadolinium complex marketed under the name of Omniscan. (They picked it up when they bought Amersham a few years ago). Henrik Thomsen, a Danish physician, had noted what may be an association between its use and a serious kidney condition, nephrogenic systemic fibrosis, and he gave a short presentation on his findings two years ago at a conference in Oxford.
For which GE is suing him. For libel. Update: the documents of the case can be found here. They claim that his conference presentation was defamatory, and continue to insist on damages even though regulatory authorities in both the UK and in the rest of Europe have reviewed the evidence and issued warnings about Omniscan's use in patients with kidney trouble. Over here in the US, the FDA had issued general advisories about contrast agents, but an advisory panel recently recommended that Omniscan (and other chemically related gadolinium complexes) be singled out for special warnings. From what I can see, Thomsen should win his case - I hope he does, and I hope that he gets compensatory damages from GE for wasting his time when he could have been helping patients.
And this isn't the only case going on there right now. Author Simon Singh is being sued by the British Chiropractic Association for describing, in a published article, chiropractic claims of being able to treat things like asthma as "bogus". Good for him! But he's still in court, and the end is not in sight.
This whole business is partly a function of the way that GE and the chiropractors have chosen to conduct business, but largely one of England's libel laws. The way things are set up over there, the person who brings suit starts out with a decided edge, and over the years plenty of people have taken advantage of the tilted field. There's yet another movement underway to change the laws, but I can recall others that apparently have come to little. Let's hope this one succeeds, because I honestly can't think of a worse venue to settle a scientific dispute than a libel suit (especially one being tried in London).
So, General Electric: is it now your company policy to sue people over scientific presentations that you don't like? Anyone care to go on record with that one?
+ TrackBacks (0) | Category: Analytical Chemistry | Current Events | The Dark Side | Toxicology
November 28, 2009
I asked recently for suggestions on the best books on med-chem topics, and a lot of good ideas came in via the comments and e-mail. Going over the list, the most recommended seem to be the following:
For general medicinal chemistry, you have Bob Rydzewski's Real World Drug Discovery: A Chemist's Guide to Biotech and Pharmaceutical Research. Many votes also were cast for Camille Wermuth's The Practice of Medicinal Chemistry. For getting up to speed, several readers recommend Graham Patrick's An Introduction to Medicinal Chemistry. And an older text that has some fans is Richard Silverman's The Organic Chemistry of Drug Design and Drug Action.
Process chemistry is its own world with its own issues. Recommended texts here are Practical Process Research & Development by Neal Anderson and Process Development: Fine Chemicals from Grams to Kilograms by Stan Lee (no, not that Stan Lee) and Graham Robinson.
Case histories of successful past projects are found in Drugs: From Discovery to Approval by Rick Ng and also in Walter Sneader's Drug Discovery: A History.
Another book that focuses on a particular (important) area of drug discovery is Robert Copeland's Evaluation of Enzyme Inhibitors in Drug Discovery.
For chemists who want to brush up on their biology, readers recommend Terrence Kenakin's A Pharmacology Primer, Third Edition: Theory, Application and Methods and Molecular Biology in Medicinal Chemistry by Nogrady and Weaver.
Overall, one of the most highly recommended books across the board comes from the PK end of things: Drug-like Properties: Concepts, Structure Design and Methods: from ADME to Toxicity Optimization by Kerns and Di. For getting up to speed in this area, there's Pharmacokinetics Made Easy by Donald Birkett.
In a related field, the standard desk reference for toxicology seems to be Casarett & Doull's Toxicology: The Basic Science of Poisons. Since all of us make a fair number of poisons (as we eventually discover), it's worth a look.
There's a first set - more recommendations will come in a following post (and feel free to nominate more worthy candidates if you have 'em).
+ TrackBacks (0) | Category: Book Recommendations | Drug Development | Life in the Drug Labs | Pharmacokinetics | The Scientific Literature | Toxicology
November 17, 2009
There's a new paper out in Nature that presents an intriguing way to look for off-target effects of drug candidates. The authors (a large multi-center team) looked at a large number of known drugs (or well-characterized clinical candidates) and their activity profiles. They then characterized the protein targets by the similarities of the molecules that were known to bind to them.
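The core idea - score a molecule against a target by comparing it to the whole set of ligands already known to bind that target - can be sketched in a few lines. This is a toy version only: the paper's actual method (the Similarity Ensemble Approach) uses real chemical fingerprints and a BLAST-like statistical correction, while the "fingerprints" below are just made-up feature sets.

```python
# Toy sketch of ligand-set similarity scoring, in the spirit of the
# paper's approach. Real fingerprints would come from a cheminformatics
# package; these frozensets of integers are stand-ins for illustration.

def tanimoto(a: frozenset, b: frozenset) -> float:
    """Tanimoto coefficient between two binary feature sets."""
    return len(a & b) / len(a | b)

def raw_score(query: frozenset, ligand_set: list, threshold: float = 0.3) -> float:
    """Sum of pairwise similarities above a cutoff: a crude measure of
    how much the query resembles a target's known ligands."""
    return sum(s for lig in ligand_set
               if (s := tanimoto(query, lig)) >= threshold)

# Hypothetical known ligands for some target, plus a query drug:
target_ligands = [frozenset({1, 2, 3, 4}), frozenset({2, 3, 5}), frozenset({8, 9})]
query = frozenset({1, 2, 3})

print(raw_score(query, target_ligands))  # 0.75 + 0.5 = 1.25
```

Repeating that scoring for every drug against every target's ligand set is what generates the huge matrix of candidate associations the authors then had to filter.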
That gave a large number of possible combinations - nearly a million, actually, and in most cases, no correlations showed up. But in about 7,000 examples, a drug matched some other ligand set to an interesting degree. On closer inspection, some of these off-target effects turned out to be already known (but had not been picked up during their initial searching using the MDDR database). Many others turned out to be trivial variations on other known structures.
But what was left over was a set of 3,832 predictions of meaningful off-target binding events. The authors took 184 of these out to review them carefully and see how well they held up. 42 of these turned out to be already confirmed in the primary literature, although not reported in any of the databases they'd used to construct the system - that result alone is enough to make one think that they might be on the right track here.
Of the remaining 142 correlations, 30 were experimentally feasible to check directly. Of these, 23 came back with inhibition constants less than 15 micromolar - not incredibly potent, but something to think about, and a lot better hit rate than one would expect by chance. Some of the hits were quite striking - for example, an old alpha-blocker, indoramin, showed a strong association with dopamine receptors, and turned out to be an 18 nM ligand for D4, which is better than its affinity for the alpha receptors themselves. In general, they uncovered a lot of new GPCR activities for older CNS drugs, which doesn't surprise me, given the polypharmacy that's often seen in that area.
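Just how much better than chance is 23 of 30? A quick binomial tail calculation makes it concrete - note that the 5% background hit rate is my own (generous) assumption for illustration, not a figure from the paper:

```python
# Probability of seeing >= 23 hits out of 30 by chance, assuming a
# (generously high, assumed) 5% background rate for a random compound
# inhibiting a random target at < 15 uM.
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_chance = binom_tail(30, 23, 0.05)
print(f"P(>=23 hits of 30 by chance) ~ {p_chance:.1e}")
```

Even with that generous background rate, the probability comes out vanishingly small - the enrichment is real, whatever the method's false-positive problems.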
But they found four examples of compounds that jumped into completely new target categories. Rescriptor (delavirdine), a reverse transcriptase inhibitor used against HIV, showed a strong score against histamine subtypes, and turned out to bind H4 at about five micromolar. That may not sound like much, but the drug's blood levels make that a realistic level to think about, and its side effects include a skin rash that's just what you might expect from such off-target binding.
There are some limitations. To their credit, the authors mention in detail a number of false positives that their method generated - equally compelling predictions of activities that just aren't there. This doesn't surprise me much - compounds can look quite similar to existing classes and not share their activity. I'm actually a bit surprised that their method works as well as it does, and look forward to seeing refined versions of it.
To my mind, this would be an effort well worth some collaborative support by all the large drug companies. A better off-target prediction tool would be worth a great deal to the whole industry, and we might be able to provide a lot more useful data to refine the models used. Anyone want to step up?
Update: be sure to check out the comments section for other examples in this field, and a lively debate about which methods might work best. . .
+ TrackBacks (0) | Category: Drug Assays | In Silico | Toxicology
September 22, 2009
Senator Charles Grassley of Iowa has sent the FDA a letter asking if the agency has sufficiently considered adverse events from statin drugs. I've been unable to find the text of the letter, but here's a summary at Business Week. (Grassley's own list of press releases, like those of most other senators and representatives, is a long, long list of all the swag and booty that he's been able to cart back to his constituents.)
His main questions seem to be: has the agency seen any patterns in adverse event reports? Is there reason to believe that such events are being under-reported? Is there information from other countries where the drugs are prescribed that might tell us things that we're missing here?
Business Week's reporter John Carey has been on this are-statins-worse-than-they-appear beat for some time now, and it wouldn't surprise me if someone from Grassley's office sent him a copy of the Senator's letter on that basis. Those considerations aside, are statins really worse than they appear, or not?
The muscle side effects of the drugs (rhabdomyolysis) have been known for some time, and it's clear that some patients are more sensitive to this than others. But there are other possible side effects kicking around, such as cognitive impairment. The evidence for that doesn't seem very strong to me, at first glance, and could (as far as I can see) come out the other way just as easily. In the same way, I haven't seen any compelling evidence for increased risk of cancer, although it's quite possible that they may have effects (good and bad) when combined with existing therapies.
The one thing that you can say is that the epidemiological data we have for statin treatment is probably about as good as we're going to get for anything. These drugs are so widely prescribed, and have now been on the market for so many years, that the amount of data collected on them is huge. If that data set is inadequate, then so are all the others. I'm not sure what Sen. Grassley is up to with his letter, but that's something he should probably keep in mind. . .
+ TrackBacks (0) | Category: Cardiovascular Disease | Regulatory Affairs | Toxicology
August 28, 2009
I wrote years ago on this blog about REACH, the European program to (as the acronym has it) Register, Evaluate, Authorize and Restrict Chemical substances. (I'm not sure where that second R got off to in there). This is a massive effort to do a sort of catch-up for chemicals that were introduced before modern regulatory regimes, and it involves fresh toxicological investigations and an absolute blizzard of paperwork. This program was launched in 2006, after years of wrangling, and the last few years have been spent in yet more wrangling about its implementation.
The worried voices are getting louder. Thomas Hartung (a toxicologist at Johns Hopkins and the University of Konstanz) and his co-author, Italian chemist Costanza Rovida, now say that the program is heading off the cliff. (Their full report is here as a PDF). In Nature, the authors have a commentary that summarizes their findings. They estimate that around 68,000 chemical substances will fall under the program, and when they run the numbers on how those will need to be tested, well. . .
"Our results suggest that generating data to comply with REACH will require 54 million vertebrate animals and cost 9.5 billion Euros over the next 10 years. This is 20 times more animals and 6 times the costs of the official estimates. By comparison, some 90,000 animals are currently used every year for testing new chemicals in Europe, costing the industry some 60 million Euros per year. Without a major investment into high-throughput methodologies, the feasibility of the programme is under threat — especially given that our calculations represent a best-case scenario. In 15 months' time, industry has to submit existing toxicity data and animal-testing plans for the first of three groups of old chemicals."
These are staggering numbers. There are not enough labs, not enough toxicologists, and not enough rats (well, usable rats) in Europe to even come close to realizing such an effort. It turns out that the biggest expense, on both the animal and money counts, is reproductive toxicity testing, which apparently is mandated to extend into a second generation of rodents. That works out to an average of 3,200 rats sacrificed per chemical evaluated, so you can see how things get out of hand. The authors are calling for an immediate re-evaluation of the reproductive toxicity testing protocols, arguing that the cost/benefit ratio is wildly out of whack, and that the rate of false positives (especially involving second-generation studies) is high enough to end up scaring a lot of people for no sound reason at all.
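The scale of those figures is easy to sanity-check with back-of-envelope arithmetic (illustrative only - the per-study breakdown is not Hartung and Rovida's, just the headline numbers quoted above):

```python
# Back-of-envelope check on the REACH estimates quoted above.
total_animals = 54_000_000       # authors' 10-year estimate
total_chemicals = 68_000         # substances estimated to fall under REACH
rats_per_repro_study = 3_200     # average for a two-generation repro study

# Averaged over every chemical, the animal burden per substance:
print(total_animals / total_chemicals)        # ~794 animals per chemical

# How many chemicals' worth of two-generation repro studies would
# consume the entire animal budget:
print(total_animals / rats_per_repro_study)   # ~16,875 chemicals
```

In other words, if even a quarter of the 68,000 substances end up needing full two-generation reproductive studies, that line item alone swallows the whole 54-million-animal estimate - which is why the authors single it out.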
I'm absolutely with them on this. The program seems like one of these "No cost is too high for absolute safety" ideas that make politicians and regulators happy, but don't do nearly as much good for society as you'd think. (It's worth noting that Hartung and Rovida actually support the idea of REACH, but think that its implementation has gone off the rails). One beneficial side effect, as the authors mention, is that the whole mess will probably end up advancing the state of the art in toxicology a good deal, partly in ways to figure out how to avoid the coming debacle.
Not surprisingly, the European Chemicals Agency is disputing the study, saying that they don't anticipate the numbers of chemicals registered (or the costs associated with studying them) to differ much from their estimates. If I can suggest it, though, I would like to mention that the history of large regulatory programs in general does not provide much support for that optimistic forecast. At all. To put it in the mildest possible terms. We'll see who's right, though, won't we?
+ TrackBacks (0) | Category: Regulatory Affairs | Toxicology
August 24, 2009
Eli Lilly announced some bad news last week when they dropped arzoxifene, a once-promising osteoporosis treatment (and successor to Evista (raloxifene), which has been one of the company's big successes).
If this drug had been found ten or fifteen years ago, it might have made it through. But the trial data showed that while it made its primary endpoints (reducing vertebral fractures, for example), it missed several secondary ones (such as, well, non-vertebral fractures). And the side effect profile wasn't good, either. That combination meant that the drug was going to face a hard time at the FDA for starters, and even if it somehow got through, it would have a hard time competing with generic Fosamax (and Lilly's own Evista).
So down it went, and it sounds like the right decision to make. Unfortunately, given the complexities of estrogen receptor signaling, the clinic is the only place that you can find out about such things. And there are no short, inexpensive clinical trials in osteoporosis, so the company had to run one of the big, expensive ones only to find out that arzoxifene didn't quite measure up. That's why this is a territory for the deep-pocketed, or (at the very least) for those who hope to do a deal with them at the first opportunity.
One more point is worth emphasizing. Take a look at the structures of the two compounds (from those Wikipedia links in the first paragraph). Pretty darn similar, aren't they? Arzoxifene is clearly a follow-up drug in every way - modified a bit here and there, but absolutely in the same family. A "me-too" drug, in other words, an attempt to come up with something that works similarly but sands off some of the rough edges of the previous compound. But anyone who thinks that development of a follow-up compound is easy - and a lot of people outside the industry do - should consider what happened to this one.
+ TrackBacks (0) | Category: "Me Too" Drugs | Clinical Trials | Drug Development | Toxicology
June 23, 2009
What's really going on with Medarex and ipilimumab? The company made news over the weekend with a press release from the Mayo Clinic, detailing what appears to be a substantial response in two prostate cancer patients. But the more you look at the story, the harder it is to figure out anything useful.
As this WebMD piece makes clear, this study is not a trial of ipilimumab as a single agent. The patients are undergoing prolonged androgen ablation, the testosterone-suppressing therapy that's been around for many years and is one of the standard options for prostate cancer. The trial is to see if ipilimumab has any benefit when it's added to this protocol - basically, to see if it can advance the standard of care a bit.
WebMD quotes Derek Raghavan at the Cleveland Clinic as saying that androgen ablation can sometimes have dramatic results in patients with locally advanced prostate cancer, so it's impossible to say if ipilimumab is helping or not. That's why we run clinical trials, you know, to see if there's a real effect across a meaningful number of patients. But (as this AP story notes) we don't know how many patients are in this particular study, what its endpoints are, or really anything about its design. All we know is that two patients opted out of it for surgery instead. (Credit goes to the AP's Linda Johnson for laying all this out).
Ipilimumab is an antibody against CTLA-4, which is an inhibitory regulator of lymphocytes. Blocking it should, in theory, turn these cells loose to engage tumor cells more robustly. (It also turns them loose to engage normal tissue more robustly - most of the side effects seem to be autoimmune responses like colitis, which can be very severe.) The antibody has been studied most thoroughly in melanoma, where it does seem to be of value, although the side effect profile is certainly complicating things.
So overall, I think it's way too early to conclude that Medarex has hit on some miracle prostate cure. This press release, in fact, hasn't been too helpful at all, and the Mayo people really should know better.
+ TrackBacks (0) | Category: Clinical Trials | Drug Development | Press Coverage | Toxicology
May 6, 2009
Here's a good example of why all of us in the industry tiptoe into Phase I trials, the first-in-man studies. A company called SGX, recently acquired by Eli Lilly, has been developing a kinase inhibitor (SGX523) targeting the enzyme cMET. That's a well-known anticancer drug target, with a lot of activity going on in the space.
SGX's specialty is fragment-based design, and they've spoken several times at meetings about the SGX523 story. The starting point for the drug seems to have come out of X-ray crystallographic screening (the company has significant amounts of X-ray synchrotron beamline time, which you're going to need if you choose this approach). They refined the lead, in what (if you believe their presentations) was a pretty short amount of time, to the clinical candidate. It seems to have had reasonable potency and pharmacokinetics, very good oral bioavailability, no obvious liabilities with metabolizing enzymes or the dreaded hERG channel. And it was active in the animal models, however much you can trust that in oncology.
So off to the clinic they went. Phase I trials started enrolling patients in January of last year - but by March, the company had to announce that all dosing had been halted. That was fast, but there was a mighty good reason. The higher doses were associated with acute renal failure, something that most certainly hadn't been noticed in the mouse models, or the rats, or the dogs. It turns out that the compound (or possibly a metabolite, it's not clear to me) was crystallizing out in the kidneys. Good-looking crystals, too, I have to say. I can't usually grow anything like that in the lab; maybe I should try crystallizing things out from urine.
Needless to say, obstructive nephropathy is not what you look for in a clinical candidate. There's no market for instant kidney stones, especially when they appear all over the place at the same time. The patients in the Phase I trial did recover; kidney function was restored after dosing was stopped and the compound had a chance to wash out. But SGX523, which was (other than its unlovely structure) a perfectly reasonable-looking drug candidate, is dead. It didn't take long.
+ TrackBacks (0) | Category: Cancer | Clinical Trials | Toxicology
May 5, 2009
Back when I joined the first drug company I ever worked for, the group in the lab next door was working on an enzyme called ACAT, acyl CoA:cholesterol acyltransferase. It’s the main producer of cholesterol esters in cells, and is especially known to be active in the production of foam cells in atherosclerosis. It had already been a drug target for some years before I first heard about it, and has remained one.
It hasn’t been an easy ride. Since 1990, several compounds have failed in the clinic or in preclinical tox testing. The most recent disappointment was in 2006, when pactimibe (Daiichi Sankyo) not only failed to perform against placebo, but actually made things slightly worse.
Lipid handling is a tough field, because every animal species handles it slightly differently. There are all sorts of rabbit strains and hamster models and transgenic mice, but you're never really sure until you get to humans. Complicating the story has been the discovery that there are two ACATs. ACAT-1 is found in macrophages (and the foam cells that they turn into) and many other tissues, and ACAT-2 is found in the intestine and in the liver. Which one to inhibit is a good question - the first might have a direct effect on atherosclerotic plaque formation, while the second could affect general circulating lipid levels. Pactimibe hits both about equally, as it turns out.
Now a second study of that drug has been published this spring. This one was going on at the same time as the earlier reported one, and was stopped when those results hit, but the data were in good enough shape to be worked up, and the company paid for the continued analysis. The new results look at patients with familial hypercholesterolemia, who got pactimibe along with the standard therapies. Unfortunately, the numbers are of a piece with the earlier ones: the drug did not help, and actually seemed to increase arterial wall thickness. I think it's safe to say, barring some big pharmacological revelation, that ACAT inhibitors are a dead end for atherosclerosis.
I bring this up for two reasons. One is that the group that was working next door to me on ACAT was the same group that discovered (quite by accident) the cholesterol absorption inhibitor ezetimibe, known as Zetia (and as half of Vytorin). Although its future is very much in doubt, it's for sure that that compound has been a lot more successful than any ACAT inhibitor. The arguing goes on about how helpful it's been (and will go on for another couple of years, until we see the next trial results), but it's already made it further than ACAT.
And that's actually my second point. I suspect that almost no one in the general public has ever heard of ACAT at all. But it's been the subject of a huge amount of research, of time and work and money. And while we've learned more about lipid handling in humans, which is always valuable, the whole effort has been an utter loss as far as any financial return. I have no good way of estimating the direct costs (and even worse, the opportunity costs) involved with this target, but they surely add up to One Hell Of A Lot Of Money. Which is gone, and gone with hardly a sound outside the world of drug development. And this happens all the time.
+ TrackBacks (0) | Category: Cardiovascular Disease | Clinical Trials | Drug Development | Drug Industry History | Toxicology
May 1, 2009
One of Merck’s less wonderful recent experiences was the rejection of Cordaptive, which was an attempt to make a niacin combination for the cardiovascular market. Niacin would actually be a pretty good drug to improve lipid profiles if people could stand to take the doses needed. But many people experience a burning, itchy skin flush that’s enough to make them give up on the stuff. And that’s too bad, because it’s the best HDL-raising therapy on the market. It also lowers LDL, VLDL, free fatty acids, and triglycerides, which is a pretty impressive spectrum. So it’s no wonder that Merck (and others) have tried to find some way to make it more tolerable.
A new paper suggests that everyone has perhaps been looking in the wrong place for that prize. A group at Duke has found that the lipid effects and the cutaneous flushing are mechanistically distinct, way back at the beginning of the process. There might be a new way to separate the two.
Niacin’s target seems to be the G-protein coupled receptor GPR109A – and, unfortunately, that seems to be involved in the flushing response, since both that and the lipid effects disappear if you knock out the receptor in a mouse model. The current model is that activation of the receptor produces the prostaglandin PGD2 (among other things), and that’s what does the skin flush, when it hits its own receptor later on. Merck’s approach to the side effect was to block the PGD2 receptor by adding an antagonist drug for it along with the niacin. But taking out the skin flush at that point means doing it at nearly the last possible step.
The Duke team has looked closely at the signaling of the GPR109A receptor and found that beta-arrestins are involved (they’ve specialized in this area over the last few years). The arrestins are proteins that modify receptor signaling through a variety of mechanisms, not all of which are well understood. We’ve known about signaling through the G-proteins for many years (witness the name of the whole class of receptors), but beta-arrestin-driven signaling is a sort of alternate universe. (GPCRs have been developing quite a few alternate universes – the field was never easy to understand, but it’s becoming absolutely baroque).
As it turns out, mice that are deficient in either beta-arrestin 1 or beta-arrestin 2 show the same lipid effects in response to niacin dosing as normal mice. But the mice lacking much of their beta-arrestin 1 protein show a really significant loss of the flushing response, suggesting that it’s mediated through that signaling pathway (as opposed to the “normal” G-protein one). And a known GPR109A ligand that doesn’t seem to cause so much skin flushing (MK-0354) fit the theory perfectly: it caused G-protein signaling, but didn’t bring in beta-arrestin 1.
So the evidence looks pretty good here. This all suggests that screening for compounds that hit the receptor but don’t activate the beta-arrestin pathway would take you right to the pharmacology you want. And I suspect that several labs are going to now put that idea to the test, since beta-arrestin assays are also being looked at in general. . .
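Such a screen amounts to comparing each compound's G-protein signal against its beta-arrestin signal and keeping the biased ones. A minimal sketch of the triage logic - all names, numbers, and the bias cutoff below are hypothetical, purely for illustration:

```python
# Hypothetical triage of screening compounds by pathway bias at a
# GPR109A-like receptor. The niacin work above suggests you want
# G-protein-biased agonists (lipid effects without the flush).
def pathway_bias(g_protein_signal: float, arrestin_signal: float,
                 eps: float = 1e-9) -> float:
    """Ratio > 1 means the compound favors G-protein signaling."""
    return g_protein_signal / (arrestin_signal + eps)

compounds = {
    "niacin-like":  (0.9, 0.8),   # activates both pathways: balanced
    "MK-0354-like": (0.7, 0.05),  # G-protein signal, little arrestin
}

BIAS_CUTOFF = 3.0  # arbitrary illustrative threshold
for name, (g, b) in compounds.items():
    tag = "G-protein biased" if pathway_bias(g, b) > BIAS_CUTOFF else "balanced"
    print(name, "->", tag)
```

With these toy numbers, the MK-0354-like profile is flagged as biased and the niacin-like one as balanced, which is the separation such a screen would be built to find.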
+ TrackBacks (0) | Category: Biological News | Cardiovascular Disease | Toxicology
February 26, 2009
Metformin, now there’s a drug story for you. It’s a startlingly small molecule, the sort of thing that chemists look at and say “That’s a real drug?” It kicked around in the literature and the labs in the 1960s, was marketed in Europe in the 1980s but was shopped around in the US for quite a while, partly because a lot of people had just that reaction. (It didn't help that a couple of other drugs in the same structural class turned out to cause lactic acidosis and had to be pulled from use). Bristol-Myers Squibb finally took metformin up, though, and did extremely well with it in the end under the brand name Glucophage. It’s now generic, and continues to be widely prescribed for Type II diabetes.
But for many years, no one had a clue how it worked. It not only went all the way through clinical trials and FDA approval without a mechanism, it was nearly to the end of its patent lifetime before a plausible mechanism became clear. It’s now generally accepted that metformin is an activator (somehow, maybe through another enzyme called LKB1) of AMP-activated protein kinase (AMPK), and that many (most?) of its effects are probably driven through that pathway. AMPK’s a central player in a lot of metabolic processes, so this proposal is certainly plausible.
But never think that you completely understand these things (and, as a corollary, never trust anyone who tries to convince you that they do). A new paper in PNAS advances the potentially alarming hypothesis that metformin may actually exacerbate the pathology of Alzheimer’s disease. This hasn’t been proven in humans yet, but the evidence that the authors present makes a strong case that someone should check this out quickly.
There’s a strong connection between insulin, diabetes, and brain function. Actually, there are a lot of strong connections, and we definitely haven’t figured them all out yet. Some of them make immediate sense – the brain pretty much has to run on glucose, as opposed to the rest of the body, which can largely switch to fatty acids as an energy source if need be. So blood sugar regulation is a very large concern up there in the skull. But insulin has many, many more effects than its instant actions on glucose uptake. It’s also tied into powerful growth factor pathways, cell development, lifespan, and other things, so its interactions with brain function are surely rather tangled.
And there’s some sort of connection between diabetes and Alzheimer’s. Type II diabetes is considered to be a risk factor for AD, and there’s some evidence that insulin can improve cognition in patients with the disease. There’s also some evidence that the marketed PPAR-gamma drugs (the thiazolidinediones rosiglitazone and pioglitazone) have some benefit for patients with early-stage Alzheimer’s. (Nothing, as far as I’m aware, is of much benefit for people with late-stage Alzheimer’s). Just in the past month, more work has appeared in this area. The authors of this latest paper wanted to take a look at metformin from this angle, since it’s so widely used in the older diabetic population.
What came out was a surprise. In cell culture, metformin seems to increase the amount of beta-amyloid generated by neurons. If you buy into the beta-amyloid hypothesis of Alzheimer’s, that’s very bad news indeed. (And even people that don’t think that amyloid is the proximate cause of the disease don’t think it’s good for you.) It seems to be doing this by upregulating beta-secretase (BACE), one of the key enzymes involved in producing beta-amyloid from the larger amyloid precursor protein (APP). And that upregulation seems to be driven by AMPK, but independent of glucose and insulin effects.
The paper takes this pretty thoroughly through cell culture models, and at the end goes all the way to live rats. They showed small but significant increases in beta-secretase activity in rat brain after six days of metformin treatment. And the authors conclude that:
Our finding that metformin increases A-beta generation and secretion raises the concern of potential side-effects, of accelerating AD clinical manifestation in patients with type 2 diabetes, especially in the aged population. This concern needs to be addressed by direct testing of the drug in animal models, in conjunction with learning, memory and behavioral tests.
Unfortunately, I think they’re quite right. Update - in response to questions, it appears that metformin may well cross into the brain, presumably at least partly by some sort of active transport. There's some evidence both ways, but it's certainly possible that relevant levels make it in. With any luck, this will be found not to translate to humans, or not with any real clinical effect, but someone’s going to have to make sure of that. For those of us back in the early stages of drug discovery, the lesson is (once again): never, never think we completely understand what a drug is doing. We don’t.
Category: Alzheimer's Disease | Diabetes and Obesity | Drug Industry History | Toxicology
December 2, 2008
Ever since the catastrophic failure of Pfizer's HDL-raising CETP inhibitor torcetrapib in late 2006, everyone involved has wondered just what the problem was. There was a definitely higher cardiovascular-linked death rate in the drug-treatment group as opposed to placebo - which led to the screeching halt in Phase III, as well it might - but why? Is there something unexpectedly bad about raising HDL? Or just in raising it by inhibiting the CETP enzyme, which might well provide a different lipoprotein profile than other high-HDL ideas? Was it perhaps an off-target effect of the drug that had nothing to do with its mechanism? And for any of these possibilities, is there the possibility of a biomarker that could warn of approaching trouble?
There are now two analyses of clinical data that may shed some light on these questions (thanks to Heartwire for details and follow-up). The first, a new analysis from Holland of the RADIANCE trial data, shows an electrolyte imbalance (low potassium and higher sodium) in the treatment group. Measuring carotid wall thickness, they found no correlation between the degree of HDL elevation and progress of disease, which is disturbing. The only correlation was with lower LDL levels, and the authors point out that torcetrapib has unappreciated LDL-lowering activity. (Of course, there are easier and more proven ways to do that!)
The second, the ultrasound-monitored trial called ILLUSTRATE led by the Cleveland Clinic, actually did show a correlation between HDL levels and disease progression, as measured by PAV (per cent atheroma volume). This paper concludes that the drug did perform mechanistically, but that needs some qualification. Overall, there was no real significant change in PAV, but looking more closely, the individual changes did seem to correlate with the amount of HDL elevation each group of patients achieved. Only the very highest-responding group showed any regression, though.
Interestingly, this study also showed the same sort of electrolyte imbalance, and both teams seem to agree that torcetrapib is showing off-target mineralocorticoid effects. Steve Nissen of the Cleveland group is more optimistic (a phrase one doesn't get to write every day). He thinks that a CETP inhibitor that doesn't hit the adrenals might still find a place - but I have to say, looking over the data, that it sure won't be the place that the companies involved were hoping for. Instead of being world-conquering cardiovascular wonder drugs, perhaps the best this class of compounds can hope for is a niche, perhaps alongside statin therapy. I just don't see how this level of efficacy translates into something all that useful.
But we'll see. Merck's anacetrapib is still going along. The data we have so far suggest that the compound raises HDL without effects on blood pressure, as opposed to torcetrapib. So maybe (for whatever reason - blind luck, I'd say) this compound doesn't do anything to the aldosterone pathway. But does it do anything to atherosclerosis? That's the question, and that's what the big money will have to be spent on in Phase III to find out. A comment at the Wall Street Journal's Health Blog has it right:
Welcome to the challenges of pharmaceutical research. Pharmacogenomic evidence originally led Pfizer to hope that elevating HDL through inhibiting CETP would be beneficial. A biomarker assessment in patients suggests that plaque reduction is associated with the highest HDL elevations. Yet, with torcetrapib, there appears to be a safety biomarker popping up. Are either the efficacy or safety signals really biomarkers of long term clinical outcome? You only need to ante up $800M to run mortality and morbidity trials for 5 or more years. Any investors?
Category: Cardiovascular Disease | Clinical Trials | Toxicology
November 25, 2008
Avandia (rosiglitazone) has been under suspicion for the last couple of years, after data appeared suggesting a higher rate of cardiovascular problems with its use. GlaxoSmithKline has been disputing this association all the way, as well they might, but today there’s yet more information to dispute.
A retrospective study in the Archives of Internal Medicine looked at about 14,000 patients on Medicare (older than 65) who were prescribed Avandia between 2000 and 2005. Now, looking backwards at the data is always a tricky business. Comparing these patients to a group that didn’t get the drug at all would be the obvious mistake: if someone has been prescribed Avandia, it’s likely because they’ve got Type II diabetes (or at least metabolic syndrome), and a comparison group without those conditions would differ in all sorts of ways besides the drug.
But this study compared the Avandia patients to 14,000 who were getting its direct competitor, Actos (pioglitazone). Now that’s more like it. The two drugs are indicated for the same patient population, for the same reasons. Their mechanism of action is supposed to be the same, too, as much as anyone can tell with the PPAR-gamma compounds. I wrote about that here – the problem with these drugs is that they affect the transcription of hundreds of genes, making their effects very hard to work out. Rosi and pio overlap quite a bit, but there are definitely (PDF) genes that each of them affect alone, and many others that they affect to different levels. Clinically, though, they are in theory doing the exact same thing.
But are they? This study found that the patients who started on Avandia had a 15 per cent higher rate of death from all causes than the Actos group. To me, that’s a startlingly high number, and it really calls for an explanation. The Avandia group also had a 13 per cent higher rate of heart failure, but, oddly, no difference in strokes and heart attacks. The authors believe that these latter two causes of death are likely to be undercounted in this population, though – there’s a significant no-cause-reported group in the data.
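To get a feel for what a gap like that looks like statistically, here's a rough Python sketch with invented event counts (chosen only to produce a 15 per cent difference in cohorts of this size - these are emphatically not the study's numbers, and the actual paper used proper matched survival analysis, not a crude risk ratio):

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b):
    """Crude risk ratio with a 95% confidence interval
    (log-normal approximation).  Illustrative only."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    return rr, rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se)

# Hypothetical counts: 1,150 deaths among 14,000 patients on one drug
# versus 1,000 among 14,000 on the other -- a 15% higher rate.
rr, lo, hi = risk_ratio(1150, 14000, 1000, 14000)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The point of the exercise: with cohorts of this size, a 15 per cent difference in a common endpoint like all-cause mortality comes with a confidence interval that sits comfortably above 1.0, which is why a finding like this demands an explanation rather than a shrug.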
The authors also claim that the two populations were “surprisingly similar”, strengthening their conclusions. I think that that’s likely to be the case, given the similarities between the two drugs. GlaxoSmithKline, for their part, is saying that these numbers don’t match the safety data they’ve collected, and that a randomized clinical trial is the best way to settle such issues.
Well, yeah: a randomized clinical trial is the best way to settle a lot of medical questions. But neither GSK nor Takeda and Lilly (the makers of Actos) have seen fit to go head-to-head in one, have they? My guess is that both companies felt that the chances of showing a major clinical difference between the two were small, and that the size, length, and expense of such a trial would likely not justify its results. And if we’re talking about the beneficial mechanisms of action here, that’s probably true. You’d have quite a time showing daylight between the two drugs on things like insulin sensitivity, glycosylated hemoglobin, and other measures of diabetes. Individual patients may well show differences, and that's useful in practice - but that's a hard thing to show in a large averaged set of data. But how about nasty side effects? Maybe there's some room there - but in a murky field like PPAR-gamma, you'd have to have a lot of nerve to run a trial hoping to see something bad in your competitor's compound, while still being sure enough of your own. No, it's disingenuous to talk about how these questions need to be answered by a clinical trial, when you haven't done one, haven't planned one, and have (what seemed to be) good reasons not to.
This kind of study is the best rosi-to-pio comparison we're likely to get. And it does not look good for Avandia. GSK is going to have to live with that - and in fact, they already are.
Category: Clinical Trials | Diabetes and Obesity | Toxicology
November 17, 2008
There was a legal ruling last week in California that we’re going to hear a lot more of in this business. Conte v. Wyeth. This case involved metoclopramide, which was sold by Wyeth as Reglan before going off-patent in 1982. The plaintiff had been prescribed the generic version of the drug, was affected by a rare and serious neurological side effect (tardive dyskinesia, familiar to people who’ve worked with CNS drugs) and sued.
But as you can see from the name of the case, this wasn’t a suit against her physician, or against the generic manufacturer. It was a suit against Wyeth, the original producer of the drug, and that’s where things have gotten innovative. As Beck and Herrmann put it at the Drug and Device Law Blog:
The prescribing doctor denied reading any of the generic manufacturer's warnings but was wishy-washy about whether he might have read the pioneer manufacturer's labeling at some point in the more distant past.
Well, since the dawn of product liability, we thought we knew the answer to that question. You can only sue the manufacturer of the product that injured you. Only the manufacturer made a profit from selling the product, and only the manufacturer controls the safety of the product it makes, so only the manufacturer can be liable.
Not any more, it seems. The First District Court of Appeals in San Francisco ruled that Wyeth (and other drug companies) are also liable for harm caused by the generic versions of their drugs. At first glance, you might think “Well, sure – it’s the same drug, and if it causes harm, it causes harm, and the people who put it on the market should bear responsibility”. But these are generic drugs we’re talking about here – they’ve already been on the market for years. Their behavior, their benefits, and their risks are pretty well worked out by the time the patents expire, so we’re not talking about something new or unexpected popping up. (And in this case, we're talking about a drug that has been generic for twenty-six years).
The prescribing information and labeling has been settled for a long time, too, you’d think. At any rate, that’s worked out between the generic manufacturers and the FDA. How Wyeth can be held liable for the use of a product that it did not manufacture, did not label, and did not sell is a mystery to me.
Over at Law and More, a parallel is drawn between this ruling and the history of public nuisance law during the controversy over lead paint; the implication is that this ruling will stand up and be with us for a good long while. But at Cal Biz Lit, the betting is that “this all goes away at the California Supreme Court”. We’ll see: that’s exactly where the case is headed, and perhaps beyond, eventually.
And if this holds up? Well, Beck and Herrmann lay it out in their extensive follow-up post on the issue, which I recommend to those with a legal interest:
Conte-style liability can only drive up the cost of new drugs – all of them. Generic drugs are cheaper precisely because their manufacturers did not incur the cost of drug development – costs which run into the hundreds of millions of dollars for each successful FDA approval. Because they are cheap, generics typically drive the pioneer manufacturer’s drug off the market (or into a very small market share) within a few years, if not sooner. Generic drugs will stay cheap under Conte. But imposing liability in perpetuity upon pioneer manufacturers for products they no longer sell or get any profit from means that the pioneer manufacturers (being for-profit entities) have to recoup that liability expense somewhere. There’s only one place it can come from. That’s as an add-on to the costs of new drugs that still enjoy patent protection.
Exactly right. This decision establishes a fishing license for people to go after the deepest-pocketed defendants. Let’s hope it’s reversed.
Category: Regulatory Affairs | The Central Nervous System | Toxicology
November 14, 2008
So, you’re making an enzyme inhibitor drug, some compound that’s going to go into the protein’s active site and gum up the works. You usually want these things to be potent, so you can be sure that you’ve knocked down the enzyme, so you can give people a tiny, convenient pill, and so you don’t have to make heaps of the compound to sell. How potent is potent? And how potent can you get?
Well, we’d like nanomolar. For the non-chemists in the crowd, that’s a concentration measure based on the molecular weight of the compound. If the molecular weight of the drug is 400, which is more typical than perhaps it should be, then 400 grams of the stuff is one mole. And 400 grams dissolved in a liter of solvent to make a liter of solution would then give you a one molar (1 M) solution. (The original version of this post didn't make that important distinction, which I'll chalk up to my not being completely awake on the train ride first thing in the morning. The final volume you get on taking large amounts of things up in a given amount of solvent can vary quite a bit, but concentration is based, naturally, on what you end up with. And it’s a pretty flippin’ unusual drug substance that can be dissolved in water to that concentration, let me tell you right up front). So, four grams in a liter would be 0.01 M, or 10 millimolar, and four hundred milligrams per liter would be a 1 millimolar solution. A one micromolar solution would be 400 micrograms (0.0004 grams) per liter, and a one nanomolar solution would be 400 nanograms (400 billionths of a gram) per liter. And that’s the concentration that we’d like to get to show good enzyme inhibition. Pretty potent, eh?
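For those who prefer to check this kind of arithmetic in code, here's a minimal Python sketch (the function name is just mine, nothing standard):

```python
import math

def molar_concentration(mass_g, mw_g_per_mol, volume_l):
    """Concentration in mol/L from mass, molecular weight, and the
    final solution volume (what you end up with, as noted above)."""
    return mass_g / mw_g_per_mol / volume_l

# The MW-400 compound from the example, in one liter of final solution:
assert molar_concentration(400, 400, 1.0) == 1.0                  # 1 M
assert math.isclose(molar_concentration(4, 400, 1.0), 10e-3)      # 10 mM
assert math.isclose(molar_concentration(0.4, 400, 1.0), 1e-3)     # 1 mM
assert math.isclose(molar_concentration(400e-9, 400, 1.0), 1e-9)  # 1 nM
```

Note that the volume argument is the final solution volume, for exactly the reason spelled out in the parenthetical above.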
But you can do better – if you want to, which is a real question. Taking it all the way, your drug can go in and attach itself to the active site of its target by a real chemical bond. Some of those bond-forming reactions are reversible, and some of them aren’t. Even the reversible ones are a lot tighter than your usual run of inhibitor.
You can often recognize them by their time-dependent inhibition. With a normal drug, it doesn’t take all that long for things to equilibrate. If you leave the compound on for ten, twenty, thirty minutes, it usually doesn’t make a huge difference in the binding constant, because it’s already done what it can do and reached the balance it’s going to reach. But a covalent inhibitor, that’ll appear to get more and more potent the longer it stays in there, since more and more of the binding sites are being wiped out. (One test for reversibility after seeing that behavior is to let the protein equilibrate with fresh blank buffer solution for a while, to see if its activity ever comes back). You can get into hair-splitting arguments if your compound binds so tightly that it might as well be covalent; at some point they're functionally equivalent.
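To see why the readout drifts with incubation time, here's a minimal sketch of the standard two-step kinetic model for irreversible inhibitors (the rate constants are made up purely to show the shape of the behavior, and the units are arbitrary):

```python
import math

def active_fraction(i_conc, t_min, k_inact=0.1, k_i=1.0):
    """Fraction of enzyme still active after preincubation with an
    irreversible inhibitor, in the two-step k_inact/K_I model:
    k_obs = k_inact * [I] / (K_I + [I]);  activity = exp(-k_obs * t).
    Hypothetical parameters: k_inact in 1/min, K_I in the same
    concentration units as i_conc."""
    k_obs = k_inact * i_conc / (k_i + i_conc)
    return math.exp(-k_obs * t_min)

# A reversible inhibitor at equilibrium would read the same at 10, 20,
# or 30 minutes; a covalent one keeps eating into the activity:
for t in (10, 20, 30):
    print(t, round(active_fraction(1.0, t), 3))
```

The monotonic decay with preincubation time, at a fixed inhibitor concentration, is the signature described above; the buffer-washout experiment then tells you whether the activity ever comes back.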
There are several drugs that do this kind of thing, but they’re an interesting lot. You have the penicillins and their kin – that’s what that weirdo four-membered lactam ring is doing, spring-loaded for trouble once it gets into the enzyme. The exact same trick is used in Alli (orlistat), the pancreatic lipase inhibitor. And there are some oncology drugs that covalently attach to their targets (and, in some cases, to everything else they hit, too). But you’ll notice that there’s a bias toward compounds that hit bacterial enzymes (instead of circulating human ones), don’t get out of the gut, or are toxic and used as a last resort.
Those classes don’t cover all the covalent drugs, but there’s enough of that sort of thing to make people nervous. If your compound has some sort of red-hot functional group on it, like some of those nasty older cancer compounds, you’re surely going to mess up a lot of other proteins that you would rather have left alone. And what happens to the target protein after you’ve stapled your drug to it, anyway? One fear has been that it might present enough of a different appearance to set off an immune response, and you don’t want that, either.
But covalent inhibition is actually a part of normal biochemistry. If you had a compound with a not-so-lively group, one that only reacted with the protein once it got into exactly the right spot – well, that might be selective, and worth a look. The Cravatt lab at Scripps has been looking into what kinds of functional groups react with various proteins, and as we get a better handle on this sort of thing, covalency could make a comeback. Some people maintain that it never left!
Category: Drug Assays | Toxicology
October 17, 2008
Here's a good article over at the In Vivo Blog on this year's crop of expensive Phase III failures. They've mostly been biotech drugs (vaccines and the like), but it's a problem everywhere. As In Vivo's Chris Morrison puts it:
Look, drugs fail. That happens because drug development is very difficult. Even Phase III drugs fail, probably more than they used to, thanks to stiffer endpoints and attempts to tackle trickier diseases. Lilly Research Laboratory president Steve Paul lamented at our recent PSA meeting that Phase III is "still pretty lousy," in terms of attrition rates -- around 50%. And not always for the reasons you'd expect. "You shouldn't be losing Phase III molecules for lack of efficacy," he said, but it's happening throughout the industry.
Ah, but efficacy has come up in the world as a reason for failure. Failures due to pharmacokinetics have been going down over the years as we do a better job in the preclinical phase (and as we come up with more formulation options). Tox failures are probably running at their usual horrifying levels; I don't think that those have changed, because we don't understand toxicology much better (or worse) than we ever did.
But as we push into new mechanisms, we're pushing into territory that we don't understand very well. And many of these things don't work the way that we think that they do. And since we don't have good animal models - see yesterday's post - we're only going to find out about these things later on in the clinic. Phase II is where you'd expect a lot of these things to happen, but it's possible to cherry-pick things in that stage to get good enough numbers to continue. So on you go to Phase III, where you spend the serious money to find out that you've been wrong the whole time.
So we get efficacy failures (and we've been getting them for some time - see this piece from 2004). And we're getting them in Phase III because we're now smart and resourceful enough to worm our way through Phase II too often. The cure? To understand more biology. That's not a short-term fix - but it's the only one that's sure to work. . .
Category: Clinical Trials | Drug Development | Drug Industry History | Pharmacokinetics | Toxicology
October 2, 2008
Merck has taken a step that many people have been expecting, and announced that they are no longer developing taranabant, their cannabinoid antagonist (or is it an inverse agonist?).
I'd expressed grave doubts about the drug earlier this year, which turned out to be well-founded. That latter post included the line "I don't see how they can get this compound through the FDA", and now Merck seems to have come to the same conclusion. Further clinical data seem to have shown far too many psychiatric side effects (anxiety, depression, and so on), which increased along with the dose of the drug.
The cannabinoid antagonist field has already experienced a crisis of confidence after Sanofi-Aventis's rimonabant failed to gain approval in the US. This latest news should ensure that no company tries to develop one of these drugs until we've learned a great deal more about their pharmacology. Given how little we know about the mechanisms of these mental processes, though, that could take a long, long time. We can pull the curtain over this area, I think.
Category: Diabetes and Obesity | Drug Development | The Central Nervous System | Toxicology
July 22, 2008
Merck took the unusual step of delaying its earnings release yesterday until after the close of the market. A report on another clinical study of Vytorin (ezetimibe), their drug with Schering-Plough, was coming out, so they put the numbers on hold until after the press release yesterday afternoon. Naturally, this led to a lot of speculation about what was going on. A conspiracy-minded website vastly unfriendly to Schering-Plough suspected some sort of elaborate ruse to drum up publicity.
But that sort of thinking doesn't take you very far, unless you count the distance you rack up going around in circles. As it turned out, the SEAS trial (Simvastatin and Ezetimibe in Aortic Stenosis) was, in fact, very bad publicity indeed for the drug and for both companies. In fact, a real conspiracy would have made sure that these numbers never saw the light of day, or were at least released at 6 PM on a Friday. But no, the spotlight was on them good and proper.
This trial studied patients with chronic aortic stenosis, which is a different condition than classic atherosclerosis. The two have enough similarities, though, that there has been much interest in whether statin treatment could be effective. The primary endpoint, a composite of aortic valve and general cardiovascular events, was missed. Vytorin was no better than placebo. It reached significance against one secondary endpoint, reducing the risk of various ischemic events, but not in any dramatic fashion.
That's not necessarily a surprise, since there's not a well-established therapy for aortic stenosis (thus the trial design versus placebo). As several commenters on the conference call pointed out, this shouldn't change clinical practice much at all. But it's not what Merck and Schering-Plough needed to hear, that's for sure, because the sound bite will be "Vytorin Fails Again".
Actually, the sound bite will be even worse than that. There are a lot of headlines this morning about another observation from the SEAS trial: that significantly more patients in the treatment arm of the study were diagnosed with cancer. That's a red warning light, for sure, but in this case we have at least some data to decide how much of one.
For one thing, as far as I know there have been no reports of increased cancer among the patients taking Vytorin out in the marketplace - of course, one could argue that this might have been missed, but if the effect were as large as seen in the SEAS study, I don't think it would have been. Analyses of the earlier Vytorin trials and the ongoing IMPROVE-IT trial versus Zocor have also shown no cancer risk, and the latter trial is continuing. So for now, it would appear that either this was a nasty result by chance, or (a longer shot) that there's something different about the aortic stenosis patients that leads to major trouble with Vytorin.
None of these scientific and statistical arguments, and I mean none of them, will avail Schering-Plough and Merck. Among people who've heard of Vytorin at all, the first thing that will come to mind is "doesn't work", and after today's headlines, the second thing that will come to mind is "cancer". Just what you want, to put out press releases that your compound, even though it failed to work again, isn't actually a cancer risk. You really couldn't do worse; a gang of saboteurs couldn't have done worse. Of course, there's no such gang: the companies themselves authorized these trials, thinking that there were home runs to be hit. But all these sidelines - familial hypercholesterolemia, aortic stenosis - have only sown fear, confusion, and doubt. The only thing that I can see rescuing Vytorin as a useful drug is for the IMPROVE-IT results to show really robust efficacy in its real-world patients. And I wonder if even that could be enough.
Category: Business and Markets | Cancer | Cardiovascular Disease | Clinical Trials | Toxicology
June 5, 2008
You may or may not have noticed, but slowly and quietly, Merck has been getting many of the large Vioxx judgments against it overturned on appeal. These cases made huge headlines when they were first tried, but the articles that tell the end of the story have not, for the most part, made the front page.
This is one reason that the company was finally able to settle a huge number of pending lawsuits for much less than many people thought likely. Merck seemed to like its chances, considering the cases they’d won and the way things looked in the appeals courts, and the amount of money they were able to settle for finally became a better deal for them than the alternative of fighting out every case. Of course, now people are starting to wonder if the company settled too soon - opinions differ.
It's important to note, though, that some of these reversals have been less than total victories for Merck. The first Texas case falls into that category, but the New Jersey punitive damages were thrown out based on the idea of pre-emption. A state jury, the appeals court ruled, can't decide if Merck defrauded the federal government when it got Vioxx approved. (We'll be revisiting that part of the argument when Wyeth v. Levine and Warner-Lambert v. Kent get decided).
But in the end, what looked for a while like an avalanche that might sweep the company away has come down to . . . what? Twenty cases went to juries, and Merck has now prevailed, to a large degree, in 17 of them, including all the largest awards. The Vioxx affair has still been a big financial hit, and it’s definitely had effects on Merck, but it hasn’t been quite the disaster it looked like being. Well, not financially - the company's reputation has taken a fearsome beating, and the drug industry as a whole hasn't come out of the business looking any better, either.
I can’t claim to have kept a cool head through the thing. There really was a period where the entire Vioxx affair could have taken a different turn – if Merck had lost a string of jury trials at the start, a settlement would have been much harder to arrange, and would have cost (naturally) a huge amount more. But fighting the first wave of cases to an expensive draw and appealing every verdict that went against them turned out to be the right strategy. Of course, any rational observer would have wished for a world where the whole business never would have taken place, but that's not where we find ourselves.
But, as you’ll have noticed, the preceding paragraphs are written from a point of view that’s pretty sympathetic to Merck. Zooming out to a more neutral view, what do we have? Vioxx certainly did some people a great deal of harm. The clinical data that led to its withdrawal make it extremely likely that some people experienced heart attacks, fatal in some cases, because they took the drug. Where the arguing starts is when you start pinning numbers to that last sentence. Vioxx’s bad effects, though real, were also small compared to the number of people who took it. (And the arguing continues when you try to balance its bad effects with the good that it did for the patients who really needed it, who were surely, though, a small subset of the people who actually were on the drug).
Those last two sentences point to some of the problem. If Merck had not tried to make Vioxx the pain drug for everyone in the world with any kind of inflammation pain, it’s quite possible that its cardiovascular effects would never have been noticed. And it’s worth remembering that they were noticed during a trial for a completely different indication, the possibility that COX-2 inhibitors might have a protective effect against colon cancer. Only after that trial flashed an unmistakable statistical warning did everyone go back to Merck’s earlier data and start arguing about what could or should have been noticed before.
The problem is that many other drugs have data that, in retrospect, look like trouble. It’s just that in many cases, the trouble never appears, either because it never rises to the level of being noticed, or it never was really there to begin with. There are drug candidates that cause bad effects in one out of every ten people who take them, and those never make it out of the clinic. (Most of the ones causing trouble at that level don’t even make it into the clinic in the first place). The ones that cause trouble at one in a hundred get weeded out, too, if that trouble is bad enough. The one in a thousand, one in ten thousand, one in a hundred thousand levels are where the difficulty is, because clinical trials have an increasingly difficult time picking up those problems. They’ll show up, if they do, after a drug comes to market.
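The arithmetic behind that last point is simple and sobering. With a true per-patient event rate p, a trial of n patients sees at least one case with probability 1 - (1 - p)^n, and the old "rule of three" says that zero events in n patients still only bounds the rate at roughly 3/n. A quick sketch (trial sizes here are my own round numbers, for illustration):

```python
def p_at_least_one(rate, n):
    """Probability that a trial of n patients observes at least one
    adverse event, given a true per-patient event rate."""
    return 1 - (1 - rate) ** n

# A 3,000-patient program all but guarantees catching a 1-in-100 problem...
print(round(p_at_least_one(1 / 100, 3000), 4))
# ...but misses a 1-in-10,000 problem about three times out of four.
print(round(p_at_least_one(1 / 10000, 3000), 2))
# And zero events in 3,000 patients still leaves a 95% upper bound of
# about 3/3000 = 1 in 1,000 on the true rate (the "rule of three"):
print(3 / 3000)
```

Which is exactly why the one-in-ten-thousand problems tend to surface only after a drug reaches millions of patients on the market.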
But why stop there? There’s no reason not to believe that there are drugs that also cause direct harm, but only to one out of every million patients. Or ten million, or a hundred million. Some unlikely combination of genetic and environmental factors comes up – we really don’t know enough to rule that sort of thing out at all. We call those drugs “safe”, but “safe” means “causing harm at too low a level to see”. Every single drug in the world has bad side effects, from the bottom of the scale (hideous old last-ditch chemotherapy drugs that are one step away from World War One battlefield agents), all the way up to the top. It's just a question of how often they turn up.
Category: Cardiovascular Disease | Toxicology
May 22, 2008
Benjamin Cravatt at Scripps has another interesting paper out this week – by my standards, he hasn’t published very many dull ones. I spoke about some earlier work of his here, where his group tried to profile enzymes in living cells and found that the results they got were much different than the ones seen in their model systems.
This latest paper is in the same vein, but addresses some more general questions. One of his group members (Eranthi Weerapana, who certainly seems to have put in some lab time) started by synthesizing five simple test compounds. Each had a reactive head group at one end and an acetylene at the other. The idea was to see what sorts of proteins combined with the reactive head group. After labeling, a click-type triazole reaction stuck a fluorescent tag on via the acetylene, allowing the labeled proteins to be detected.
All this is similar to the previous paper I blogged about, but in this case they were interested in profiling these varying head groups: a benzenesulfonate, an alpha-chloroamide, a terminal enone, and two epoxides – one terminal on a linear chain, and the other a spiro off a cyclohexane. All these have the potential to react with various nucleophilic groups on a protein – cysteines, lysines, histidines, and so on. Which reactive groups would react with which sorts of protein residues, and on which parts of the proteins, was unknown.
There have been only a few general studies of this sort. The most closely related work is from Daniel Liebler at Vanderbilt, who's looking at this issue from a toxicology perspective (try here, here, and here). And an earlier look at different reactive groups from the Sames lab at Columbia is here, but that was much less extensive.
Cravatt's study reacted these probes first with a soluble protein mix from mouse liver – containing who knows how many different proteins – and followed that up with similar experiments with protein brews from heart and kidney, along with the insoluble membrane fraction from the liver. A brutally efficient proteolysis/mass spectrometry technique, described by Cravatt in 2005, was used to simultaneously identify the labeled proteins and the sites at which they reacted. This is clearly the sort of experiment that would have been unthinkable not that many years ago, and it still gives me a turn to see only Cravatt, Weerapana, and a third co-author (Gabriel Simon) on this one instead of some lab-coated army.
Hundreds of proteins were found to react, as you might expect from such simple coupling partners. But this wasn’t just a blunderbuss scatter; some very interesting patterns showed up. For one thing, the two epoxides hardly reacted with anything, which is quite interesting considering that functional group’s reputation. I don’t think I’ve ever met a toxicologist who wouldn’t reject an epoxide-containing drug candidate outright, but these groups are clearly not as red-hot as they’re billed. The epoxide compounds were so unreactive, in fact, that they didn’t even make the cut after the initial mouse liver experiment. (Since Cravatt’s group has already shown that more elaborate and tighter-binding spiro-epoxides can react with an active-site lysine, I’m willing to bet that they were surprised by this result, too).
The next trend to emerge was that the chloroamide and the enone, while they labeled all sorts of proteins, almost invariably did so on their cysteine (SH) residues. Again, I think if you took a survey of organic chemists or enzymologists, you’d have found cysteines at the top of the expected list, but plenty of other things would have been predicted to react as well. The selectivity is quite striking. What’s even more interesting, and as yet unexplained, is that over half the cysteine residues that were hit only reacted with one of the two reagents, not the other. (Liebler has seen similar effects in his work).
Meanwhile, the sulfonate went for several different sorts of amino acid residues – it liked glutamates especially, but also aspartate, cysteine, tyrosine, and some histidines. One of the things I found striking about these results is how few lysines got in on the act with any of the electrophiles. Cravatt's finely tuned epoxide/lysine interaction that I linked to above turns out, apparently, to be a rather rare bird. I’ve always had lysine in my mind as a potentially reactive group, but I can see that I’m going to have to adjust my thinking.
Another trend that I found thought-provoking was that the labeled residues were disproportionately taken from the list of important ones, amino acids that are involved in the various active sites or in regulatory domains. The former may be intrinsically more reactive, in an environment that has been selected to increase their nucleophilicity. And as for the latter, I’d think that’s because they’re well exposed on the surfaces of the proteins, for one thing, although they may also be juiced up in reactivity compared to their run-of-the-mill counterparts.
Finally, there’s another result that reminded me of the model-system problems in Cravatt’s last paper. When they took these probes and reacted them with mixtures of amino acid derivatives in solution, the results were very different from what they saw in real protein samples. The chloroamide looked roughly the same, attacking mostly cysteines. But the sulfonate, for some reason, looked just like it, completely losing its real-world preference for carboxylate side chains. Meanwhile, the enone went after cysteine, lysine, and histidine in the model system, but largely ignored the last two in the real world. The reasons for these differences are, to say the least, unclear – but what’s clear, from this paper and the previous ones, is that there is (once again!) no substitute for the real world in chemical biology. (In fact, in that last paper, even cell lysates weren’t real enough. This one has a bit of whole-cell data, which looks similar to the lysate stuff this time, but I’d be interested to know if more experiments were done on living systems, and how close they were to the other data sets).
So there are a lot of lessons here - at least, if you really get into this chemical biology stuff, and I obviously do. But even if you don't, remember that last one: run the real system if you're doing anything complicated. And if you're in drug discovery, brother, you're doing something complicated.
Category: Biological News | Toxicology
April 29, 2008
So why is Merck's stock dropping - again?
The FDA just unexpectedly handed them a "not approvable" letter for their latest drug, Cordaptive. Actually, we should stop calling it that, since they also told the company that they're not going to approve that name, either. What Merck's going to do with all their promotional freebies now, I can't imagine.
What's Cordaptive, or whatever it's called, anyway?
That's Merck's newest cardiovascular drug - although the active ingredient isn't new. It's niacin, also known as vitamin B3. It's been known for many years that niacin can both lower LDL cholesterol and raise HDL, as well as lowering triglycerides - in fact, it's probably one of the only things that can do all of those significantly at the same time.
So this is a rip-off, then? Merck's trying to sell vitamin B for $20 a pill?
No, it actually isn't, at least not to the extent you're thinking. The problem with niacin as a cholesterol therapy is that you have to take whopping amounts of it to see an effect. And there's a side effect - flushing of the face, which is basically uncontrollable blushing that can last for hours in some cases. That may not sound like much, but the great majority of people who take niacin at these levels have a problem with it, and a lot of people discontinue the therapy rather than put up with it. If the drug is taken for a few weeks, the flushing reportedly eases off some, but not everyone makes it to that point. By all reports, it's very irritating - and since patients can't feel their cholesterol being high, but can feel their faces burning and turning red, they solve the problem by not taking the niacin.
So why doesn't Cordaptive do the same thing?
A lot of people have tried to find a way to keep the lipid effects of niacin and get rid of the flushing. Merck added a prostaglandin receptor antagonist, laropiprant, to try to block the pathway that leads to the vascular effects. And it seems to help quite a bit, which made the combination a potential winner. Abbott already has Niaspan, a slow-release version of niacin, which also has reduced flushing problems and does about $600 million of sales a year. Niacin therapy itself seems to be pretty safe, although you do want to make sure that liver and kidney function are normal before you start, so the only big question has been what blocking that DP1 receptor might do on the side: can you take that pathway out without causing more trouble?
Well, can you?
Apparently not. Actually, that should be "apparently there isn't enough evidence to say yet" - that's probably more in the spirit of the FDA's letter. They want to see more information about the drug. Problem is, the FDA treats this (properly) as a matter between the agency and the drug company, so they aren't saying what the problem is. And Merck, for its part, isn't saying, either. Investors feel rather left out in these situations - perhaps the most striking one in recent years was Sanofi-Aventis's absolute wall of silence for months about why the FDA wasn't approving their potential blockbuster Acomplia (rimonabant).
Why's this so unexpected, if there wasn't enough evidence given to the FDA?
Well, there seems to have been enough evidence in the same pile of data for the European Union, whose regulators recommended the drug for approval a few days ago. Merck must have felt reasonably confident that they'd get the same treatment here. No such luck. And as just mentioned, we don't know if the problem is not enough evidence of efficacy, not enough evidence of safety, or a bit of each.
Why don't you people just make cholesterol-lowering drugs that work better, then, so there's no doubt about efficacy?
Would that we could. Statins basically only lower LDL - they don't raise your HDL. And if you push the statins too hard, patients start coming down with rhabdomyolysis, and you don't want that - ask Bayer. Raising HDL has proven to be a real challenge, too. There are a lot of ideas about how to do it, but the most obvious ones aren't working out too well - ask Pfizer.
OK, then, why don't you just make safer versions of what you already have?
Would that we could. But in almost every case, we have no idea of how to do that. For the most part, either the safety concerns are tied up with the beneficial mechanism of the drug, or they're occurring through side pathways that we don't understand well and don't know how to avoid. And some of those are things that you don't even get a read on until your drug gets out into the market, which is no way to do things, either.
So, why is the drug business considered such a safe bet?
Now, that one I don't have an answer for. Unless it's the conviction that people are always going to get sick, which I guess is a pretty safe bet. And that's coupled with a conviction, apparently, that we're always going to be able to do something profitable about that. And some days, I have to wonder. . .
Category: Business and Markets | Cardiovascular Disease | Drug Development | Toxicology
April 10, 2008
As mentioned yesterday, I would have to say that Mannkind is in big trouble. I’d never heard of the company until the Wonder Drug Factory was closing back in Connecticut, but Mannkind was moving some of their operations into the state around then and interviewed a number of my former colleagues.
The whole inhaled-insulin idea had already taken some pretty severe blows. The massive failure of Exubera was the biggest, although a creative person could always argue that a better product with a more convenient delivery system could succeed in its place. But then Novo Nordisk and Eli Lilly (serious diabetes players, both of them) got out of the area before they’d even launched, deciding that it was better to write off their whole investment than to try to bring it to market. That didn’t help, which is one reason that Mannkind stock was down in the single digits, despite the company's efforts.
Well, as of yesterday it’s down in the really low single digits. And I honestly can’t see how they’re going to revive their flagship program if the Pfizer lung cancer data are real. The FDA is going to be very, very cautious about allowing any sort of inhaled insulin trials to proceed. I’d think that you’d have to show that your product is different from Exubera in its carcinogenic risk just to get one off the ground, and frankly, I have no idea how you’d do that. Any assay that could do it will take years to develop and validate.
This latest result also shows some of the real difficulties and risks of drug development. After all, Pfizer and Nektar spent a very long time developing Exubera. The product was delayed and delayed while more and more clinical work was done. But in a slow-starting condition like lung cancer, those years may still not be enough to quite pick things up by the time a product makes it to market. Think of what might have happened if Exubera had been a success. . .
And that brings us back to the regulatory pre-emption topic of the other day. This illustrates why either extreme of that argument is untenable. On the make-‘em-pay side, you have trial lawyers arguing that if companies just wouldn’t put defective products on the market, well, they wouldn’t have anything to worry about, would they? Test your drugs correctly and things will be fine! But Exubera’s pre-approval life was as long and detailed as could be. The testing went on and on – and after all, insulin itself has been on the market for more than half a century. What more would a company need to say something is safe?
Then there’s the other side – total pre-emption, which says that the FDA is there to regulate and sign off on safety and efficacy, and by gosh we should have them do it. Once this mighty agency gives its stamp of approval, that settles it. But again, the FDA put Exubera through all kinds of paces. If every drug took that long and cost that much to develop, we’d be in even worse shape than we are now, believe me. So what’s the agency to do?
The truth, as far as I can see, is that no one can guarantee the safety of a new drug. If you want to take that further, guaranteeing the safety of an existing drug isn’t possible, either. Every known drug is capable of causing trouble at some dose, and every known drug is capable of causing trouble at its normal dose in some people. Every new drug has the possibility of doing things no one ever anticipated, once it gets into enough patients for enough time. Every single one.
Complete safety doesn’t exist, and never has. You can have more safety, if you’re willing to take enough time and spend enough money. But you can take all the time we have on earth, and spend all the money available, and you still won’t be able to promise that nothing bad will ever happen. Pretending that either the drug companies or the regulatory agencies can make that fact go away is a position for fools and demagogues.
Category: Diabetes and Obesity | Drug Development | Toxicology
April 9, 2008
I don't usually do more than one post a day, but this really caught my eye. In an ongoing review of Pfizer's (now discontinued) inhaled insulin (Exubera), an increased chance of lung cancer has turned up among participants in the clinical trials. Six of the over four thousand patients in the trials on Exubera have since developed the disease, versus one of the similarly-sized control group. Six isn't many, but with that large a sample size, it's something that statistically can't be ignored, either.
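That "can't be ignored" claim is easy to sanity-check with a quick back-of-the-envelope Fisher's exact test. Here's a minimal sketch in Python, with the caveat that the exact arm sizes are my assumption (the trials had "over four thousand" patients per similarly-sized group; I'm using a round 4,000 each):

```python
# Back-of-the-envelope check on the Exubera numbers (a sketch, not the
# trial's actual analysis): 6 lung cancer cases on drug vs. 1 on control,
# assuming 4,000 patients per arm. One-sided Fisher's exact p-value,
# computed by hand from the hypergeometric distribution.
from math import comb

drug_n, control_n = 4000, 4000            # assumed round arm sizes
drug_cases, control_cases = 6, 1
total_n = drug_n + control_n
total_cases = drug_cases + control_cases  # 7 cases in all

# Probability that 6 or all 7 of the cases land in the drug arm by chance,
# if the cases were distributed at random between two equal-sized arms.
p_one_sided = sum(
    comb(drug_n, k) * comb(control_n, total_cases - k)
    for k in range(drug_cases, total_cases + 1)
) / comb(total_n, total_cases)

print(f"one-sided p = {p_one_sided:.3f}")  # roughly 0.06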
The concerns would have to be, naturally, that this number could increase, since damage to lung tissue might take a while to show up. This, needless to say, completely ends Nektar's attempts to find another partner for Exubera. Their stock is getting severely treated today (down 25% as I write), but things are even worse for another small company, Mannkind, that's been working on their own inhaled insulin for years now (down 58% at the moment).
There's no guarantee that another inhaled form would cause the same problems, but there's certainly no guarantee that it wouldn't, either. Whether this is an Exubera-specific problem, an insulin-specific one, or something that all attempts at inhaled proteins will have to look out for is just unknown. And unknown, in this case, is bad. It's going to be hard to make the case to find out, if this is the sort of potential problem waiting for your new product. Inhaled therapeutics of all sorts have suffered a huge setback today.
Category: Cancer | Clinical Trials | Diabetes and Obesity | Toxicology
April 7, 2008
There's talk again about an idea that's been kicking around for some years: are drug companies shielded from liability after the FDA has approved their drugs for sale?
Obviously, the current answer is "Not at all": consider the lawsuits over Vioxx. But the decision by the Supreme Court in February in Riegel v. Medtronic has the idea being taken seriously again. That ruling seems to shield medical device companies from lawsuits over safety or efficacy after the FDA has signed off on those issues - as long as the device is the same, and used in the approved manner. And no, for the politically motivated among the readership, this wasn't some narrow 5-4 ruling engineered by Justice Scalia; the decision went 8 to 1.
There's a roughly similar case before the court now, Wyeth v. Levine. At issue is the labeling and usage of Wyeth's histamine antagonist Phenergan (promethazine), with the suit being brought by a patient who was injured after the drug was used in a method warned against on the label. This one hinges on a federal/state dispute, though, as the petition for certiorari (PDF) makes clear:
"Whether the prescription drug labeling judgments imposed on manufacturers by the Food and Drug Administration pursuant to the FDA's comprehensive safety and efficacy authority. . .preempt state law product liability claims premised on the theory that different labeling judgments were necessary to make drugs reasonably safe to use".
This seems, if it goes Wyeth's way, as if it would keep various state jurisdictions from coming in with different liability claims, but the situation seems less stark to me if a state's standards were the same as the federal government's. Would this really pre-empt liability suits entirely? I'll let actual lawyers set me straight on that if I'm looking at it incorrectly.
There's another case that was granted cert. last fall, Warner-Lambert v. Kent, which could also have a bearing on the whole issue. This hinges on the approval (and later withdrawal) of the PPAR drug Rezulin (troglitazone), and whether Michigan state law on pre-emption of lawsuits is in conflict with the federal law. Again, I would have thought this one would probably be decided as a state-versus-federal issue, without extending to any sweeping thoughts on pre-emption in general. But that Medtronic decision makes a person wonder if the Court is in the mood for just that.
So, there's the background. Arguing will now commence on whether pre-emption is a good idea or not. I've thought for some time that all approved medications should be labeled as "investigational new drugs", and that everyone taking them agrees that they are participating in a post-approval clinical study of their safety and efficacy. (I suppose that's my own form of pre-emption). But there's room to argue if the FDA is ready to take on the full responsibility of drug approval, without the option of later redress in the courts if something goes wrong. (Counterargument: that's what they're supposed to be doing now. . .) And all of these schemes have to make room for new information turning up, or for outright fraud (which is most definitely in the eye of the beholder). Personally, I'm glad not to be a judge.
Category: Drug Development | Press Coverage | Toxicology
March 13, 2008
Today (March 13) at 3 PM EST, there's a hearing scheduled on a legal motion that could change the way scientific results are published in this country. Pfizer is being sued over injuries that plaintiffs believe came from their use of Celebrex, one of the world’s few remaining COX-2 inhibitor drugs. (I saw a Celebrex TV ad the other day, a surreal thing which was basically a lengthy recitation of FDA-mandated side effect language accompanied by jazzy graphics). Everyone with a COX-2 compound is being sued from every direction, as a matter of course. The company is, naturally, casting around for any weapon that comes to hand for its defense, as did Merck when that same sky began to come down on them.
But Pfizer’s lawyers (DLA Piper LLP of Boston) are apparently (your choice, multiple answers permitted) more aggressive, more unscrupulous, or more clueless than Merck’s. Among the points at issue are several papers from the New England Journal of Medicine. According to the motion, which I paid to download from PACER, two of the particularly contentious ones are this one on complications after cardiac surgery and this one on cardiac risk during a colon cancer trial. So Pfizer has served the journal’s editors with a series of subpoenas. They’re seeking to open the files on these manuscripts – reviewer comments, reviewer names, editorial correspondence, rejected submissions, the lot. What are they hoping to find? Oh, who knows – whatever’s there: ”Scientific journals such as NEJM may have received manuscripts that contain exonerating data for Celebrex and Bextra which would be relevant for Pfizer's causation defense” say the lawyers. The journal refused to comply, so Pfizer has now filed a motion in district court in Massachusetts to compel them to open up.
What's particularly interesting is that the journal has, to some extent, already done so. According to Pfizer's "Motion to Compel", the editors "produced a sampling of forms identifying the names of manuscript authors and their financial disclosures, correspondence between NEJM editors and authors regarding suggested editorial changes and acceptance and rejection letters". The motion goes on to say, though, that the editors had the nerve to ignore the broader fishing expedition, only releasing documents for authors specifically named in the subpoenas, not "any and all" documents related to Celebrex or Bextra. They also withheld several documents under the umbrella of peer review and internal editorial processes. Thus, the request to open up the whole thing.
I’ve never heard of this maneuver before. Staff members of the NEJM gave depositions in the early phases of the Merck litigation, since the journal was in the middle of the Vioxx fighting. (They’d “expressed concern” several times about the studies that had appeared in their own pages and passed through their own review process). But even then, I don’t think that Merck wanted to open up the editorial files, and you’d think that if anyone had something to gain by it, they would.
Pfizer’s motion seems to me more like a SLAPP, combined with standard fishing expedition tactics. Their legal team doesn’t seem to think that any of this will be a problem, at least as far as you can tell from their public statements. They say in their motion that they don’t see any harm coming to the NEJM if they comply – heavens, why not? Reviewers will just line up to look over clinical trial publications if they think that their confidentiality can be breached in case of a lawsuit, won’t they? And the rest of the scientific publishing world could look for the same treatment, any time someone published data that might be relevant to someone’s court case, somewhere. Oh, joy.
Pfizer’s motion states that “The public has no interest in protecting the editorial process of a scientific journal”. Now, it’s not like the peer review process is a sacred trust, but it’s the best we’ve been able to come up with so far. It reminds me of Churchill’s comment about democracy being the worst form of government until you look at the alternatives. I realize that it’s the place of trial lawyers and defense teams to scuffle around beating each other with whatever they can pick up, but I really don’t think that they should be allowed to break this particular piece of furniture.
And I can’t see how the current review process won’t get broken if Pfizer’s motion is granted. The whole issue is whether the journal's editors can claim privilege - if so, they don't have to release, and if not, they most certainly do. This can't help but set a precedent, one way or another. If there's no privilege involved in the editorial process, a lot of qualified and competent reviewers will start turning down any manuscript that might someday be involved in legal action. (Which, in the medical field, might be most of them). The public actually does have an interest in seeing that there is a feasible editorial process for scientific journals in general, and I hope that the judge rules accordingly.
In the meantime, for all my friends at Pfizer and for all the other scientists there with integrity and good sense: my condolences. Your company isn’t doing you any favors this week.
(One of the first mentions of all this was on the Wall Street Journal’s Health Blog. The comments that attach to it are quite interesting, dividing between the hands-off-peer-review crowd and a bunch of people who want to see the NEJM taken down a few pegs. I can sympathize with that impulse, but there has to be a better way to do it than this. And there’s more commentary from Donald Kennedy, editor of Science, here - you can pretty much guess what he thinks about this great idea.)
Category: Cardiovascular Disease | The Scientific Literature | Toxicology | Why Everyone Loves Us
March 12, 2008
Well, I wish I hadn’t been right about this one. Last month I spent some time expressing doubts about Merck’s new obesity drug candidate taranabant, a cannabinoid-1 ligand similar to Sanofi-Aventis’s failed Acomplia (rimonabant). S-A ran into a number of central nervous system side effects in the clinic, and although they’ve gotten the drug approved in a few markets, it’s not selling well. US approval, now long delayed, looks extremely unlikely.
I couldn’t see why Merck wouldn’t run into the same sort of trouble. If a report from a Wall St. analyst (Aileen Salares of Leerink Swann) is correct, they have. Merck’s presenting on the compound at the next American College of Cardiology meeting (at the end of this month in Chicago), and information from the talk has apparently leaked out in violation of the ACC's embargo. There appears to be some difficulty both on the efficacy and side effect fronts – bad news all around.
The company was aiming for a 5% weight loss, but only reached that at the highest dose (4 mg). The report is that CNS side effects were prominent at this level, twice the rate of the placebo group. The next lower dose, 2 mg, missed the efficacy endpoint and still seems to have shown CNS effects. According to Salares, nearly twice the number of patients in the drug treatment group dropped out of the trial as compared to placebo, citing neurological effects which included thoughts of suicide.
While there’s no confirmation from Merck on these figures, they’re disturbingly plausible, because that’s just the profile that got rimonabant into trouble. If this holds up, I think we can say that CB-1 ligands as a CNS therapeutic class are dead, at least until we understand a lot more about their role in the brain. Two drugs with different structures and different pharmacological profiles have now run into the same suite of unacceptable side effects, and the main thing they have in common is CB-1 receptor occupancy. There’s always the possibility that a CB-1 antagonist (or inverse agonist) might have a use out in the periphery – they could have immunomodulatory effects – but anyone who tries this out would be well advised to do it with a compound that doesn’t cross the blood-brain barrier.
And as for taranabant, if the data are as reported I don’t see how Merck can get this compound through the FDA. Even if they did, by some weird accident, I don’t see why they’d pull the pin on such a potential liability grenade. Can you imagine what the labeling would have to look like in order to try (in vain, most likely) to insulate the company from lawsuits? That makes a person wonder how on earth the company could have been talking about submitting it for approval later this year, which is what they were doing just recently. They must have had these numbers when they made that statement – wouldn’t you think? And they must have immediately realized that this would be trouble – you’d think. If that Leerink Swann report is correct, the company’s recent statements are just bizarre.
Category: Clinical Trials | Diabetes and Obesity | The Central Nervous System | Toxicology
March 4, 2008
Here's a snapshot for you, to illustrate how little we know about what many of our compounds can do. I was browsing the latest issue of the British Journal of Pharmacology, which is one of many perfectly respectable journals in that field, and was struck by the table of contents.
Here, for example, is a paper on Celebrex (celecoxib), but not about its role in pain or inflammation. No, this one, from a group in Turin, is studying the drug's effects on a colon cancer cell line, and finding that it affects the ability of the cells to stick to surfaces. This appears to be driven by downregulation of adhesion proteins such as ICAM-1 and VCAM-1, and that seems to have nothing particular to do with COX-2 inhibition, which is, of course, the whole reason that Celebrex exists.
This is a story that's been going on for a few years now. There's been quite a bit of study on the use of COX-2 drugs in cancer (particularly colon cancer), but that was driven by their actual COX-2 effects. Now it's to the point that people are looking at close analogs of the drugs that don't have any COX-2 effects at all, but still seem to have promise in oncology. You never know.
Moving down the list of papers, there's this one, which studies a well-known model of diabetes in rats. Cardiovascular complications are among the worst features of chronic diabetes, so these folks are looking at the effect of vascular relaxing compounds to see if they might provide some therapeutic effect. And they found that giving these diabetic rats sildenafil, better known as Viagra, seems to have helped quite a bit. They suggest that smaller chronic doses might well be beneficial in human patients, which is definitely not something that the drug was targeted for, but could actually work.
And further down, here's another paper looking at a known drug. In this case, it's another piece of the puzzle about the effects of Acomplia (rimonabant), Sanofi-Aventis's one-time wonder drug candidate for obesity. It's become clear that it (and perhaps all CB-1 compounds) may also have effects on inflammation and the immune system, and these researchers confirm that with one subtype of blood cells. It appears that rimonabant is also a novel immune modulator, which is most definitely not one of the things it was envisioned as. Do the other CB-1 compounds (such as Merck's taranabant) have such effects? No one knows, but it wouldn't come as a complete surprise, would it?
These are not unusual examples. They just serve to show how little we understand about human physiology, and how important it is to study drugs in whole living systems. You might never learn about such things by studying the biochemical pathways in isolation, as valuable as that is in other contexts. But our context in the drug industry is the real world, with real human patients, and they're going to be surprising us for a long time to come. Good surprises, and bad ones, too.
Category: Cardiovascular Disease | Diabetes and Obesity | Drug Development | Toxicology
February 8, 2008
There’s an excellent article in Nature Reviews Drug Discovery that summarizes the state of the HDL-raising drug world. It will also serve as an illustration, which can be repeated across therapeutic areas, of What We Don’t Know, and How Much We Don’t Know It.
The last big event in this drug space was the catastrophic failure of Pfizer’s torcetrapib, which wiped out deep into Phase III, taking a number of test patients and an ungodly amount of money with it. Ever since then, people have been frantically trying to figure out how this could have happened, and whether it means that the other drug candidates in this area are similarly doomed. There’s always the chance that this was a compound-specific effect, but we won’t know until we see the clinical results from those others. Until that day, if you want to know about HDL therapies, read this review.
I’d guess that if you asked a thousand random people about that Pfizer drug, most wouldn’t have heard about it, the same as with most other scientific news. But many of those who had might well have thought it was a cholesterol-lowering drug. Cholesterol = bad; if there’s one thing that the medical establishment has managed to get into everyone’s head, that’s it. The next layer of complexity (two kinds of cholesterol, one good, one bad) has penetrated pretty well, but not as thoroughly. A small handful of our random sample might have known, though, that torcetrapib was designed to raise HDL (“good cholesterol”).
And that’s about where knowledge of this field stops among the general population, and I can understand why, because it gets pretty ferocious after that point. As with everything else in living systems, the closer you look, the more you see. There are, for starters, several subforms of HDL, the main alpha fraction and at least three others. And there are at least four types of alpha. At least sixteen lipoproteins, enzymes, and other proteins are distributed in various ratios among all of them. We know enough to say that these different HDL particles vary in size, shape, cholesterol content, origin, distribution, and function, but we don’t know anywhere near as much as we need to about the details. There’s some evidence that instead of raising HDL across the board, what you want to do is raise alpha-1 while lowering alpha-2 and alpha-3, but we don’t really know how to do that.
How does HDL, or its beneficial fraction(s), help against atherosclerosis? We’re not completely sure about that, either. One of the main mechanisms is probably reverse cholesterol transport (RCT), the process of actually removing cholesterol from the arterial plaques and sending it to the liver for disposal. It’s a compelling story, currently thought to consist of eight separate steps involving four organ systems and at least six different enzymes. The benefits (or risks) of picking one of those versus the others for intervention are unknown. For most of those steps, we don’t have anything that can selectively affect them yet anyway, so it’s going to take a while to unravel things. Torcetrapib and the other CETP inhibitors represent a very large (and very risky) bet on what is approximately step four.
And HDL does more than reverse cholesterol transport. It also prevents platelets from aggregating and monocytes from adhering to artery walls, and it has anti-inflammatory, anti-thrombotic, and anti-oxidant effects. The stepwise mechanisms for these are not well understood, their details versus all those HDL subtypes are only beginning to be worked out, and their relative importance in HDL’s beneficial effects is unknown.
At this point, the review article begins a section titled “Further Complications”. I’ll spare you the details, but just point out that these involve the different HDL profiles (and potentially different effects) of people with diabetes, high blood pressure, and existing cardiovascular disease. If you’re thinking “But that’s exactly the patient population most in medical need”, you are correct. And if it’s occurred to you that this could mean that an HDL drug candidate’s safety profile might be even more uncertain than usual, since you won’t see these mechanisms kick in until you get deep into the clinical trials, right again. (And if you thought of that and you don’t already work in the industry, please consider coming on down and helping us out).
Much of the rest of the article is a discussion of what might have gone wrong with torcetrapib, and suffice it to say that there are many possibilities. The phrases “conflicting findings”, “remain to be elucidated”, “would be important to understand” and “will require careful analysis” feature prominently, as they damn well should. As I said at the time, we’re going to learn a lot about human lipidology from its failure, but it sure is a very painful way to learn it.
And that is the state of the art. This is exactly what the cutting edge of medical knowledge and drug discovery looks like, except for the fact that cardiovascular disease is relatively well worked out compared to some of the other therapeutic areas. (Try central nervous system diseases if you want to see some real black boxes). This is what we’re up against. And if anyone wants to know how come we don’t have a good therapy yet for Disease A or Syndrome B. . .well, this is why.
Category: Cardiovascular Disease | Clinical Trials | Drug Development | Toxicology
January 29, 2008
I've had some questions about animal models and testing, so I thought I'd go over the general picture. As far as I can tell, my experience has been pretty representative.
There are plenty of animal models used in my line of work, but some of them you see more than others. Mice and rats are, of course, the front line. I’ve always been glad to have a reliable mouse model, personally, because that means the smallest amount of compound is used to get an in vivo readout. Rats burn up more hard-won material. That's not just because they're uglier, since we don’t dose based on per cent ugly, but rather because they're much larger and heavier. The worst were some elderly rodents I came across years ago that were being groomed for a possible Alzheimer’s assay – you don’t see many old rats in the normal course of things, but I can tell you that they do not age gracefully. They were big, they were mean, and they were, well, as ratty as an animal can get. (They were useless for Alzheimer's, too, which must have been their final revenge).
You can’t get away from the rats, though, because they’re the usual species for toxicity testing. So if your pharmacokinetics are bad in the rat, you’re looking at trouble later on – the whole point of tox screens is to run the compound at much higher than usual blood levels, which in the worst cases you may not be able to reach. Every toxicologist I’ve known has groaned, though, when asked if there isn’t some other species that can be used – just this time! – for tox evaluation. They’d much rather not do that, since they have such a baseline of data for the rat, and I can’t blame them. Toxicology is an inexact enough science already.
It’s been a while since I’ve personally seen the rodents at all, though, not that I miss them. The trend over the years has been for animal facilities to become more and more separated from the other parts of a research site – separate electronic access, etc. That’s partly for security, because of people like this, and partly because the fewer disturbances among the critters, the better the data. One bozo flipping on the wrong set of lights at the wrong time can ruin a huge amount of effort. The people authorized to work in the animal labs have enough on their hands keeping order – I recall a run of assay data that had an asterisk put next to it when it was realized that a male mouse had somehow been introduced into an all-female area. This proved disruptive, as you’d imagine, although he seemed to weather it OK.
Beyond the mouse and rat, things branch out. That’s often where the mechanistic models stop, though – there aren’t as many disease models in the larger animals, although I know that some cardiovascular disease studies are (or have been) run in pigs, the smallest pigs that could be found. And I was once in on an osteoporosis compound that went into macaque monkeys for efficacy. More commonly, the larger animals are used for pharmacokinetics: blood levels, distribution, half-life, etc. The next step for most compounds after the rat is blood levels in dogs – that’s if there’s a next step at all, because the huge majority of compounds don’t get anywhere near a dog.
That’s a big step in terms of the seriousness of the model, because we don’t use dogs lightly. If you’re getting dog PK, you have a compound that you’re seriously considering could be a drug. Similarly, when a compound is finally picked to go on toward human trials, it first goes through a more thorough rat tox screen (several weeks), then goes into two-week dog tox, which is probably the most severe test most drug candidates face. The old (and cold-hearted) saying is that “drugs kill dogs and dogs kill drugs”. I’ve only rarely seen the former happen (twice, I think, in 19 years), but I’ve seen the second half of that saying come true over and over. Dogs are quite sensitive – their cardiovascular systems, especially – and if you have trouble there, you’re very likely done. There’s always monkey data – but monkey blood levels are precious, and a monkey tox screen is extremely rare these days. I’ve never seen one, at any rate. And if you have trouble in the dog, how do you justify going into monkeys at all? No, if you get through dog tox, you're probably going into man, and if you don't, you almost certainly aren't.
Category: Animal Testing | Drug Assays | Drug Development | Pharmacokinetics | Toxicology
December 5, 2007
How many hits can a drug – or a whole class of drugs – take? Avandia (rosiglitazone) has been the subject of much wrangling about cardiovascular risk in its patient population of Type II diabetics. But there have also been scattered reports of increases in fractures among people taking it or Actos (pioglitazone), the other drug with the same mechanism of action.
Now Ron Evans and his co-workers at Salk, who know about as much PPAR-gamma biology as there is to know, have completed a difficult series of experiments that provides some worrying data about what might be going on. Studying PPAR-gamma’s function in mice is tricky, since you can’t just step in and knock it out (that’s embryonic lethal), and its function varies depending on the tissue where it’s expressed. (That latter effect is seen across many other nuclear receptors, which is just one of the things that make their biology so nightmarishly complex).
So tissue-specific knockouts are the way to go, but the bones are an interesting organ. The body is constantly laying down new bone tissue and reabsorbing the old. Evans and his team managed to knock out the system in osteoclasts (the bone-destroying cells), but not osteoblasts (the bone-forming ones). It’s been known for years that PPAR-gamma has effects on the development of the latter cells, which makes sense, because it also affects adipocytes (fat cells), and those two come from the same lineage. But no one’s been able to get a handle on what it does in osteoclasts, until now.
It turns out that without PPAR-gamma, the bones of the mice were larger and much more dense than in wild-type mice. (That’s called osteopetrosis, a word that you don’t hear very much compared to its opposite). Examining the tissue confirmed that there seemed to be normal numbers of osteoblasts, but far fewer osteoclasts to reabsorb the bone that was being produced. Does PPAR stimulation do the opposite? Unfortunately, yes – there had already been concern about possible effects on bone formation because of the known effects on osteoblasts, but it turned out that dosing rosiglitazone in mice actually stimulates their osteoclasts. This double mode of action, which was unexpected, speeds up the destruction of bone and at the same time slows down its formation. Not a good combination.
So there’s a real possibility that long-term PPAR-gamma agonist use might lead to osteoporosis in humans. If this is confirmed by studies of human osteoclast activity, that may be it for the glitazones. They seem to have real benefit in the treatment of diabetes, but not with these consequences. Suspicion of cardiovascular trouble, evidence of osteoporosis – diabetic patients have enough problems already.
As I’ve mentioned here before, I think that PPAR biology is a clear example of something that has turned out to be (thus far) too complex for us to deal with. (Want a taste? Try this on for size, and let me assure you that this is a painfully oversimplified diagram). We don’t understand enough of the biology to know what to target, how to target it, and what else might happen when we do. And we've just proven that again. I spent several years working in this field, and I have to say, I feel safer watching it from a distance.
Category: Biological News | Diabetes and Obesity | Toxicology
September 27, 2007
Yet another study has shown no link between the former vaccine additive thimerosal and neurological problems in children. This one evaluated over a thousand seven-to-ten year olds for a long list of outcomes, and came up negative. No strong correlations were found, and the weak ones seemed to spread out evenly among positive and negative consequences.
This is just the kind of data that researchers are used to seeing. Most experiments don't work, and most attempts to find correlations come up empty. The leftovers are a pile of weak, unconvincing traces, all pointing in different directions while not reaching statistical significance. For a study like this one, though, this is a good answer. The question is "Does thimerosal exposure show any connection to any of these forty-two neurological symptoms?", and the answer is "No. Not as far as we can see, and we looked very hard indeed."
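To put a rough number on how many of those weak, scattered traces you would expect from chance alone, here's a back-of-the-envelope sketch. It assumes, purely for illustration, that the forty-two outcomes were independent and each tested at the conventional alpha of 0.05; the study's actual statistical procedure was surely more involved than this.

```python
# Back-of-the-envelope multiple-comparisons arithmetic, assuming (for
# illustration only) 42 independent outcomes each tested at alpha = 0.05.
# The "42" echoes the post; independence is a simplifying assumption.

alpha, n_tests = 0.05, 42

# Expected number of "significant" findings even if no real effect exists
expected_false_positives = alpha * n_tests

# Probability of seeing at least one such chance finding
p_at_least_one = 1 - (1 - alpha) ** n_tests

print(f"expected chance 'hits': {expected_false_positives:.1f}")   # 2.1
print(f"P(at least one false positive): {p_at_least_one:.2f}")     # 0.88
```

In other words, under these simplified assumptions a study like this would be expected to turn up a couple of spurious "hits" almost nine times out of ten, which is why a pile of weak, mixed-direction correlations is exactly what a true negative result looks like.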
And this isn't the first study to find the same sorts of results. The fact that reports of autism do not appear to decrease after thimerosal is removed from circulation should be enough on the face of it, but there's the problem. To the committed believers, those data are flawed. And these latest data are flawed. All the data that do not confirm that thimerosal is a cause of autism are flawed. Now, if this latest study had shown the ghost of statistical significance, well, that would no doubt be different. But it didn't, and that means that there's something wrong with it.
The director of SafeMinds, a group of true thimerosal believers if ever there was one, actually was on the consulting board of this latest study. But she withdrew her name from the final document. The CDC is conducting a large thimerosal-and-autism study whose results should come out next year. Here's a prediction for you: if that one fails to show a connection, and I have every expectation that it'll fail to show one, SafeMinds will not accept the results. Anyone care to bet against that?
As a scientist, I've had to take a lot of good, compelling ideas of mine and toss them into the trash when the data failed to support them. Not everything works, and not everything that looks as if it makes sense really does. It's getting to the point with the autism/thimerosal hypothesis - has, in fact, gotten to the point quite some time ago - that the data have failed to support it. If you disagree, and I know from my e-mail that some readers will, then ask yourself what data would suffice to make you abandon your belief. If you can't think of any, you have moved beyond medicine and beyond science, and I'll not follow you.
Category: Autism | The Central Nervous System | Toxicology
July 30, 2007
I have to say, I think the FDA vote on Avandia (rosiglitazone) was well done. As those following the story know, David Graham at the agency had been pushing to have the drug removed from the market, but a panel just voted 22 to 1 to keep it, albeit with warning labels and stricter standards for use.
That's as it should be. As mentioned here before, we really don't yet have hard enough data on the compound's risks and whether or not they outweigh its benefits. I think that this decision is a grown-up one: to say that yes, rosiglitazone appears as if it may have some cardiovascular risks, but since we're not sure about that, for now it appears that they're risks worth taking. It'll be up to patients and their physicians to make the call.
Would that it always worked this way! Drugs have side effects, however much we might wish that they didn't. Some are bad enough to wipe out any benefit, and some aren't. It's a judgement call every time, and I'm glad that the panel exercised some instead of going reflexively with the "no risk to anyone for any reason, ever" mindset.
Avandia may well be taken off the market when more data come in. And if that happens, then we'll know that it was the right decision. But we don't know that yet. How, I wonder, will this vote affect the lawsuits that are already being prepared? Can you sue for being (allegedly) harmed by a drug the FDA just re-reviewed and decided to keep?
Category: Toxicology
June 15, 2007
Everyone will have heard the news about Wednesday's FDA Advisory Committee vote on Acomplia / Zimulti (rimonabant). If you'd tried to convince folks a few years ago that this drug wouldn't make it to a vote until summer of 2007, and would be unanimously rejected when it did, you'd have been looked at with pity and concern. No, this drug was going to conquer the world, and now people are talking merger-of-desperation.
Hey, you don't even have to go back a few years. Here's an article from 2006:
"A new anti-obesity pill that market observers say could become the world's biggest-selling drug is close to getting approval from the European Commission. . .
Gbola Amusa, an analyst with research firm Sanford C Bernstein, said that Acomplia could achieve $4.1bn in annual sales by 2010, in part because it has been shown in clinical trials not only to trim fat but to increase levels of good cholesterol and control diabetes.
"In the blue sky scenario, this could become the world's best- selling drug as the indication is so broad," he said. "It has a path to revenues that we rarely ever see from a pharma product."
Oh, the blue sky scenario. I'm no stranger to it myself - I love the blue sky scenario. But how often does it ever descend to earth? It's not going to do it this time. Sanofi-Aventis was reduced to making the suggestion that every potential patient be first screened for depression, which doesn't sound like the sort of iron wrecking ball that usually gets welded to the world's best-selling drugs.
In the wake of this development disaster, here are a few points that may not get the attention they deserve: first, consider the money that S-A has spent on this drug. We're never going to be shown an accurate accounting; no one outside the upper reaches of the company will ever see that. But I seriously doubt if they've ever spent more on any program. There's an excellent chance that most of it will never be recovered, not by rimonabant - it'll have to be recovered by whatever drugs the company can come up with in the future. They'll be priced accordingly.
Second, think about the position of their competitors. All sorts of companies have pursued this wonder blockbuster opportunity. If you run CB-1 antagonists through the databases, all kinds of stuff comes hosing out. Merck and Pfizer are the companies that were most advanced - you don't get much more advanced than Phase III clinical trials - but plenty of others spent time and money on the chase. All of those prospects have taken grievous damage. Odds are that rimonabant's problems are mechanism-related, and proving otherwise will be an expensive job. This is something to consider when you next hear about all those easy, cheap me-too drugs.
And finally, it's worth thinking about what this says about our abilities to prosecute drug development in general. Just as in the case of Pfizer's torcetrapib, we have here a huge, expensive, widely anticipated drug that comes down out of the sky because of something we didn't know about. It's going to happen again, too. Never think it won't. This is a risky, white-knuckle business, and it's going to be that way for a long time to come.
Category: Clinical Trials | Diabetes and Obesity | The Central Nervous System | Toxicology
June 11, 2007
The FDA briefing documents for Wednesday's discussion of Acomplia / Zimulti (rimonabant) have been posted, and they're an interesting read indeed. As everyone in the industry knows, this drug was once looked on as the next potential record-breaker, and writing the first part of this sentence in that verb form tells you a lot about what's happened since. It's the first antagonist targeting the cannabinoid CB-1 receptor, and at one point it looked like it was going to make people lose their excess weight, shed their addictions, and for all I know refinance their mortgages.
But then the delays hit in the US - long, long ones, delays which made fools of everyone who tried to predict when they would be over. And the drug meanwhile made it to market in Europe, where it has very quietly done not very much.
Now we may be seeing some of the reasons for the FDA's "approvable" letter over a year ago. It's not efficacy - the FDA's briefing summary states that:
"Rimonabant 20 mg daily vs. placebo was associated with statistically and clinically significant weight loss. Rimonabant 5 mg daily vs. placebo was associated with statistically significant but clinically insignificant weight loss. . .rimonabant 20 mg daily vs. placebo was associated with a statistically significant 8% increase in HDL-C and a statistically significant 12% decrease in TG levels. There were no significant improvements in levels of total or LDL-C in the rimonabant 20 mg daily vs. placebo group. . .rimonabant 20 mg compared with placebo was associated with a statistically significant 0.7% reduction in HbA1c in overweight and obese subjects with type 2 diabetes taking either metformin or a sulfonylurea."
Not bad - just the sort of thing you'd want to go after the whole obesity/diabetes/cardiovascular area, you'd think. But the problem is in the side effects, and one in particular:
"The incidence of suicidality – specifically suicidal ideation – was higher for 20 mg rimonabant compared to placebo. Similarly, the incidence of psychiatric adverse events, neurological adverse events and seizures were consistently higher for 20 mg rimonabant compared to placebo. . ."
They're also concerned about other neurological side effects, and seizures as well. The seizure data don't look nearly as worrisome, except in the obese diabetic patients, for whom everything seems to be amplified. And all of this happens at the 20-mg dose, not at the 5 (which doesn't do much for weight, either, as noted above). And for those who are wondering, yes, on my first pass through the data, I find these statistics much more convincing than I did the ones on the Avandia (rosiglitazone) association with cardiac events.
I had my worries about rimonabant a long time ago, but not for any specific reason. It's just that I used to work on central nervous system drugs, and you have to be ready for anything. Any new CNS mechanism, I figured, might well set off some things that no one was expecting, given how little we understand about that area.
But isn't it good to finally hear what the arguing is about? Sanofi-Aventis has been relentlessly tight-lipped about everything to do with the drug. I can see why, after looking at the FDA documents, but this isn't a problem that's going to go away by not talking about it. The advisory committee meeting is Wednesday. Expect fireworks.
Category: Cardiovascular Disease | Clinical Trials | Diabetes and Obesity | The Central Nervous System | Toxicology
May 24, 2007
Steve Nissen has (once again) made waves with an analysis of cardiovascular risk. This time the subject is Avandia (rosiglitazone), a therapy for diabetes that's the oldest PPAR-gamma drug on the market. A meta-analysis of 42 reported clinical trials of the drug led to the conclusion that rosiglitazone is associated with a statistically significant risk of cardiac events.
The similarities to the Vioxx situation are what have made headlines (and what sent GlaxoSmithKline's stock down about 8% on the day the paper was released). But there are some important differences. Merck ran into the Vioxx numbers in their own clinical data - the arguing has been over whether they recognized the effects earlier (or should have), but it was a specific trial of theirs that led to the statistics that sank the drug. A meta-analysis is a much different beast, since you're trying to fit a large number of different trials, run in different ways for different reasons, into the same framework. Not everyone trusts them, even when the analysis is performed by someone as competent as Nissen, who does mention the limitations of the approach in the paper:
"Our study has important limitations. We pooled the results of a group of trials that were not originally intended to explore cardiovascular outcomes. Most trials did not centrally adjudicate cardiovascular outcomes, and the definitions of myocardial infarction were not available. Many of these trials were small and short-term, resulting in few adverse cardiovascular events or deaths. Accordingly, the confidence intervals for the odds ratios for myocardial infarction and death from cardiovascular causes are wide, resulting in considerable uncertainty about the magnitude of the observed hazard. Furthermore, we did not have access to original source data for any of these trials. Thus, we based the analysis on available data from publicly disclosed summaries of events. The lack of availability of source data did not allow the use of more statistically powerful time-to-event analysis. A meta-analysis is always considered less convincing than a large prospective trial designed to assess the outcome of interest."
And that's what's happening here. A number of people at large diabetes treatment centers aren't ready to buy into a cardiovascular risk for Avandia yet, because they're wary of the statistics. There's a large cardiovascular outcome trial of the drug going on now, which won't wrap up until 2009, but several people seem to want to wait for that as a more definitive answer.
If Nissen's data hold up - and statistically, I'm definitely not up to the task of evaluating his approach - then we might be looking at a Vioxx-like risk level. Out of some 14,000 patients on the drug in the various studies, there were 86 heart attacks in the treatment groups, and 72 in the controls. That comes out to be statistically significant, but (as you can see) the problem is that Type II diabetics have a high background rate of CV problems. Looking at Nissen's Table IV, it also seems clear that most of the significance he's found comes from the pooling of the smaller studies. The larger trials are nowhere near as clear-cut, which makes you wonder if this effect is real or an artifact.
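For readers who want to see where a result like that comes from, here's a minimal sketch of the 2x2 odds-ratio arithmetic behind a meta-analysis endpoint. The event counts (86 vs. 72 heart attacks) are from the post; the 14,000-per-arm denominators are round hypothetical numbers for illustration, not the actual group sizes from Nissen's paper, so the resulting ratio and interval are illustrative only.

```python
import math

def odds_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Odds ratio (group A vs. group B) with an approximate 95% CI
    from the standard error of the log odds ratio."""
    a, b = events_a, n_a - events_a   # events / non-events, group A
    c, d = events_b, n_b - events_b   # events / non-events, group B
    or_est = (a / b) / (c / d)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_est) - z * se)
    hi = math.exp(math.log(or_est) + z * se)
    return or_est, lo, hi

# Event counts from the post; 14,000 per arm is a hypothetical,
# illustrative denominator (the pooled trials were not this tidy).
or_est, lo, hi = odds_ratio_ci(86, 14000, 72, 14000)
print(f"OR = {or_est:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Note what the standard-error formula does with rare events: a few dozen heart attacks out of many thousands of patients leaves the confidence interval wide, which is exactly the "considerable uncertainty about the magnitude of the observed hazard" that the limitations paragraph quoted above warns about.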
I'm certainly not prepared to say one way or another, and I just hope that the ongoing trial settles the question. It's certainly not unreasonable to imagine a PPAR gamma drug having this side effect, but if this were a strong mechanism-based phenomenon the numbers would surely be stronger. If a risk is confirmed, though, we'll then be faced with a risk-benefit question. Does the glycemic control that Avandia provides lead to enough good outcomes to offset any cardiovascular risk over a large population? If you think getting the current numbers is a tough job, wait until you try to work that one out.
Category: Cardiovascular Disease | Clinical Trials | Diabetes and Obesity | Toxicology
May 9, 2007
You'd think that I'd have more to say on the new bill giving the FDA a broader drug safety mandate. It's certainly all over the news as I write. But this seems (to me, anyway) to be a change in degree more than anything else. Congress is asking the agency to do what it does - monitor drug safety, change label wordings if necessary, etc. - but to do more of it. I don't have a problem with that, because with this sort of thing, it all comes down to the execution, and we don't know how that's going to go yet.
And the other reason I'm not as revved-up as I could be about this is that I know that the raw material for everything the FDA does isn't changing a bit. By that I mean all the drugs that get sent in for approval, as well as the ones already on the market. Altering the way that data is collected isn't going to change the underlying data. I don't see how the pharmacopeia is going to be much safer (or much more dangerous) compared to how it is today. Perhaps some safety decisions will be made more quickly, but it's also possible that a flood of new information might obscure some things in the noise.
We know just as much about toxicology as we did yesterday - a lot, from one perspective, but not nearly enough, from another. People who believe in this regulatory change are more likely to believe that we know a lot about side effects and toxicity, and that if we just pay more attention we'll improve the situation greatly. But this isn't a view that I can endorse. As far as I can see, regulatory oversight isn't the limiting reagent here. It's knowledge, and that comes on slowly and in defiance of every lawmaker that ever was.
Category: Toxicology
February 7, 2007
When a drug candidate runs into toxicity trouble, the first question that comes to everyone's mind in the lab is: mechanism-based or not? If the project is a follow-on compound to something that's already made it to the market, the answer is probably already clear - after all, if the first one was clean, why shouldn't the second one be?
But if you're working on a new target, this is a major headache, with major implications either way. If the tox is related to the compound's mechanism of action, you're probably going to have to abandon the compound, and perhaps abandon any hope of a follow-up while you're at it. A really solid link to trouble can kill a target for you and for everyone else in the industry. That sounds like bad news, and in the short run it probably is - but in the long run, it's better to know these things. There are enough things to waste time on already, so getting rid of one isn't such a catastrophe, unless it's your own drug.
On the other hand, if the toxicity isn't mechanism-based, then it's likely due to something odd about the particular compound, some off-target effect that it has. Chasing these things down can be extremely difficult, and often there's no way to really tell what went wrong. You just have to move along another compound, from a different structural series if possible, and hold your breath. At least you know what to look for first. But there's always the horrible possibility that the follow-up compound will show an equally ugly but completely different tox profile, which brings on thoughts of truck-driving school, where you at least would know what the hell is going on.
Of course, the usual reservations apply here (toxicology is full of these). For example, it's always possible that the compound is toxic in one species, but not in another. Happens all the time, actually. But in that case, you'd better have a really, really plausible reason why humans are on the safe side of the line, and convincing ones can be hard to find. Maybe all the problems are caused by a metabolite, and not by the original drug (that one's far from unknown, too). Back to the lab you'll go for that one, too, because you don't know how humans will react to the metabolite, and you can't be quite sure how much of it they'll produce relative to the animals, anyway.
Barring these, though, either the compound is dead, or the whole structural class of compounds is dead along with it, or the whole universe of compounds that work the same way is dead. None of those are necessarily appealing, but those are the main choices, and there's nothing written down - anywhere - that says that you have to get one that you like.
Category: Drug Development | Toxicology
December 6, 2006
Several people have remarked on how large and greasy a molecule torcetrapib is, and speculated about whether that could have been one of its problems. Now, I have as much dislike of large and greasy molecules as any good medicinal chemist, but somehow I don't think that was the problem here.
For the non-medicinal-chemists, the reason we're suspicious of those things is that the body is suspicious of them, too. There aren't all that many non-peptidic, non-carbohydrate, non-lipid, non-nucleic acid molecules in the body to start with - those categories take care of an awful lot of what's available, and they're all handled by their own special systems. A drug molecule is an interloper right from the start, and living organisms have several mechanisms designed to seek out and destroy anything that isn't on the guest list.
An early line of defense is the gut wall. Molecules that are too large or too hydrophobic won't even get taken up well. The digestive system spends most of its time breaking everything down into small polar building blocks and handing them over to the portal circulation, there to be scrutinized by the liver before heading out into the general circulation. So anything that isn't a small polar building block had better be ready to explain itself. There are dedicated systems that handle absorption of fatty acids and cholesterol, and odds are that they're not going to recognize your greaseball molecule. It's going to have to diffuse in on its own, which puts difficult to define, but nonetheless real limits on its size and polarity.
Then there's that darn liver. It's full of metabolizing enzymes, many of which are basically high-capacity shredding machines with binding sites that are especially excellent for nonpolar molecules. That first-pass metabolism right out of the gut is a real killer, and many good drug candidates don't survive it. For many (most?) others, destruction by liver enzymes is still the main route of clearance.
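That absorption-then-liver gauntlet maps onto a standard back-of-the-envelope pharmacokinetic relation: oral bioavailability is roughly the fraction absorbed times the fraction escaping gut-wall metabolism times the fraction surviving first-pass hepatic extraction. A minimal sketch with made-up numbers (none of these figures refer to any real compound):

```python
# Rough first-pass model: F = Fa * Fg * (1 - Eh)
#   Fa: fraction absorbed from the gut lumen
#   Fg: fraction escaping gut-wall metabolism
#   Eh: hepatic extraction ratio on the first pass through the liver
# All numbers below are illustrative only.

def oral_bioavailability(fa: float, fg: float, eh: float) -> float:
    """Fraction of an oral dose that reaches the systemic circulation."""
    return fa * fg * (1.0 - eh)

# A well-absorbed compound can still have poor bioavailability if the
# liver shreds it efficiently on the first pass:
f = oral_bioavailability(fa=0.90, fg=0.80, eh=0.70)
print(f"F = {f:.3f}")  # 0.90 * 0.80 * 0.30 = 0.216, i.e. ~22%
```

The point of the arithmetic: even with 90% absorption, a 70% hepatic extraction ratio leaves only about a fifth of the dose standing.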
Finally, hydrophobic drug molecules can end up in places you don't want them. The dominant solvent of the body is water, of course, albeit water with a lot of gunk in it. But even at their thickest, biological fluids are a lot more aqueous than not, especially when compared to the kinds of solvents we tend to make our molecules in. A hydrophobic molecule will stick to all sorts of things (like the greasier exposed parts of proteins) rather than wander around in solution, and this can lead to unpredictable behavior (and difficulty getting to the real target).
That last paragraph is the one that could be relevant to torcetrapib's failure. The others had already been looked at, or the drug wouldn't have made it as far as it did. But the problem is that for a target like CETP, a greasy molecule may be the only thing that'll work. After all, if you're trying to mess up a system for moving cholesteryl esters around, your molecule may have to adopt a when-in-Rome level of polarity. The body may be largely polar, but some of the local environments aren't. The challenge is getting to them.
Category: Cardiovascular Disease | Drug Development | Pharmacokinetics | Toxicology
December 4, 2006
One thing that the Pfizer debacle makes you wonder about is: were they trying too hard? Torcetrapib seems to have done a fine job raising HDL on its own - so it was only natural to think of combining it with an LDL-lowering statin. If it turns out, though, that the fatal problems that have turned up were the result of the combination therapy, what then? Will the story be that Pfizer brought the roof down on itself by trying to extend the profitable lifetime of Lipitor?
It turns out that we can answer that question. What if the compound had been developed by a company that didn't have a statin of its own to promote? We don't have to wonder: that's the situation with the Roche/JTT compound. Roche has no statin in its stable. But when you look at the trials they've been running, well. . .
. . .patients will be randomized to receive either CETP inhibitor (900mg po) or placebo po daily for 24 weeks, with concomitant atorvastatin 10 to 80 mg daily. . .
. . .This study will evaluate the efficacy and safety of three doses of CETP Inhibitor when co-administered with pravastatin. . .
. . .Patients eligible to participate in the extension study will continue on the treatment they were originally assigned to ie CETP inhibitor (900mg po) or placebo daily, with concomitant daily atorvastatin (10 to 80mg po). . .
So why the constant statin drumbeat? There's actually a good reason. As it happens, monotherapy trials of torcetrapib seemed to show that it could lower LDL a bit on its own - but only in patients without high triglycerides. Unfortunately, most of the patient population for the drug has high triglycerides, so there you are. You could always try to make the argument that HDL elevation alone might be beneficial, but no one's quite sure if that would be enough, especially given that lowered LDL has been shown to be beneficial in cardiac outcomes.
Roche, of course, is at the moment just packed with people who'd like to know what (if anything) there is about the statin/CETP combination that could turn awful. I wonder how long it'll be before we find out?
Category: Cardiovascular Disease | Clinical Trials | Toxicology
November 27, 2006
So, what actually happens, down at the molecular and cellular level, when a person is exposed to alpha radiation? If it’s coming from outside the body, not all that much. The outer layer of dead skin cells is enough to soak up most of the damage, and it’s not like alpha particles can make it that far through the air, anyway. This is good news for Londoners worried about exposure (I note that reports have at least three sites there showing traces of radioactivity). I strongly discourage anyone from standing around next to an alpha source, but there are a lot worse things that you can stand next to - a gamma or fast neutron source, for example, either of which will penetrate your tan and keep on going.
But inside the body, that’s a different story. Alexander Litvinenko was given polonium in his food or drink, and from there the stuff distributes fairly widely across many tissues. At lower radioactive doses, that pattern is probably a good thing. When you have a radionuclide that concentrates in a particular tissue, like iodine in the thyroid, a dose that would be bearable across the entire body can cause a lot of local damage when it piles up. At higher doses, though, the situation can flip around. People can survive with damaged thyroid glands, or after total bone marrow transplants or the like. But general tissue damage is much harder to deal with.
Polonium ends up concentrating in the kidneys, to the extent that it concentrates anywhere, and attempts have been made to minimize radiation damage there. But by then an awful lot of destruction has occurred elsewhere – the blood-forming tissues, the linings of the gastrointestinal tract and the blood vessels themselves, and others. Note that these are all fast-dividing cell populations.
Zooming in, the mechanisms for all that mayhem are complex, and they’re still not completely understood. The first thing you can imagine is the alpha particle smacking into something, which to a first approximation is exactly what happens. They don’t get far – less than 100 micrometers. But along the way they can bash into quite a few things, losing some energy each time, which shows up as flung-off electrons, various strengths of photons, and doubtless some good old kinetic bouncing around. Eventually, when the particle slows down enough, it drags off a couple of electrons in passing and settles down as a peaceful atom of helium. That leaves some positive charges to account for, though, since those electrons were otherwise employed before being press-ganged, and this ionization (along with that caused by those stray electrons along the way) is one of the major sources of cellular damage.
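A one-line estimate gives a sense of scale for all that ionization. The inputs below (a ~5.3 MeV alpha particle, the energy of Po-210's emission, and roughly 34 eV on average to create one ion pair, the measured value for air, with tissue being of the same order) are standard textbook figures rather than anything from the post:

```python
# Back-of-the-envelope: how many ionizations does a single alpha cause
# before it settles down as helium?
# Assumed figures (not from the post): Po-210 emits a ~5.3 MeV alpha,
# and it costs roughly 34 eV on average to create one ion pair.
alpha_energy_ev = 5.3e6   # 5.3 MeV, expressed in electron-volts
ev_per_ion_pair = 34.0    # mean energy per ionization event

ion_pairs = alpha_energy_ev / ev_per_ion_pair
print(f"~{ion_pairs:,.0f} ion pairs per alpha particle")
```

On the order of 150,000 ionizations, all deposited along a track shorter than the width of a few cells, which is why a single traversal of a nucleus is such bad news.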
All this can take place either in the nucleus or out in the cytoplasm, with different effects. This sort of thing can damage the cell's outer membrane, for one thing, which can lead to trouble. In the nucleus, one of the more dramatic events is sudden double-strand DNA breakage. That's never a good thing, since the strands don't always get put back together correctly. A couple of years ago, a group from the Netherlands was able to come up with dramatic images of chromosome breakage along the tracks made by alpha particles in living cells.
Then there’s also the complication of the “bystander effect”. Untouched cells in the vicinity of one that has taken an ionizing radiation hit also show changes, which seem to be at least partly related to an inflammation response. This seems to happen mostly after damage to the nucleus.
All this focused destruction has long since drawn the attention of people who actually want to kill off cells, namely oncology researchers. Alpha sources conjugated to antibodies are a very big deal in cancer treatment, and a huge amount of work is going on in the area. The antibodies can, in theory, deliver the radiation source specifically to certain cell types, which soak up most of the exposure.
So there's a use for everything. But one of those uses, this time, was assassination. Alexander Litvinenko's killers knew exactly what they were doing, and exactly what would happen to him. I hope that they're eventually found and dealt with proportionately.
Category: Chem/Bio Warfare | Current Events | Toxicology
I was going to write up a piece on thallium poisoning, until word came out over the long weekend that the Russian spy case was instead an instance of polonium poisoning. That's a very different matter indeed.
For starters, polonium isotopes (like most radioactive substances) are much more hazardous as radiological agents than as chemical ones. Unraveling the two isn't always easy, but this case is pretty clear. It's likely that polonium is chemically toxic, since it's in the same series as selenium and tellurium (which both are), but it's also likely that any reasonable dose would kill a person from alpha radiation rather than from whatever enzyme inhibition, etc., that might also ensue. People have been dosed with fairly robust amounts of tellurium and survived, albeit uncomfortably, but I can't imagine that anyone has been exposed to a systemic dose of a hard alpha emitter and pulled through.
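One way to see why the radiological hazard dwarfs the chemical one is to compute the specific activity of Po-210 from its half-life. The ~138-day half-life and 210 g/mol molar mass used below are standard reference values, not figures from the post:

```python
import math

# Specific activity of Po-210: A = lambda * N, with lambda = ln 2 / t_half.
# Assumed reference values: half-life ~138.4 days, molar mass ~210 g/mol.
T_HALF_S = 138.4 * 86400                 # half-life in seconds
DECAY_CONST = math.log(2) / T_HALF_S     # decay constant, per second
AVOGADRO = 6.022e23

mass_g = 1e-6                            # one microgram of Po-210
atoms = mass_g / 210.0 * AVOGADRO        # number of Po-210 atoms
activity_bq = DECAY_CONST * atoms        # decays per second (becquerels)

print(f"1 microgram of Po-210: ~{activity_bq:.2e} Bq")
```

A single microgram works out to well over a hundred million decays per second, each one a ~5 MeV alpha dumped into nearby tissue. Chemical toxicity never gets a chance to matter at doses like that.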
This takes us into the long-standing arguments about the toxicity of such isotopes. Readers who remember the anti-nuke days of the 1970s and 80s may recall the statements about plutonium's incredible toxicity, generally expressed in terms of how minuscule an amount would be needed to kill every human being on the planet. Left unsaid in those calculations was that said plutonium would have to be dosed internally in some bioavailable form. More Pu was surely vaporized in the atmospheric bomb tests of the 1950s, without depopulating the Earth to any noticeable extent. (See the arguments here, for example).
Here, though, we have a case of that exact bioavailable dosing of a strong radioisotope, with the unfortunate effects that you'd predict. There were some experiments early in the atomic research era where patients were dosed with radioactive isotopes. Oddly, the polonium experiments may have been the only ones that stand up to ethical scrutiny. A good review of what's known about polonium exposure, at least as of 1988, can be found here.
One thing that many people may not realize is that every person on the planet has some polonium exposure. There are many people who equate "radioactive" with "man-made", but those categories don't completely overlap. Polonium is a naturally occurring element, although certainly not in high abundance, but there's enough for Marie Curie to have isolated it. It's part of the radioactive decay series of U-238, and as a daughter radionuclide is a contributor to the toxicity of radium and radon exposure. You've had it - but not like this.
Category: Current Events | Toxicology
November 17, 2006
OK, here's a stumper: are there any anti-inflammatory medications that don't have cardiovascular side effects? Aspirin's GI bleeding effects have been known for decades, various NSAIDs have had warnings turn up over the years, the COX-2 drugs are somewhere over there in that huge cloud of legal and statistical dust, and now a study says that one of the last left standing, naproxen, may have cardiac effects as well.
Or does it? This is an attempt to get some useful data out of a large Celebrex trial, looking to see if it had protective effects against Alzheimer's disease, and the whole thing was stopped early when all the COX-2 cardiovascular risks became an issue. As this article makes clear, Steve Nissen isn't convinced, and he's not a person who keeps his worries about drug safety all bottled up inside. His point is that the trial's statistical validity was ruined by the early halt, and that larger epidemiological studies don't back up its conclusion. I should note that he's now running a massive trial addressing this issue as well.
Contrast that with this quote from one of the new paper's authors:
"Particularly for safety data, 'truth' may come in small doses. We firmly believe that results from trials should be published regardless of the direction, magnitude, or statistical significance of the observed results," said Barbara Martin of the John Hopkins University School of Public Health, who worked on the study.
Let me tell you, that kind of thing makes me very nervous. Regardless of the magnitude or statistical significance? That's not a way to arrive at the truth. That's a tornado passing over a pig farm.
Category: Cardiovascular Disease | Toxicology
September 24, 2006
I see that Dylan found an old bottle of L-DOPA in his stockroom - I'd handle that one with gloves, but that's the medicinal chemist in me talking. He segues into a discussion of the MPTP story, which I talked about here a while back. Every med-chemist who's done work on central nervous system drugs knows the story, in my experience.
But that knowledge doesn't seem to be universal. I once, some years ago, had a lab associate from another group mention to me casually that he'd just made a batch of an intermediate, which when he drew it out on the hood sash, turned out to be the para-bromo analog of MPTP. I couldn't believe my eyes, and I stared at him in horror, wondering if this was some sort of joke. "You what?" It was then his turn to stare at me, wondering what was wrong. He had never heard of MPTP, of the irreversible Parkinson's syndrome that it causes, had no idea that there was a problem, and so on.
We established that he'd made a good-sized load of the stuff, but that he hadn't been handling it to any great degree (and had been wearing gloves when he worked up the reaction). I put the fear into him, warning him under no circumstances to touch the stuff or mess with any glassware involved, and contacted the toxic waste disposal folks. They charge quite a bit to haul things like that away, I think.
In the meantime, I read up on the structure-activity relationships that had been worked out for these compounds. A key paper by Mabic and Castagnoli in J. Med. Chem. (39, 3694) showed that the 4-bromo compound was, unfortunately, an "excellent substrate" for MAO-B, the enzyme that turns these structures into the neurotoxic species, so odds were excellent that the compound was trouble.
But not once it was taken away and destroyed, anyway. The person who made it developed no symptoms over the next couple of years, as far as I could observe. (And I believe that you need a pretty good internal dose to get into trouble - light skin contact probably won't do it). Memos went out to everyone reminding them of these structures and why they shouldn't be messed with. But I still wonder how many people might stumble across these compounds and whip up a batch of something that shouldn't be made. That's another argument for electronic lab notebooks. You could set the things to start honking and flashing if you entered such a target structure into them, to alert the clueless. . .
Category: The Central Nervous System | Toxicology
September 1, 2006
So, how are things going with the Merck/Vioxx situation? Short answer: confusingly. Earlier this month, Merck's winning verdict from last November was thrown out, and the whole case will have to be retried next year. To balance that out a bit, this week one of the company's big losses had its $51 million dollar award overturned - the judge says that Merck is still liable, but that the award was "grossly excessive".
To add to the uncertainty, another Merck loss in Rio Grande City, Texas is coming under scrutiny. Today's Wall Street Journal has a front-page article (subscription link) reporting that one of the jurors in that trial had a history of borrowing money (thousands of dollars worth) from the plaintiff, a fact that certainly didn't come out during the voir dire. Merck, needless to say, is looking into having this verdict thrown out as well.
All this makes it impossible to say just what's happening. The reversals (and coming re-reversals, for all I know) just make it even harder to grasp. Even without these backtracks, the whole business is moving on a slower-than-human-attention-span time scale, which is the problem with a lot of important issues. Watching grass grow and paint dry can actually be very useful, but we're not equipped to do it very well.
For the time being, anyway, the flood of damaging information and bad decisions coming from Merck's side seems to have receded, perhaps because there wasn't much left to accomplish in that line. Interestingly, if you were a Merck shareholder before the Vioxx disaster hit, you're still underwater - but if you bought afterwards, you've done extremely well. This seems to reflect (understandable) panic at first, followed by relief that the company was capable of winning a case or two and not immediately disappearing beneath the flood waters. But if we're going to try every case at least twice, I don't see the stock making much headway for a while.
Category: Business and Markets | Cardiovascular Disease | Toxicology
June 26, 2006
The latest round in the fit-to-never-end saga of the Vioxx APPROVe trial and the New England Journal of Medicine is here. The journal today released a correction of the original paper, a perspective article on the statistics of the original study, and some inconclusive correspondence about the (recalculated) risks.
The correction is notable for removing the earlier statements that it appears to take 18 months for risk to develop in the study's Vioxx patient group. And since Merck's made a big deal out of that timing, this has already become the headline story. (I can recommend this overview by Matthew Herper at Forbes).
The perspective article, by Stephen Lagakos of Harvard, may be fairly heavy going for someone who isn't statistically inclined. I include in that group - please correct me if I'm wrong here - the great majority of newspaper reporters who might be covering the issue (Herper and a few others excepted). I'm no statistician myself, but I spend more time with the subject than most people do, so I'll extract some highlights from Lagakos's piece.
He has a useful figure where he looks at the two incidence curves for the Vioxx and placebo groups. These are the curves that have been the source of so much controversy: whether there was an increased risk only after 18 months of Vioxx therapy, or whether the risk was clear from the outset, and so on. As Lagakos points out, in a slap at Merck's public treatment of the graphs:
"It may then be of interest to assess how the cumulative incidence curves might plausibly differ over time. Doing so by means of post hoc analyses based on visual inspection of the shapes of the Kaplan-Meier curves for the treatment groups can be misleading and should be avoided. A better approach is to create a confidence band for the difference between the cumulative incidence curves in the treatment and placebo groups - that is, for the excess risk in the treatment group."
He does just that, at the 95% confidence level. What it shows is that well past the disputed 18-month point, the 95% confidence band still contains the 0% difference line, and there's room around it on both sides. As he summarizes it:
"The graph shows that there are many plausible differences, including a separation of the curves at times both before and after 18 months, and a consistently higher or lower cumulative incidence in the rofecoxib group, relative to the placebo group, before 18 months."
In other words, the data don't really add much support to anyone's definitive statements about Vioxx risks before 18 months. The 95% band only widens out to a plus or minus 1% difference in cumulative incidence rates at a time between 18 and 24 months. The upper and lower bounds are both creeping up at that point, but the band only rises to an all-positive difference between the two groups at the 30-month mark. By the 36-month point, the last in the study, the 95% confidence band is between a 1% and a 4.5% risk difference for Vioxx therapy compared to placebo.
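For readers who want to see the machinery, here is the simplest version of the underlying idea: a pointwise, normal-approximation confidence interval for the difference in cumulative incidence between two groups. This is not Lagakos's simultaneous band (which is the more rigorous construction), and the event counts below are invented, but it shows concretely how a 95% interval for excess risk can straddle zero:

```python
import math

# Pointwise 95% CI for the difference of two cumulative incidence
# proportions (Wald / normal approximation). NOT Lagakos's simultaneous
# confidence band -- just the textbook version of the same idea.
# All event counts below are hypothetical, not APPROVe data.

def risk_diff_ci(x1: int, n1: int, x0: int, n0: int, z: float = 1.96):
    """95% CI for (p1 - p0), the excess risk in the treated group."""
    p1, p0 = x1 / n1, x0 / n0
    se = math.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
    diff = p1 - p0
    return diff - z * se, diff + z * se

# Hypothetical: 20/1300 events on drug vs 12/1300 on placebo.
lo, hi = risk_diff_ci(20, 1300, 12, 1300)
print(f"excess risk, 95% CI: [{lo:.4f}, {hi:.4f}]")  # interval contains 0
```

Even though the treated group has nearly twice the events, the interval includes zero: exactly the "many plausible differences" situation Lagakos describes.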
This doesn't help Merck - in fact, since they've made such a lot of noise about this 18-month threshold, it does them quite a bit of damage. But it doesn't directly help the plaintiffs who are suing them, either - the good news for them is that Merck is looking bad again.
Lagakos goes on to talk about what these demonstrated long-term risks can tell us about short-term ones. Assuming that the risk for, say, 12 months of Vioxx is somewhere between the placebo group and the 36-month figure (a reasonable assumption), these figures will set the upper and lower bounds. The most optimistic outcome, then, is that 12 months of Vioxx does nothing to you at all, compared to placebo, even after another two years of observation. And the most pessimistic outcome is that the Vioxx you took continues to increase your risk the same as if you'd been taking it the whole three years (a damage-is-already-done scenario). Although Lagakos doesn't name these as such, you could call these two boundaries the Merck line and the Trial Lawyer line, because they correspond to what each side would fervently like to believe is true.
Combining this with his 95% confidence band plot, you end up with a figure that shows that, within 95% confidence, the excess risk for a 12-month treatment could still range anywhere from zero up to the worst that was seen in the full-term-treatment group. So, because this range still includes the no-effect outcome, you can't conclude that a shorter course of Vioxx was harmful. But because it includes the data of the out-to-three-year group, you can't conclude it's safe, either. And that's really the best you can do. If you're not willing to make those starting assumptions, you can't really say anything about the shorter courses of treatment at all.
This is, I think, a valid way of looking at the controversy, but in the end, it's not going to satisfy anyone. It makes me think that both Merck and the lawyers going after them will either: (a) pick their favorite sections from this article and beat each other with them like pig bladders, or (b) ignore it completely. (I think that the first one is already happening, with the advantage, for now, to the lawyers). If Merck can make a successful counterattack that the data don't show that Vioxx was harmful for shorter doses, either, perhaps they can get something out of this. That depends, of course, on people believing a single word that they say. Which they're making more difficult all the time.
Category: Cardiovascular Disease | Clinical Trials | Toxicology
June 20, 2006
A comment to the last post asked a good question, one that occurs to everyone in the drug industry early in their career: how many useful drugs do we lose due to falsely alarming toxicity results in animals?
The answer is, naturally, that we don't know, and we can't. Not in the world as we know it, anyway. The only way to really find out would be to give humans compounds that have shown major problems in rats and dogs, and that's just not going to happen. It's unethical, it's dangerous, and even if you didn't care about such things, the lawyers would find something you did care about and go after it.
But how often does this possibility come up? Well, all the time, actually. I don't think that the industry's failure rates are well appreciated by the general public. The 1990s showed that about one in ten compounds that entered Phase I made it through to the market, which is certainly awful enough. But rats and dogs kill compounds before they even get to Phase I, and the failure rate of initiated projects making it to the clinic at all is much higher.
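The attrition arithmetic stacks up fast. The ~1-in-10 Phase-I-to-market figure is from the paragraph above; the preclinical survival rate below is an invented number purely to show how the stages multiply:

```python
# Illustrative attrition math. The Phase-I-to-market rate (~10%) is the
# 1990s figure cited in the post; the preclinical survival fraction is
# an assumed, made-up number for illustration.
preclinical_survival = 0.5   # assumed: half of candidates survive animal tox
phase1_to_market = 0.10      # ~1 in 10 Phase I entrants reach the market

overall = preclinical_survival * phase1_to_market
print(f"odds from candidate nomination to market: ~1 in {1 / overall:.0f}")
```

Under those (hypothetical) stage rates, a nominated candidate has about a 1-in-20 shot, which is why nobody in the industry gets attached to individual compounds.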
So it's not like we take all these rat-killers on to humans, despite what the lunatic fringe of the pharma-bashers might think. Nope, these are the safe ones that go on to cause all the trouble. "Oh, but are they?" comes the question. "How do you know that your animal results aren't full of false green lights, too?" That's a worrisome question, but there are a lot of good reasons to think that the things we get rid of are mostly trouble. For all the metabolic and physiological differences between rodents, dogs, and humans, there are even more important similarities. The odds are that most things that will sicken one of those animals are going to land on a homologous pathway in humans. And the more basic and important the pathway is, the greater the chance (for the most part) that the similarities will still be strong enough to cause an overlap.
But there are exceptions in both directions. We know for a fact that there are compounds that are more toxic to various animal species than they are to humans, and vice versa. But we play the odds, because we have no choice. Whenever a compound passes animal tox, we hope that it won't be one of the rare ones that's worse in humans. But when a compound fails in the animals, there's simply no point in wondering if it might be OK if it were taken on. Because it won't be.
Category: Animal Testing | Clinical Trials | Toxicology
June 19, 2006
So, you're developing a drug candidate. You've settled on what looks like a good compound - it has the activity you want in your mouse model of the disease, it's not too hard to make, and it's not toxic. Everything looks fine. Except. . .one slight problem. Although the compound has good blood levels in the mouse and in the dog, in rats it's terrible. For some reason, it just doesn't get up there. Probably some foul metabolic pathway peculiar to rats (whose innards are adapted, after all, for dealing with every kind of garbage that comes along). So, is this a problem?
Well, yes, unfortunately it is. Rats are the most beloved animal of most toxicologists, you see. (Take a look at the tables in this survey, and note how highly the category "rodent toxicology" always places). More compounds have gone through rat tox than any other species, so there's a large body of experience out there. And the toxicologists just hate to go without it. Now, a lot of compounds have been in mice, for sure, but they just aren't enough of a replacement. The two rodent species don't line up as well as you'd think. And there's no other small animal with the relevance and track record of the noble rat. (People outside the field are sometimes surprised to learn that guinea pigs aren't even close - they get used in cardiovascular work, but that's about it).
So if your compound is a loser in the rat, you have a problem. You can pitch to go straight into larger animals, but that's going to be a harder sell without rat data. If your project is a hot one, with lots of expectations, you'll probably tiptoe into dog tox. But if it's a borderline one, having the rats drop out on you can kill the whole thing off. They use up a lot of compound compared to the mouse, they're more likely to bite your hand, and they're an order of magnitude less sightly. But respect the rat nonetheless.
Category: Animal Testing | Toxicology
May 17, 2006
I mentioned yesterday that my opinion of Merck and their handling of the Vioxx cases isn't very high these days. The reason for this is the press release that the company sent out a few days ago on follow-up data to the APPROVe study, which is the one that caused the company to withdraw Vioxx in the first place.
That study was looking at possible use of Vioxx for the prevention of precancerous colon polyps. That may sound slightly insane if you're not following the field, but there's some biochemical rationale that suggests a role for inhibition of COX-2 against colon cancer. (This would be another huge market, naturally, which is why Merck - and Pfizer - have both looked into it). As the world knows, the study also showed clear evidence of an increased cardiovascular risk after 18 months of Vioxx use, and that's what started us all on the bumpy road to where we are today.
The APPROVe study was designed to have a one-year follow-up period to evaluate how long any colon-related benefits persisted. Unfortunately, it wasn't really designed (or powered, as the clinicians say) to address cardiovascular safety, so everyone just has to take what they can from the data we have. Merck, naturally, takes the current data to mean that Vioxx is doing just fine. They point out that in the post-drug follow-up year, the cardiovascular risk for the group that was taking Vioxx doesn't seem to be statistically different from the group that had been taking placebo.
Which is fine, as far as it goes. A more objective look at the data, though, shows that they didn't miss statistical significance by all that much. The numbers seem to be all against Vioxx, which is enough to make you wonder if the lights would have truly flashed red in a more statistically appropriate study. As it is, Merck is in the position of saying that a study which wasn't expected to show a statistical difference between Vioxx and placebo on heart safety didn't show a difference - and that that's good news.
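What "didn't miss significance by much" means can be made concrete with the simplest tool for the job, a two-proportion z-test. The event counts below are invented for illustration (they are not the APPROVe follow-up numbers), but they show the uncomfortable territory of a trend that falls just short of the conventional 1.96 cutoff:

```python
import math

# Two-proportion z-test (pooled variance) for H0: equal event rates.
# The counts below are hypothetical, chosen only to illustrate a
# "trending but not significant" result -- they are not trial data.

def two_proportion_z(x1: int, n1: int, x0: int, n0: int) -> float:
    """z statistic comparing event proportions x1/n1 vs x0/n0."""
    p1, p0 = x1 / n1, x0 / n0
    pooled = (x1 + x0) / (n1 + n0)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n0))
    return (p1 - p0) / se

# Hypothetical follow-up-year counts: 22 events in 1100 ex-drug patients
# vs 12 events in 1100 ex-placebo patients.
z = two_proportion_z(22, 1100, 12, 1100)
print(f"z = {z:.2f}")  # under the 1.96 threshold, but not by a wide margin
```

Numbers like these are exactly the "muddy but trending-ugly" situation: not significant, but close enough that a larger, properly powered study might well have flashed red.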
Even if the numbers had gone the company's way, statistical arguments are a notoriously hard sell for the defense in front of a jury. Having a bunch of muddy but trending-ugly data is one of the worst things that could have happened to Merck, actually. No one knows, from these numbers, just when the effect of Vioxx on cardiovascular risk might wear off. It's a playground for the lawyers - can't you just hear it? "Isn't it true that more patients had heart attacks on Vioxx? Even during the year after they'd stopped taking the drug? No, no, I didn't ask you for a lesson in statistics - just tell me if more people had heart attacks or not!"
No, no courtroom help there. I hope, for Merck's sake, that no one at the company believes there is, and that no one's charging them by the hour to try to convince them otherwise. At this point, they're going to need something better, and I'm not sure where they're going to get it. It's past the time when we can usefully argue about whether Vioxx should have been withdrawn, about what its risk-benefit ratio is, and whether Merck should be facing thousands of lawsuits or not. They are, and more than this latest batch of data will be needed to fight them.
(See also Jim Hu's comments).
Update: According to today's WSJ, things have gotten even muddier. Here's the subscriber link, and this is a Reuters summary.
Category: Cardiovascular Disease | Toxicology
May 16, 2006
The Wall Street Journal ran an interesting article by David Armstrong the other day on the New England Journal of Medicine and the Merck/Vioxx affair. It's subscriber-only on the WSJ site, but the Pittsburgh Post-Gazette picked it up here. It brings up an angle that I hadn't completely considered:
While Merck has taken the brunt of criticism in the affair, the New England Journal's role in the Vioxx debacle has received little attention. The journal is the most-cited medical publication in the world, and its November 2000 article on Vioxx was a major marketing tool for Merck. . .Internal emails show the New England Journal's expression of concern was timed to divert attention from a deposition in which Executive Editor Gregory Curfman made potentially damaging admissions about the journal's handling of the Vioxx study. In the deposition, part of the Vioxx litigation, Dr. Curfman acknowledged that lax editing might have helped the authors make misleading claims in the article. He said the journal sold more than 900,000 reprints of the article, bringing in at least $697,000 in revenue. Merck says it bought most of the reprints.
The article goes on to detail the role of a public relations consultant in the release and timing of the "Expression of Concern", which I've expressed my own concerns about. The journal seems to have been worried about its own name, and seeking to put the focus back on Merck. And some of these efforts may have gone a bit over the line. Remember the infamous missing data?
Perhaps the most sensational allegation in the journal's expression of concern was that the authors of the November 2000 article deleted heart-related safety data from a draft just two days before submitting it to the journal for publication. The journal said it was able to detect this by examining a computer disk submitted with the manuscript.
The statement was ambiguous about what data the authors deleted, hinting that serious scientific misconduct was involved. "Taken together, these inaccuracies and deletions call into question the integrity of the data," the editors wrote.
In reality, the last-minute changes to the manuscript were less significant. One of the "deleted" items was a blank table that never had any data in it in article manuscripts. Also deleted was the number of heart attacks suffered by Vioxx users in the trial -- 17. However, in place of the number the authors inserted the percentage of patients who suffered heart attacks. Using that percentage (0.4 percent) and the total number of Vioxx users given in the article (4,047), any reader could roughly calculate the heart-attack number. . .
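The back-calculation the WSJ describes is worth a second of arithmetic: round the rate to one decimal place, multiply back out, and you don't land exactly on 17, which is why they say "roughly." A quick sketch, using only the two figures given in the article:

```python
# Figures from the article: 4,047 Vioxx patients, 17 heart attacks.
vioxx_patients = 4047
mi_count = 17

# What the authors printed: the rate, rounded to one decimal place.
published_pct = round(100 * mi_count / vioxx_patients, 1)   # 0.4

# What a reader can recover from the published numbers.
back_calculated = published_pct / 100 * vioxx_patients      # about 16.2

print(published_pct, round(back_calculated, 1))
```

So the rounding costs you about one event, but anyone who cared to check could get within spitting distance of the "deleted" number.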
. . .Many news organizations, including The Wall Street Journal, misunderstood the ambiguous language and incorrectly reported that the deleted data were the extra three heart attacks -- which, if true, would have reflected badly on Merck. The New England Journal says it didn't attempt to have these mistakes corrected.
So, the matter of the missing heart attacks, which was the subject of a lot of heated language around here, appears to be closed. This sheds an interesting light on last December's "reaffirmation" of concern, where the NEJM made so much of the heart attack data and how it should have been included. Just about everyone who read that came away thinking that the whole fuss was about the deletion of the three MI events in the Vioxx treatment group. As you'll see from the comments to that post, many of us spent our time arguing about whether they should have been included or not, what the clinical cutoff date was, and so on.
We could have saved our breath. The heart attacks weren't deleted from the manuscript, and those who thought that they had been were responding to a well-thought-out public relations campaign. My opinion of the NEJM is not being enhanced by these revelations, let me tell you.
Problem is, my opinion of Merck isn't at its highest level these days, either. More on that tomorrow. . .
+ TrackBacks (2) | Category: Cardiovascular Disease | The Dark Side | The Scientific Literature | Toxicology
February 22, 2006
The original "Expression of Concern" editorial over the VIGOR Vioxx trial in the New England Journal of Medicine was an odd enough document already. But today brought an "Expression of Concern Reaffirmed" in the journal, along with replies from the VIGOR authors.
It's going to take some doing to get these folks together, as you'll see. The NEJM's editors, in their "reaffirmation", add a few details to their December 8th expression. Their position is still that there were three heart attacks in the Vioxx treatment group that were not in the data submitted to the journal. And they're not buying the explanation that these took place after the end of the study, either:
"The authors state that these events did occur during the trial but did not qualify for inclusion in the article because they were reported after a "prespecified cutoff date" for the reporting of cardiovascular events. This date, which the sponsor selected shortly before the trial ended, was one month earlier than the cutoff date for the reporting of adverse gastrointestinal events. This untenable feature of trial design, which inevitably skewed the results, was not disclosed to the editors or the academic authors of the study."
Those academic authors (11 of them from seven different countries, led by Claire Bombardier of Toronto) have a reply to all this in the same issue. Regarding those three MI events, they say:
"The VIGOR study was a double-blind, randomized outcomes study of upper gastrointestinal clinical events. We, as members of the steering committee, approved the study termination date of February 10, 2000, and the cutoff date of March 9, 2000, for reporting of gastrointestinal events to be included in the final analysis. Comparison of cardiovascular events was not a prespecified analysis for the VIGOR study. . .the independent committee charged with overseeing any potential safety concerns recommended to Merck that a data analysis plan be developed for serious cardiovascular events. . .As a result, a cardiovascular data analysis plan was developed by Merck. Merck indicated that they chose the study termination date of February 10, 2000, as the cutoff date. . .to allow sufficient time to adjudicate these events. . . (The three events) were neither in the locked database used in the analysis for the VIGOR paper nor known to us during the review process. However, changing the analysis post hoc and after unblinding would not have been appropriate."
The authors go on to say that including the three heart attacks does not, in their view, change the interpretation of the safety data. They also take issue with the journal's contention that the three events were deleted from the manuscript, saying that the table of cardiovascular events in the presubmission draft of the paper never included them in the first place.
The two Merck authors on the paper, in a separate letter, make the same point, and also mention that there was an additional stroke in the naproxen-treated group that didn't make the paper for the same reasons. They reiterate that including the three heart attacks wouldn't have changed anything:
". . .The article clearly disclosed that there was a significant difference in the rates of myocardial infarction in the Vioxx and naproxen arms of the study and reported these rates as 0.4 and 0.1, respectively, with a relative risk reported as 0.2. The inclusion of the post-cutoff myocardial infarctions changes the Vioxx rate to 0.5 but does not meaningfully change the relative risk or the conclusion that there was a significant difference between the two arms of the study. Indeed, with such a small number of events (which were not a primary end point of the study) - and with such wide confidence intervals around them - it is difficult to imagine that this small numerical change could affect the interpretation of the data."
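To make the Merck authors' point concrete, here's the arithmetic on the rounded per-100-patient rates from their letter. (The published 0.2 relative risk came from the unrounded event counts, so the rounded rates give slightly different ratios; this is a sketch, not a reanalysis.)

```python
# Rounded MI rates per 100 patients, as quoted in the letter.
rate_naproxen = 0.1
rate_vioxx_published = 0.4   # the 17 events in the locked database
rate_vioxx_with_late = 0.5   # with the three post-cutoff events added back

# Relative risk of MI on naproxen versus Vioxx, from the rounded rates.
rr_published = rate_naproxen / rate_vioxx_published   # 0.25
rr_with_late = rate_naproxen / rate_vioxx_with_late   # 0.20

print(rr_published, rr_with_late)
```

Either way you slice it, naproxen patients had something like a fifth to a quarter the MI rate - the three extra events nudge the ratio, but they don't change the story.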
Looking at everything together, I'm still coming down on the side of Merck and their academic collaborators in this part of the fight. The post-launch cardiovascular data on Vioxx and its advertising and promotion are worth debating separately, but as for the VIGOR study, I think the NEJM is overreaching. Still, from Merck's viewpoint, I think the damage has already been done. . .
Update: Y'know, it occurs to me that there are a few people who aren't as upset about all this editorial wrangling: the editors of JAMA and the other top-ranked medical journals. They'll be getting some manuscripts that otherwise would have gone to NEJM.
+ TrackBacks (2) | Category: Cardiovascular Disease | The Scientific Literature | Toxicology
January 4, 2006
Via Tyler Cowen at Marginal Revolution I came across this post earlier in the year from a blog called EffectMeasure on the use of rodent models to predict human cancer risks. It's a broadside against the American Council on Science and Health and a petition they filed against the use of high-dose rodent carcinogenicity tests.
Quote the anonymous "Revere":
"The main rhetorical lever ACSH employs is the use of high doses in the animal studies, doses that are much higher than usually faced by humans. But as ACSH knows well (but didn't divulge) there is a technical requirement for using these doses. If one were to use doses in animals predicted to cause cancer at a rate we would consider a public health hazard, we would need tens of thousands of animals to test a single dose, mode of exposure and rodent species or strain. This makes using those doses infeasible. Thus a Maximum Tolerated Dose is used, one that causes no other pathology except possibly cancer and doesn't result in more than a 10% weight loss. The assumption here is that something that causes cancer at high doses in these animals will also do so at low doses. This is biologically reasonable. It is a (surprising) fact that most chemicals, given in no matter how high a dose, won't cause the very unusual and specific biological effect of turning an animal cell cancerous. Cancer cells are not "damaged" cells in the individual sense but "super cells," capable of out competing normal cells. It is only in the context of the whole organism that there is a problem. It is not surprising, then, that very few chemicals would have the ability to turn a normal cell into a biological super cell of this type. Estimates are that far less than 10%, perhaps only 1%, of all chemicals have this ability. Thus western industrial civilization doesn't have to come to a screeching halt if we eliminate industrial chemical carcinogens from our environment.
We know of no false negatives with this process. Every chemical we know that causes cancer in humans also does so in rodents (with the possible exception of inorganic trivalent arsenic, which is equivocal). The reverse question, whether everything that causes cancer in animals also is a human carcinogen, is not testable without doing the actual natural experiment: waiting to see if people get cancer on exposure, an experiment ACSH is only too happy to conduct on the American people to make their corporate sponsors happy."
I've left out (as did the MR post) the part where he called the ACSH "right wing whores", which is the kind of thing that doesn't enhance the statistical arguments very much. Dropping the invective, I want to take up Tyler Cowen's question: is there anything to this critique? My answer: there might be. But there might not be. It's certainly not as clear-cut as the author, cancer epidemiologist though he is, would like to make it - and overstated certainty is one of the very criticisms he's leveling against the ACSH petition.
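One thing I'll grant up front: Revere's "tens of thousands of animals" claim checks out on the back of an envelope. If the true excess tumor rate at a realistic dose is p, the chance of seeing at least one tumor among n animals is 1 - (1 - p)^n, so just to have a 95% chance of observing a single tumor you need n = ln(0.05)/ln(1 - p). (Statistically distinguishing that group from controls takes far more animals still.) A quick sketch:

```python
import math

# Animals needed for a 95% chance of observing at least one excess tumor,
# assuming a true excess rate p and a clean (zero-background) control group.
for p in (0.01, 0.001, 0.0001):
    n = math.log(0.05) / math.log(1 - p)
    print(f"excess tumor rate {p}: ~{n:,.0f} animals")
```

At a 1-in-10,000 excess rate - the sort of risk regulators actually worry about - you're already at roughly thirty thousand animals per dose group before any statistics get done. Hence the Maximum Tolerated Dose. The question is what the MTD results mean, which brings us to the complications.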
Here are some complicating details:
1. The effects of high doses of compounds can be due to their effects on cell division. At such levels, test substances cause irritation and inflammation that promotes cell proliferation. The more cells are forced to divide, the more opportunities there are for the defects that lead to cancer. These effects do not scale well to lower doses. It's the opinion of Bruce Ames (inventor of the Ames test genotoxicity screen) that this problem has completely confounded the interpretation of high-dose animal data. (His article in Angew. Chem. Int. Ed. 29, 1197, 1990 is a good statement of this argument).
2. The statement that "most chemicals, given in no matter how high a dose, won't cause the very unusual and specific biological effect of turning an animal cell cancerous" is not accurate. As Revere surely knows, there are many mutations and pathways that can turn a cell cancerous (which is why I keep harping on the idea that cancer isn't a single disease). Somewhere between one-third and one-half of all synthetic chemicals tested in cell assays or in high-dose animal assays show up as possible carcinogens, depending on your definitions. Interestingly, basically the same proportion of natural products (isolated from untreated foods and other sources) show up as positives, too.
Now, if you want to talk confirmed human carcinogens, then Revere may have a point. There are only some three or four dozen specific chemicals that are confirmed as causes of human cancer. Here's the list. If you read through it, you'll note that many of the 95 agents on it are radioactives or broad categories such as "alcoholic beverages." (Mention should be made of things like nickel, all compounds of which are under suspicion. Check your pockets, though, for your most likely exposure). Specific compounds known as human carcinogens are quite rare. But doesn't that fact support the ACSH's point more than Revere's?
3. Revere's statement that "Cancer cells are not "damaged" cells in the individual sense but "super cells," capable of out competing normal cells. It is only in the context of the whole organism that there is a problem" is also inaccurate. Cancer cells are indeed damaged, right in their growth-regulation and/or apoptosis pathways. A car whose throttle is damaged will run at a higher RPM than a normal model, but I wouldn't call it a "super car". And cancerous cells are often quite recognizably problematic, whole animal or not. They divide like crazy in petri dishes, the same as they do in an animal.
4. The majority of the cancers seen in rat and mouse models are in the liver (which supports the idea that these tumors occur through general strain on their metabolic systems). Human liver cancer is much rarer. The most common human cancer in many countries is lung, caused to a great degree by smoking (which is also likely to have a constant-irritant cell-proliferation component). Of the agents on that IARC list in point #2, only three or four are chemicals (or mixtures) known to induce human liver cancer specifically. This is a significant mismatch.
5. Revere states that "We know of no false negatives with this process. Every chemical we know that causes cancer in humans also does so in rodents. . ." But how about false positives? There are hundreds of compounds that seem to cause cancer in rodents that (as far as we can tell) do not pose a risk to humans. I say "seem to", because these are almost always high-dose studies. But I can even think of some compounds (the PPAR-alpha ligands) that cause all sorts of trouble (including tumors) in rodent livers at reasonable doses, but don't do so in humans. Rodent tox is necessary, but it sure isn't perfect.
There, that should be enough to complicate things. It doesn't make for as dramatic a story as the evil henchmen poisoning America on behalf of their corporate masters, I have to admit. But we'll have to try to get along without the excitement.
+ TrackBacks (0) | Category: Cancer | Toxicology
January 3, 2006
The FDA has recently pulled pemoline (sold in its non-generic days as Cylert) from the market, citing risks of liver toxicity. It's used for narcolepsy, ADHD, fatigue in multiple sclerosis patients, and other indications. In theory, there are several other drugs that are useful for all of these.
In practice, though, we're talking about some poorly understood CNS indications here, and the patient response to drugs of this sort is so heterogeneous that no one understands what's going on. There are people who respond to pemoline that don't respond nearly as well to anything else. And they're an unhappy bunch, because they seem to be willing to take on the liver risks in order to have a drug that works.
One such customer is Teresa Nielsen-Hayden, well-known for the Making Light blog, and this issue has been the hot topic over there, as you'd imagine. You'll notice that the linked blog post takes a rather hostile attitude toward Ralph Nader and his Public Citizen group, because they're taking credit for petitioning the FDA for the drug's removal. The Nielsen-Haydens (no fans of the Bush administration) have been furious at Nader and his people since at least the 2000 election, so this new accomplishment has understandably pushed about all their accessible buttons.
I think they've got a point. I think that if the risks of a given drug are known, that informed patients should have a right to choose that drug's benefits with its risks in mind. Pemoline has had a black-box warning on it for years now, so it's not like its risks have been hidden. Now, it's true that if such a drug remains on the market, some people are going to take it who shouldn't, black box or no black box. But I have to wonder if such people are going to find some other way to get into trouble, no matter how much concerned bureaucracies try to save them.
The situation with Accutane (isotretinoin) and pregnancy is a similar one: there's no way you can miss the fact that it shouldn't be taken by a pregnant woman. But every year, some take it anyway, despite strenuous efforts to prevent such cases. Mind you, Nader's people have been agitating for years to get that one pulled from the market as well.
Wouldn't a similar registration and/or liver-monitoring regime work for the people who have to have pemoline? Perhaps coupled with some sort of indemnification for its manufacturers? How many more potentially useful medications could we have available under such conditions?
If any of my industry readers have suggestions on where to obtain pemoline, I'd be glad to hear them, although Teresa Nielsen-Hayden and many others would be even happier.
+ TrackBacks (0) | Category: Toxicology
November 21, 2005
I wanted to let everyone know that I have an article up on Medical Progress Today, wondering where all those safe drugs we grew up with have gone. . .you know, aspirin, acetaminophen, penicillin, that sort of thing. . .
+ TrackBacks (0) | Category: Toxicology
September 11, 2005
As I sit here typing this evening, my right arm is giving me fits. About a week ago, I had the misfortune, while retrieving a mis-thrown frisbee in the back yard with my two children, of reaching into an area with some poison ivy growing in it. And I've been paying for it ever since.
My immune system has been set off by contact with an alkylated catechol called urushiol (lots of information here). The stuff penetrates the skin quite well, damn it all. It's simultaneously greasy (with a fifteen-carbon tail on it) and has a polar head group (the catechol), so it just wanders in there with all the other lipid molecules and does its thing. Its thing is to get oxidized to an ortho-quinone, which is probably what does the damage.
Quinones are reactive beasts, which is why we don't put catechol groups (or similar precursors) on drug structures if there's any possible way around it. Some years ago, Merck made a big splash in Science and other venues with a small molecule that affected the signaling of the insulin receptor. That was quite a feat, and worthy of the attention - but the molecule itself, derived from a natural product, was a quinone. I rolled my eyes when I first saw it, as did almost everyone else in the industry, and we were correct - Merck was never able to develop the stuff into a real drug.
A notorious exception to this rule is acetaminophen, known also by its brand-name form of Tylenol, which gets metabolized to a reactive quinone-like compound. (There's no way that the compound would be seriously developed today, but that almost certainly goes for aspirin, too, and there you have one reason that it's so hard to run a drug company these days.) The acetaminophen metabolite is cleared handily by one of the body's standard systems (glutathione conjugation), but if you take enough of the stuff to deplete your reserves you're in for some serious liver damage.
So, this quinone has soaked right into me and reacted with some of my cell-surface proteins, prompting my immune system to mount a big inflammatory attack. This response isn't just available in your back yard, though: it's a little-known occupational hazard in research labs. There are many classes of reactive compounds that can penetrate the skin enough to cause trouble. If a particular person's immune system finds fault with the result, they end up with dermatitis that's indistinguishable from vigorous, prolonged poison ivy contact.
A friend of mine went through this in graduate school. He had an enone derivative that he'd made before without incident, but one batch caused his forearms to redden and swell briefly. He didn't make the connection, but the next time he came through that part of the synthesis he really got the business. Realizing what was happening, he ended up passing on that step of his synthesis to someone else in the lab. His immune system had become sensitized, and it was impossible to say how bad further exposures could be.
+ TrackBacks (0) | Category: Toxicology
July 27, 2005
Since I was speaking about who might win the first Vioxx trial, I should finish my thoughts. If Merck loses (which they may or may not deserve to), the next question is what sort of damages they should pay out.
It's safe to say that my estimate and that of Mark Lanier, the chief attorney for the plaintiffs, would differ by several orders of magnitude. He would doubtless be overjoyed by, say, a three hundred million dollar award, and would rejoice at the thought of this opening the floodgates for more of the same. I, on the other hand, would be extremely upset at the likely destruction of a great drug company.
What to do? I sometimes think that every prescription in the country should be covered by a blanket waiver, which would go something like this:
"The manufacturer, together with the Food and Drug Administration, have investigated the safety and efficacy of this medication. By allowing it to be dispensed, these parties have agreed that, on the basis of the available data, its use represents a reasonable return in beneficial health effects, as compared to the concomitant risks.
By accepting this medication, you the consumer agree that you have been informed that these risks are real, and that they can include, but are not necessarily limited to, those detailed in the package insert. Although medications are tested in humans before regulatory approval, no such clinical trials can possibly predict what may occur when a new drug is widely dispensed in the general population. Use of this medication will define you as a participant in the post-approval study of this drug, and may affect your legal status to obtain redress from any real or perceived injuries associated with its use.
Should any criminal activity be shown to have occurred during the development or approval of this drug, including but not limited to the presentation of fraudulent data or the withholding of relevant information, this waiver may be voided in whole or in part."
The chances of such a thing coming into force? Snowball, hell.
+ TrackBacks (0) | Category: Toxicology
February 28, 2005
Well, no one's in any doubt about the main thing moving the biotech and pharma stocks today. Biogen/Idec and Elan shares got dragged through the streets and thrown into the river when they announced that they were pulling their multiple sclerosis drug, the weirdly-named Tysabri. (Here's the letter from the companies to physicians, in PDF format.)
In short, a small number of patients who were getting Tysabri along with another Biogen product (Avonex, aka beta-interferon) came down with progressive multifocal leukoencephalopathy, fortunately also known as PML. It sounds bad, and it is. While multiple sclerosis is a disease characterized by autoimmune attack on the white myelin sheaths of nerve tissue, PML is characterized by rapid, severe demyelination - sort of a multiple sclerosis on fast-forward. It's associated with activation of JC polyomavirus, a relatively obscure agent that a majority of adults carry without showing any symptoms at all. Disruption of the immune system (along with some unknown activating event) can turn the virus loose.
All this comes as today's print edition of the Wall Street Journal had a writeup on Biogen/Idec as the best-returning stock of the last ten years. Last year's 80% gain on Tysabri optimism didn't hurt, and the rest of the article is full of glowing predictions of success which are now but fragile fossils. Even if the treatment makes it back to the market, I can't see how it can ever be what it was going to be.
Tysabri's a monoclonal antibody treatment, which is one reason I haven't talked about it much on this site. That's much more biological than chemical, and I've never worked in the antibody field. But the toxicity problems that have cropped up are going to be handled just like the ones that occur with small molecules. Withdrawing Tysabri voluntarily should help some. But all you have to do this evening is type the name into Google, and you'll see that the first batch of ambulance chasers is already on the case. "Free Case Evaluation for Side Effects Victims," they say. There will be more lawyers than victims, and not for the first time. It won't slow them down much.
+ TrackBacks (0) | Category: Toxicology
February 18, 2005
Well, we're finally coming to the end of the FDA's COX-2 marathon. My predictions will be overtaken by reality soon, but I know that a lot of people are following this, so I'll take a crack at it:
I don't think we're going to see any drugs pulled outright from the US market, but if there is one, it'll probably be Bextra. Assuming it stays on the market, I think that it and Celebrex will pick up additional warnings, along with guidance to prescribe it only to patients who can't tolerate the more traditional anti-inflammatories. The FDA has been trying to do that already, and I think we'll see it emphasized again.
And if those two drugs and Vioxx all suffer from the same risks, which I think is likely the case, then why shouldn't Vioxx come back on the market, as Merck's Peter Kim said the other day? He has a point. After all, the recall was voluntary, not FDA-mandated, and as far as I know, there's nothing keeping Merck from bringing the drug back.
As for Merck's follow-up COX-2 (Arcoxia) and Novartis's Prexige, neither of which is available in the US yet, I think that the FDA will want to see major cardiovascular data. Perhaps they'll let them through, eventually, with the same set of warnings and guidelines I expect to see attached to the current drugs. But if the companies want better terms, they're going to have to show some evidence why they should have them.
So we might end up, after all the mud is washed off the walls, with more COX-2 drugs on the market than we have now. Odd, eh? They'll be fighting inside what's supposed to be (and probably will be, in fact) a smaller market, though. The real financial damage has already been done, because this class of drugs will never be what it was a few months ago. Makes you wonder if a big launch of Arcoxia, et al., is even going to be worth the effort.
Which brings up one more point: in the above, I mentioned prescribing COX-2 drugs "only to patients who can't tolerate the more traditional anti-inflammatories", and I'll bet that many people read that and added ". . .just like they should have all along." And it's true that we probably wouldn't have even seen the cardiovascular side effects in that smaller patient population (nor would we have been bombarded with COX-2 ads, for that matter.)
But that's what companies do - try to broaden their market as much as possible. I think that Merck and Pfizer overdid it in the pain market, true. But keep in mind that some of the bad news about these drugs came from trials of them against other possible targets (colon cancer, for example.) The only way we're going to find out if they work in these indications is to run those trials, and finding bad news instead of good is the risk that we in the industry take. I think it's better to try and lose than to never try. Some may disagree. . .
+ TrackBacks (0) | Category: Business and Markets | Cardiovascular Disease | Toxicology
January 23, 2005
So Johnson and Johnson is the latest company to try to broaden their market for a drug and run into cardiovascular side effects. Their Alzheimer's drug Reminyl (galantamine) makes some money, but is hardly a blockbuster. It's a natural product (derived from daffodil bulbs, of all things), and it's a cholinesterase inhibitor, the same mechanism as the two other Alzheimer's drugs on the market. None of them are gigantic sellers, because they don't do all that much for people, especially once they have serious symptoms. But if you could show beneficial effects in the pre-Alzheimer's population, then the potential number of patients could be much larger. I should, in fairness, point out that the potential benefits to the patients could be larger, too: earlier treatment, before the disease has had more time to do irreversible damage.
Cholinesterase inhibition is a pretty crude tool to help Alzheimer's, but it's all that we have at the moment. The idea is to turn up the volume of neuronal signals that use acetylcholine as a transmitter molecule, by inhibiting the enzyme that would break it down and sweep it out of the synapse. I don't see an obvious connection between this mechanism and the cardiovascular effects that showed up in J&J's trial.
This is another illustration of the same thing that's bringing down the COX-2 inhibitors. The larger the population that takes your drug, and the more clinical trials you run, the better your chance of finding the side effects. All drugs have side effects, and if you turn over enough rocks you'll see them. But without expanding the patient population, you won't be helping all the people you could help, and you won't be making all the money you could make. It's like walking through a minefield. It's what we do for a living over here. What a business!
+ TrackBacks (0) | Category: Alzheimer's Disease | Toxicology
December 27, 2004
Well, some of my colleagues around the industry are at work this week, but I'm not. I've been out of touch for a few days, which suits me fine, and I'll likely be out of touch for a few more before work starts up again.
But I wanted to pass along a few links, courtesy of Medpundit: first off, a couple of good editorials on drug safety, from USA Today and the Wall Street Journal. Sensible stuff, which is thin on the ground these days.
Of course, there are real worries about drug safety - that's the problem; I can't just sit here and say the whole idea is ridiculous. Not when the FDA is sending letters like this (PDF) to AstraZeneca. I'm not sure how long for the world Crestor is, at least at its high dose. . .the drug industry may start off 2005 in the same way that we're ending 2004. Unfestively.
+ TrackBacks (0) | Category: Toxicology
December 21, 2004
And now there's a warning about the anti-inflammatory drug naproxen. It's all over the news this morning, so you've seen the story already. I can only second what Sydney Smith over at Medpundit says:
"And if drugs like Vioxx and Celebrex cause heart disease and strokes because their biochemistry increases the likelihood of clotting, as many critics have suggested, then why would Naprosyn, which has a biochemistry detrimental to clotting, be riskier than Celebrex? Because the findings have all been based on small and insignificant numbers. The difference between 3% of people having a complication and 2% is clinically meaningless.
It's too bad that we've decided to exaggerate the risks of these drugs. If we keep it up, we won't be able to prescribe anything."
I'm not sure if the numbers are meaningless or not - if the trials are sufficiently large and well-designed, the statistical significance is there. But I am sure that if we breed an expectation of "no side effects, ever", then doctors are, indeed, not going to be able to prescribe anything. And I sure won't be able to make anything for them to prescribe.
If Drug X saves 10,000 lives a year, at the cost of killing 100 people a year who are susceptible to its side effects, what should we do? Look for alternatives, surely. Try to identify (in advance) those at risk, of course. Try to come up with a newer analog with an even better profile, naturally. Oh yeah, and sue the pharmaceutical company until they're flat on the ground. That, too.
+ TrackBacks (0) | Category: Toxicology
December 17, 2004
This afternoon the FDA released a statement about the Celebrex news from this morning. This clears things up a bit, but it means that the agency (and Pfizer) have some very hard decisions to make.
There appears to have been a clear dose dependence in the elevated cardiovascular risks. At 400 mg b.i.d. (twice a day), the treatment group showed 3.4 times the risk of the placebo group. At 200 mg b.i.d., the risk was 2.5x. The average duration of treatment was 33 months - pretty substantial, but a lot of people take Celebrex for an extended period for pain relief.
Now, Pfizer has stated that another trial at a similar dose has shown no indication of cardiovascular trouble, and that's true as far as it goes - it's a 400 mg trial, but the drug is dosed q.d. (once a day). How can 400 mg act so differently, taken all at once versus split up? My industry colleagues are already nodding their heads at the probable answer. The difference is coverage.
Coverage, as in how high the blood levels are, for how long - what pharmacologists call AUC (area under the curve) in a blood-level versus time plot. A single 400 mg dose of a typical drug will spike right up as it gets absorbed and hits the bloodstream in a big initial wave, then the line will decay down over several hours, eventually back to uselessly low levels. Ideally, that still works out to decent coverage of the drug's target for a once-a-day dose.
Taking that same drug twice a day at half the dose means that you never hit that maximum level you would in the first dosing regime, but you cover the target longer at pharmacologically active levels. Some drugs work better one way, some the other. And some drugs work differently all by themselves when they're dosed in those two kinds of regimens. It would appear that Celebrex is more worrisome when it covers its targets for longer continuous periods, rather than hitting higher levels but then going away. Perhaps in the latter case things have a chance to get back to normal before the next dose hits, but not in the multiple-dosing protocol.
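The q.d.-versus-b.i.d. difference is easy to see in a toy one-compartment oral PK model (the Bateman equation). Everything here is invented for illustration - the absorption and elimination rates, the volume of distribution, and the 2 mg/L "active level" threshold bear no relation to celecoxib's actual pharmacokinetics:

```python
import math

def conc(dose_mg, t_h, ka=1.0, ke=0.1, v_l=50.0):
    """One-compartment oral model (Bateman equation); all parameters are toy values."""
    if t_h < 0:
        return 0.0  # dose not given yet
    return dose_mg * ka / (v_l * (ka - ke)) * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

def profile(doses, hours=48, step=0.1):
    """Superimpose doses, given as (time_h, dose_mg) pairs, over a time grid."""
    n = int(round(hours / step))
    return [sum(conc(d, i * step - t0) for t0, d in doses) for i in range(n)]

# 400 mg once a day versus 200 mg twice a day, two days of dosing
qd = profile([(0, 400), (24, 400)])
bid = profile([(0, 200), (12, 200), (24, 200), (36, 200)])

threshold = 2.0  # hypothetical minimum active blood level, mg/L (invented)
for name, c in [("q.d.", qd), ("b.i.d.", bid)]:
    print(name, "Cmax:", round(max(c), 2),
          "hours above threshold:", round(sum(0.1 for x in c if x > threshold), 1))
```

With these made-up parameters, the single 400 mg dose gives the higher peak, while the split 200 mg doses keep blood levels above the threshold for more total hours - exactly the "coverage" distinction at issue.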
If this is the case, it's going to be tough figuring out what to do. The FDA is already saying:
"Physicians should consider this evolving information in evaluating the risks and benefits of Celebrex in individual patients. FDA advises evaluating alternative therapy. At this time, if physicians determine that continued use is appropriate for individual patients, FDA advises the use of the lowest effective dose of Celebrex."
I won't mince words: to me, this looks like the first step to pulling the drug. At the very least, I think its use is going to be severely restricted. The fallout? I'll be intensely surprised if Novartis goes ahead with their planned launch of their own COX-2 drug, for one. And Pfizer is looking at some severe trouble, because they're so huge that they need a constant infusion of billion-dollar drugs just to stay where they are. I've never been able to figure out where those were supposed to come from. And I sure can't figure out how they can afford to lose one.
Category: Cardiovascular Disease | Toxicology
December 15, 2004
I've been meaning to get around to the Ukrainian dioxin-poisoning story, and now's the time. Dioxin is, as most everyone should know by now, seriously overrated as a human poison. It's had a reputation as a scary supertoxin, though, and that's my answer to why it seems to have been used in this case: because someone was an idiot. As a tool for assassination, dioxin is the sort of thing that Wile E. Coyote would have used on the Roadrunner.
Which means the Russian KGB probably wasn't involved, because those guys know about toxins. Whatever else you can say about them, they're professionals. Trying to kill someone with dioxin is silly on several levels - for one thing, it's not very acutely toxic at all. What risks it poses are for chloracne (as we've seen) and some increase in long-term cancer rates. It's also very easy to detect analytically, if you think to look for it. Chlorinated aromatic compounds have a long lifetime in the body and a very distinct signature in a mass spectrometer.
Much of what's known about high dioxin exposure in humans comes from the 1976 chemical plant explosion in Seveso, Italy, which exposed thousands of people. While there were hundreds of cases of chloracne, there were no fatalities and no apparent excess of birth defects among women who were pregnant at the time. The compound's overall risk to human health is still being debated, mainly because the long-term data are so close to the noise level. For some details on dioxin's toxicity (from a scathingly sceptical perspective) see JunkScience's take here. For livelier, not to say near-hysterical takes on the subject, just Google the word "dioxin" and brace yourself for a flood of invective from various environmental sites.
For more on the Ukrainian angle, I'd suggest reading Blogs for Industry - they've got several long posts, and a good perspective on some of the other commentary floating around.
Category: Toxicology
November 21, 2004
According to the FDA's safety maverick, David Graham, there are at least five approved drugs out there that are unsafe and should probably be pulled from the market. Roche's Accutane, AstraZeneca's Crestor, GSK's Serevent, Pfizer's Bextra and Abbott's Meridia are all on his chopping block.
Well, Graham may not feel that he has enough power at the FDA, but he sure swings some weight in the stock market. All these companies lost several per cent of their market value on Friday, taking the whole drug sector with them. What everyone wants to know is: are these drugs safe?
What a useless question. I mean it - we're never going to get anywhere with that line of thinking. "Safe" is a word that means different things to different people at different times, which is something you'd think any adult would be able to understand. The only definition that everyone would recognize, at least in part, is "presenting no risk of any kind to anyone." That'll stand as a good trial-lawyer definition, at any rate.
And by that one, not one single drug sold today is safe. Of course they aren't. These compounds do things to your body - that's why you take them - and that's inherently risky. I don't think that I'm alone in the drug industry as someone who never takes even OTC medication unless I feel that the benefits outweigh the risks, and the risks are never zero. I know too much biochemistry to mess around with mine lightly.
The drug industry, the FDA, physicians and most patients recognize that safety standards vary depending on the severity of the disease. Toxicity profiles are tolerated in oncology, for example, that would have stopped development of compounds in almost any other area. And the standards go up as additional drugs enter a market - yes, I'm talking about those evil profit-spinning me-toos. One of the best ways to differentiate a new drug in a category is through a better safety profile.
So when someone asks, "Is drug X safe?", they're really asking a whole list of questions. What are the risks of taking the compound? That is, how severe are the side effects, and how often do they occur? How do those stack up against the benefits of the drug? Then you ask the same set of questions in each patient population for which you have distinguishable answers.
To pick an example from Graham's list, Accutane has big, dark, unmissable warnings all over it about not taking the drug while pregnant or while it's possible to become pregnant. Has that stopped people? Not enough of them, unfortunately. And this is a drug for acne, which can be disfiguring and disabling, but is not life-threatening. If it were a drug for pancreatic cancer, we wouldn't be having this discussion.
The COX-2 inhibitors look much better (in a risk-reward calculation) in the patients who cannot tolerate other anti-inflammatory drugs because of gastrointestinal problems. Vioxx itself also looks a lot more reasonable in patients who are not in the higher-risk cardiovascular categories. But it and the others in its class have been marketed and prescribed to all kinds of people, and the fallout is just starting. How about Pfizer's Bextra? As an article in the New York Times aptly puts it:
"Another disaster in the making? No one knows. The parallels between Vioxx and Bextra are eerie. There are mounting worries about Bextra's safety, just as there were with Vioxx. Drug-safety advocates are calling Bextra a danger, just as they did with Vioxx. Pfizer, Bextra's maker, defends its drug just as Merck did. And studies of Bextra provide ammunition to both sides, just as studies of Vioxx did.
What should the F.D.A. do? The answer is as clear as mud, just as it was with Vioxx. The twin controversies demonstrate the problems that the F.D.A. routinely faces in trying to strike the right balance between the risks and benefits of prescription drugs. There is almost never a perfect answer. . ."
Strike that "almost", guys.
Category: Toxicology
October 19, 2004
The COX-2 story is continuing to thrash around, as it will for some time. The latest news has Pfizer's second-generation compound, Bextra, linked to possible cardiovascular risks. This is data from patients who have undergone open-heart surgery, so we can argue about how applicable this is to the general population, but it's still not good news. The whole drug class didn't need any more clouds over it.
At the same time Pfizer has announced a large cardiac trial of Celebrex. Pfizer's positioning this as a trial to look for cardiovascular benefits, actually, but it's going to be hard to shake the impression that they're looking for risks. It'll be interesting to see how quickly they can enroll patients in that one, although odds are we're not going to be able to find that out.
And over at Merck, they've presented data from a study of their own second-generation drug, Arcoxia. In a trial against diclofenac (a classic anti-inflammatory drug), Arcoxia didn't seem to show an increased risk of heart attack or stroke. But what it did show was a correlation with slightly increased blood pressure, and that's not what Merck or anyone else wanted to hear. You could, in a pessimistic mood, link increased blood pressure to long-term cardiac effects, although there's no way to know if that's what's going on yet.
Remember, the Vioxx problems didn't really show up until the drug had been on the market for some time. You'd think, to read some of the stories (or to hear some of the ads from law firms) that the drug was just mowing down its patients, but that's not what happened. Vioxx's side effects might never have been noticed in the normal course of use - although severe, they're too uncommon to pick up for sure except in a large sample. Patients had to take the drug for over a year to show any problems at all, for one thing.
Without trials specifically designed (and statistically powered) to look for them, the side effects of other COX-2 medications might be invisible. But invisibility isn't an option.
Category: Cardiovascular Disease | Toxicology
October 7, 2004
Since Merck pulled Vioxx (rofecoxib) off the market, the big question has been whether its cardiovascular problems are specific or general. Do all COX-2 inhibitors have this liability? If so, do they all have it to the same extent? The analogy that comes to mind is the statins - all of them have been shown, at high enough dosages, to be associated with a potential for rhabdomyolysis, a serious muscle side effect. But Bayer's entry into the class, Baycol, showed it more than the others and had to be pulled.
Pfizer's been saying that they have seen no evidence of trouble for their Celebrex (celecoxib). But there's an interesting perspective in the latest New England Journal of Medicine. (That link will allow you to download a free PDF of the article.) The author, Garret FitzGerald of U Penn, suggests that there could be trouble enough to go around.
The story involves two signaling molecules formed from the COX enzymes, thromboxane A2 and prostaglandin I2. Aspirin inhibits both COX-1 and COX-2, and suppresses the formation of both of them. The thromboxane, formed by COX-1, causes vasoconstriction and platelet aggregation, while the prostaglandin causes the opposite - it dilates blood vessels and inhibits platelet aggregation.
Neither Vioxx nor Celebrex touches the thromboxane A2 pathway, naturally, but they both suppress the formation of prostaglandin I2 in humans. This was a weird result when it came out, because it was assumed that this molecule was made via COX-1 too, which these drugs don't inhibit. It later turned out that it's also made by COX-2 - which at least made sense from the drug standpoint - but that was still odd, because that enzyme wasn't supposed to be in the blood vessels at all.
But it seems that COX-2 can be induced there, especially when the vessel walls are subject to shear stress from blood flow. Problem is, when you inhibit that prostaglandin's formation, you've taken off a brake on platelet aggregation, and you've probably caused some vasoconstriction, too. The stressed vessels are trying to make more of the prostaglandin, but the drug keeps that from happening.
FitzGerald doesn't put any sugar on it:
"We now have clear evidence of an increase in cardiovascular risk that revealed itself in a manner consistent with a mechanistic explanation that extends to all the coxibs. (Emphasis mine - DL) It seems to be time for the FDA urgently to adjust its guidance to patients and doctors to reflect this new reality. . .The burden of proof now rests with those who claim that this is a problem for rofecoxib alone and does not extend to the other coxibs."
Over to you, Pfizer! Uh, Pfizer? Oh. . .I see. . .
Category: Business and Markets | Cardiovascular Disease | Toxicology
October 3, 2004
I see that some of the Merck/Vioxx coverage has been along the lines of "Company Finally Heeds Warnings of Unsafe Drug." Boy, the tort attorneys have to love that sort of thing. It's true that Merck had some signs that Vioxx could have cardiovascular problems, but there are a lot of drugs, unfortunately, that show rumblings of this sort. Some of them turn out to be false alarms, and some of them turn out to be real. This one turned out to be real with fangs.
If we immediately pulled every drug that showed any indication of trouble, it's for sure that no patients would come to harm. But we wouldn't have very many drugs, either. It's possible that Merck could have moved more aggressively to see if Vioxx had these problems or not - but if companies immediately ran fully-powered studies to address every red light that comes on, we'd have even more enormous costs to make up than we do already. Nothing's free.
Our job, on the discovery and development side, is naturally to try to find things with the largest positive footprint and the smallest negative one. The size of the latter one never goes to zero; it can't. We try to figure out how big it is, but you can never be really sure until after the drug goes onto the market. It's sad, it's unnerving, but it's absolutely true. The mission of the FDA, in an ideal world, would be to ensure that only drugs that can cause no harm make it to market. In the world we find ourselves in, though, the mission is to balance the potential harm a new drug could cause with the good it could do. That's an awfully tough assignment.
And the job of the injury lawyers is to swoop down after the worst happens, cawing about "defective products" and "willful negligence", and bearing away the biggest chunks their beaks can carry. The sky over Merck is getting dark with them right now.
Speaking of carrying away things in beaks, remember the University of Rochester? A group there made some of the early COX-2 discoveries, and on the strength of a patent, wanted a piece of all the earnings of COX-2 inhibitor drugs. The suit failed, after years of wrangling, on the grounds that the patent did not disclose any such compounds, nor did it (or could it) describe what such a drug would look like or be composed of. But if they'd won, do you think they'd be willing to pick up some of the liability? Soak up a little of the lawsuit pain? Or were they only in it for the sunny days while the money was flowing? What do you think?
Category: Cardiovascular Disease | Drug Development | Toxicology
September 30, 2004
The talk at every pharmaceutical company today was Merck's sudden withdrawal of their COX-2 inhibitor Vioxx. Merck has been having an awful time for the last year or two, and this really throws a burning tire on top of the whole heap.
They were running a study to see if Vioxx would help prevent the formation of colon polyps - evidence has been accumulating that COX-2 inhibition would be helpful in colon cancer, and Merck was going to put the idea to a rigorous test. Halfway through the three-year trial, though, things have come to an ugly halt. Not only was there no colorectal effect (at least, none so far), but the treatment group showed roughly twice the rate of serious cardiovascular side effects such as heart attacks and stroke. Such doubts had followed Vioxx for several years, after a JAMA-published analysis which seemed to suggest cardiovascular complications. But Merck contended that this earlier study was controlled against a group taking a cardioprotective drug, and therefore not sufficient evidence. That's not the case any more.
So what's this mean? Well, in the near term, Pfizer is going to rake it in with Celebrex and its successor Bextra. And Novartis will have a more open field to introduce their coming COX-2 drug Prexige. But are these problems confined to Vioxx, or is it a COX-2 mechanism effect that's going to keep showing up? As far as I know, these problems haven't been noted with Celebrex, but it may be incumbent on Pfizer to generate new data to make sure. If the drug is clean, then Pfizer gets my vote as the luckiest drug company I have ever seen, between the unexpected benefits of Lipitor and the unintentional safety of Celebrex.
Meanwhile, Merck is going to face a horrible tsunami of lawsuits. It's 10:20 EST as I write this, and when I search Google for the word "Vioxx", the first two sponsored links on the right side of the page are from tort lawyers already trolling for clients. Lawsuit-centered web domains are already active, and I'm sure that the radio ads will be on the air tomorrow. I hate to say it, but I don't see how Merck makes it through this without firing people at some point. It's a damn shame - even Merck's fiercest competitors respect their research prowess, and I hate to see the company damaged.
And in the long term? Matthew Herper has it right over at Forbes:
"In some sense, every medicine is a ticking time bomb, and existing studies may not be enough to know what is safe and what isn't. The drug development business was already risky and expensive. But it just got even worse."
Just what we needed. Man, sometimes I think I should have answered that ad back in the 1980s and learned to drive the big rigs for fun and profit.
Category: Business and Markets | Cardiovascular Disease | Toxicology
July 20, 2004
Continuing on the theme of unexpected toxicity landmines, I wanted to take a look at a highly anticipated obesity drug from Sanofi. Rimonabant is a small molecule antagonist of the CB-1 receptor, and it's been getting a lot of press - both for its impressive efficacy and for its mechanism of action. The "CB" in the receptor name stands for "cannabinoid", and the drug blocks the same receptor whose stimulation causes the well-known food cravings brought on by marijuana.
Interestingly, blockade of this receptor not only seems to affect appetite, but also seems to help with cravings for nicotine. As you can imagine, the market potential for the drug could be immense (and as you can imagine, other drug companies are chasing the same biological target, too.)
But what else does an antagonist do? The receptor has, no doubt, several functions in the brain (all the CNS receptors do multiple duty), and it's scattered around in the nerves and other tissues as well. There have been a couple of reports that bear watching. A team of researchers (German/Italian/US) reported earlier this year that the CB-1 receptor seems to be involved in inflammation of the colon. Mice with the receptor knocked out show greatly increased susceptibility to chemical irritants in the gut, and (more disturbingly) the same effect was seen in normal mice treated with a CB-1 antagonist. The authors suggest that CB-1 may be involved in diseases like Crohn's and irritable bowel syndrome, but antagonists would, if anything, make the problem worse.
That's bad enough, but there's a potential disaster that just showed up last month: a case report whose authors describe a patient who suddenly came down with multiple sclerosis after having been a subject in a rimonabant trial. Now, there's no way to prove causation, as they freely admit, but there's some evidence that CB-1 has a neuroprotective effect under normal conditions. So blocking its actions might conceivably expose neurons to damage, and when you combine that with the potential role in inflammation above, you have something that you should keep an eye on.
No one can say how this will play out. The most likely outcome is the best one - that the drug isn't associated with MS or Crohn's. After all, it's been through some extensive trials, and Sanofi still seems confident - which, believe me, they wouldn't be if a good fraction of the participants had come down with irritable bowel syndrome, much less multiple sclerosis. But there's another possibility, that the trouble will only show up in some patients under some conditions, and it might be rare enough that you won't see it until it gets out into the general population. There's just no way to run a clinical trial to nail down the statistics on, say, a one in 50,000 side effect. You'll never see it coming.
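The arithmetic behind that "you'll never see it coming" point is straightforward. Assuming a purely hypothetical side effect striking one patient in 50,000:

```python
import math

def p_at_least_one(rate, n):
    """Chance that a trial of n patients turns up the adverse event at least once."""
    return 1 - (1 - rate) ** n

rate = 1 / 50_000  # hypothetical rare side-effect rate, not from any real drug

# Even a very large Phase III program will probably miss it entirely:
print(round(p_at_least_one(rate, 5_000), 3))  # 0.095 - a better-than-90% chance of zero cases

# Patients needed for a 95% chance of seeing even ONE case:
n_95 = math.log(0.05) / math.log(1 - rate)
print(round(n_95))  # roughly 150,000 patients
```

And one case, of course, proves nothing about causation - nailing down the statistics would take far more patients still.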
That MS report in particular must have the Sanofi people a bit worried, and I'm sure it has the attention of the other players in the area, who will be glad to let Sanofi go out and be the lightning rod in case anything bad happens. Odds are that it won't, but there are no sure things, not with this drug or any other. Honestly, it's years before you can relax in this business, if you ever do. Good luck, guys.
Category: Diabetes and Obesity | Drug Development | Toxicology
July 19, 2004
The PPAR family (known in the US as alpha, gamma, and delta, for obscure historical reasons) is one of those biological jungles that keep us all employed. They're nuclear receptors, and thus they're involved in up- and down-regulation of hundreds of genes. Like most of the other nuclear receptors, they do that by responding to small molecules, which makes the whole class a unique opportunity for medicinal chemists.
Normally, we can't do much about gene regulation, because it's all handled by huge multicomponent protein complexes, terrible and unlikely candidates for intervention with our drug molecules. But when the whole thing is set off by binding of a small ligand, well, that's all the invitation we need. To pick a well-known class of small ligands, the best-known members of the NR superfamily are the steroid receptors, which should give you some idea of how powerful these things can be.
For their part, the PPARs are all major players in cellular energy balance and fuel use, the handling of fatty acids and other lipids, the generation and remodeling of adipose tissue, and similar things. That lands them squarely in some very important therapeutic areas such as diabetes, obesity, and cardiovascular disease. But more recently, it's become clear that they're also involved in things like inflammation and carcinogenesis, which brings in another huge swath of the drug industry. Every large drug company is working on them, for one indication or another. Heck, you could run an entire drug company on nothing but PPAR-related targets, that is, if you weren't terrified by the insane risk that you were taking.
Problem is, the biology of nuclear receptors is powerfully complex and murky. We know a lot more about them than we did five or ten years ago, but it's obvious to everyone in the field that we still have very little idea of what we're doing. Take a look at the three PPARs: there are two diabetes drugs on the market that target PPAR gamma (Avandia and Actos, aka rosiglitazone and pioglitazone), but no one has been able to get anything significantly better or safer than either of those. PPAR alpha is supposed to be the way an old class of lipid lowering drugs (the fibrates) work, but no one's really sure that they believe that. Several companies have been working on PPAR alpha drugs for a long time now, and nothing's made it deep into the clinic yet, which isn't a good sign. And no one really knows what PPAR delta does - it seems to have something to do with lipid levels, and something to do with wound healing, and something to do with colon cancer. The clues are rather widely scattered.
I've mentioned that several companies have been working on combination diabetes drugs that would hit both PPAR gamma and alpha. The idea is that they'd do all the glucose lowering of a gamma-targeted drug, and lower lipid levels at the same time - a worthy goal for the typical overweight Type II diabetic patient. But Novo Nordisk, racing along with a compound they licensed from India's Dr. Reddy's (the evocatively named ragaglitazar) hit the banana peel when long-term rodent testing showed that the compound was associated with bladder cancer. Then Merck, which had a compound from Japan's Kyorin in advanced trials, pulled it when another rare cancer showed up in long-term rodent studies. Screeching halt, all over the industry.
Now the FDA has jumped in, with a requirement that any new PPAR drugs go through two-year rodent toxicity testing. That's an unusual requirement, but (as the two examples above show) it's something that companies were already doing on their own initiative. Bristol-Myers Squibb and AstraZeneca have already done theirs, for example, and are plowing on.
The feeling has been: no one really knows what to expect from new PPAR compounds, so you'd better test the waters extensively. The thought of putting a compound on the market that turns out - years later - to be linked to increased risk of something like bladder cancer is enough to give everyone nightmares. I should mention that nothing bad has been seen from the two marketed PPAR gamma compounds I mentioned. But everyone remembers that there was another one, troglitazone, the first to market and the first to be pulled. It showed liver toxicity, but that seems to have been compound-related rather than mechanism-related.
Here's an article from Forbes on the subject, one of the few outlets that covered this story in any detail. It's pretty good, although it glosses over a lot of things. For example, the article quotes Ralph DeFronzo of UT-San Antonio saying that the fibrate drugs have been targeting PPAR-alpha for years, so why is the FDA worried about that subtype? What that ignores is that the fibrates are actually very weak drugs at alpha, which is why I mentioned the doubts people have about the whole mechanism. The drugs being developed now are thousands of times more potent. And look at the alpha-gamma combinations: why did all the trouble start only when alpha was added to the mix?
Well, we've got plenty of work to do. Unraveling the biological effects of the PPARs is going to take many, many years. And we're going to have to do it in rodents, in dogs, and in humans, at the very least - all the major species that are tested for toxicity. We already know about some significant differences between the species in the way that these nuclear receptors work. Will these cancer problems be another one? Are humans going to be just fine? Or will we react in even worse ways, given enough time? We just don't know. Everyone's holding their breath, waiting to see what comes next. . .
Category: Cancer | Diabetes and Obesity | Drug Development | Toxicology
May 19, 2004
If you want to fake it and pass yourself off as a drug discovery scientist - which will cause the velvet ropes to just disintegrate at all the exclusive clubs - then one phrase you can drop is "TI". As in "We need to get the TI up for that", or "What's their TI?" It stands for "Therapeutic Index".
That's just the ratio between the toxic dose of a substance and the medically effective dose. Of course, those can be rather contentious terms, and some arguing goes on during drug development about where to draw the lines. But usually both of them are determined by testing a broad range of doses: finding the dose that gets your desired response in 50% of the animals tested (the ED50) and, at the high end, the dose that produces toxic effects in 50% of them (the TD50). The ratio of the TD50 to the ED50 is the classic therapeutic index. More technical details (PDF) are here.
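As a toy illustration of the classic TD50/ED50 calculation - with entirely invented dose-response numbers, and simple linear interpolation standing in for the sigmoid curve-fitting a real pharmacology group would do:

```python
def dose50(doses, fracs):
    """Dose at which the response fraction crosses 50%, by linear interpolation.
    Both lists must be sorted in ascending dose order."""
    for i in range(len(doses) - 1):
        if fracs[i] <= 0.5 <= fracs[i + 1]:
            frac_span = fracs[i + 1] - fracs[i]
            return doses[i] + (0.5 - fracs[i]) / frac_span * (doses[i + 1] - doses[i])
    raise ValueError("50% point not bracketed by the data")

# Made-up dose-ranging data (mg/kg), for illustration only
doses    = [1, 3, 10, 30, 100, 300]
efficacy = [0.05, 0.20, 0.60, 0.90, 1.00, 1.00]  # fraction responding
toxicity = [0.00, 0.00, 0.00, 0.05, 0.40, 0.80]  # fraction showing tox signs

ed50 = dose50(doses, efficacy)
td50 = dose50(doses, toxicity)
print("ED50:", round(ed50, 2), "TD50:", round(td50, 2), "TI:", round(td50 / ed50, 1))
```

With these invented numbers the TI comes out around 18 - the kind of comfortable double-digit window nobody complains about.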
Things get hairy when the efficacy and toxicity start varying between animal models. Sometimes there's more than one kind of toxicity, with different effects that show up in different species, or sometimes it's just because one type of animal is just more sensitive. (Dogs, for example, are famously sensitive to cardiovascular side effects.) If one species shows a nasty TI, you'd better be able to explain it, and explain why you think it isn't relevant to human trials, or your development group isn't going to pick up the phone.
So what's a good TI? Depends on the disease. A value of 2 is cutting things very close, and you're probably only going to get a compound like that through when you're treating something bad. Better to have a minimum of 5 or 10 if you can get it, and the higher the better. No one will give you much trouble over a TI up in the double digits, unless your toxic effect, when it finally shows up, is something especially heinous.
As you'd guess, the oncology field is famous for narrow TIs, thus all the careful clinical titrating of chemotherapies. Some other classic drugs with rather narrow windows are lithium carbonate for bipolar disorder and coumadin (warfarin) for anticoagulation. Interestingly, aspirin has a narrow TI for an over-the-counter medicine, what with all the gastric and platelet-inhibition effects. While it's a great drug, I really doubt that it would have been developed under current conditions, at least not as we use it now. Is that good (we're safer now!) or bad (how many good drugs are we missing?)
Category: Toxicology
May 17, 2004
I don't know if everyone has been following the comments that are starting to accumulate around here after my posts, but there are some interesting ones. In response to "By Any Other Name", below, I had a report that "a well-known organic chemistry professor" continues to taste various small molecules from his lab.
The person leaving the comment was clearly referring to Nobel-winning professor Barry Sharpless, a famous and very imaginative chemist indeed. I have some points of contact with people from his lab, so I investigated. And by gosh, it's true: Sharpless apparently does taste many newly synthesized compounds, a habit that's been remarked on before: "I taste many chemicals that I make today still. That's not normal. But I'll smell almost everything, even if it's dangerous."
In sampling compounds, Sharpless seems to observe a rule that (as one witness told me) "anything without a nitrogen in it is probably safe." Before anyone writes to point it out, it's true that mustard gas doesn't have a nitrogen in it, but we can assume that Sharpless is sharp enough to avoid such reactive compounds!
It's not a bad distinction, overall. It's not good enough to make me extend my own tongue, but if you were forced to taste compounds based on one simple rule, you could do a lot worse. You avoid sampling all the alkaloids that way, which is sound advice. Almost all compounds active in the central nervous system have a nitrogen in them somewhere, too, and that's another class of unknowns I'd step aside for.
But the no-nitrogen rule isn't foolproof. There are some mighty foul terpenoids out there, put together with nothing but good old carbon, hydrogen, and oxygen. I'd offer up the carcinogenic phorbol esters as an example. Just looking at the structures, you wouldn't guess that they're as bad as they are. There are plenty of marine natural products that will degrade you, too: the brevetoxins and aplysiatoxins are a real sensation on the tongue, no doubt. And fungi can take you out without nitrogens, no problem at all, as witness the aflatoxins and the hideous trichothecenes.
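For what it's worth, the no-nitrogen rule of thumb is simple enough to write down as a (very crude) filter over SMILES strings. This is just a sketch of the heuristic, not a safety check - as the counterexamples above show, plenty of nitrogen-free compounds will take you out:

```python
import re

def contains_nitrogen(smiles):
    """Crude nitrogen check for a SMILES string: 'N' is nitrogen and 'n' is
    aromatic nitrogen, but skip 'N' starting a two-letter element symbol
    (Na, Nb, Ne, No), which only turns up inside bracket atoms."""
    return re.search(r"N(?![abeo])|n", smiles) is not None

print(contains_nitrogen("CC(=O)Oc1ccccc1C(=O)O"))         # aspirin: False
print(contains_nitrogen("CN1C=NC2=C1C(=O)N(C)C(=O)N2C"))  # caffeine: True
print(contains_nitrogen("[Na+].[Cl-]"))                   # table salt: False
```

By this filter the alkaloids and most CNS-active compounds get flagged, while the phorbol esters, brevetoxins, and aflatoxins sail right through - which is exactly why the rule isn't foolproof.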
Granted, Prof. Sharpless probably doesn't take in much of a compound when he gives it his taste test. But I still think I'll leave him to it; he can tell us if he comes across anything interesting. De gustibus non disputandum est. I certainly hope he doesn't poison himself, not least because I find his current work quite interesting. . .
Category: Life in the Drug Labs | Toxicology
April 19, 2004
I had a question recently about why some chemical elements don't appear much in pharmaceuticals. Boron was one example - the first boron-containing drug (Velcade, from Millennium) was approved just recently.
But it hasn't been for lack of trying. Starting in the 1980s, several drug companies took a crack at boronic acids as head groups for protease inhibitors. Big, long, expensive programs against enzymes like elastase and thrombin went on year after year, but no one could get the things to quite work well enough. In vitro they ruled - a good boronic acid is about as good as an enzyme inhibitor can be. But in vivo they had their problems, with oral absorption and cell penetration leading the way.
As far as I'm aware, there's no particular tox liability for boron. Things like boric acid certainly don't have a reputation for trouble, and we don't take any special precautions with the air-stable boron compounds in the lab. It'll be hard to make any case, one way or another, based on the Velcade data, since the drug's mechanism of action (proteasome inhibition) has a lot of intrinsic toxicity anyway. (There's the anticancer field for you - there aren't many other areas where a target like that would even be considered.)
I think self-censorship is why there aren't more boron-containing structures out there. We don't spend much time looking at the compounds seriously, because everyone knows the problems with boronic acids, and no one wants to be the first to develop a different boron-containing functional group, either. "Why be the first to find a new kind of trouble?", goes the thinking. "Don't we have enough to worry about already?"
Category: Odd Elements in Drugs | Toxicology
April 6, 2004
This morning brings the news, via ABC, that the recently discovered bomb plot in London involved a quantity of osmium tetroxide. That's a surprise.
I know the reagent well, but it's not what anyone would call a common chemical, despite the news story above that calls it "easily obtained." It's quite odd that someone could accumulate a significant amount of it, and it's notable that anyone would have thought of it in the first place. It's found in small amounts in histology labs, particularly for staining in electron microscopy, but that's generally in very dilute solution. If these people had the pure stuff, well, someone's had some chemical education, and probably in my specialty, damn it all.
The reagent is used in organic synthesis for a specific (and not particularly common) reaction, the oxidation of carbon-carbon double bonds to diols. I've done that one myself once or twice. OsO4 comes in and turns the alkene into a matched pair of alcohols, one on each carbon, and it stops there. Other strong oxidizing reagents can't help themselves - they find the diol easier to attack than the double bond was, and go on to tear it up further. There was a recent paper in the literature on the mechanistic details, actually, going into just why the osmium reagent stops where it does.
Unfortunately, the alkenes it could attack are unsaturated fatty acids and such, as found in lipoproteins and cell membranes. Exposed tissue is vulnerable. Breathing a large amount of the vapor can kill a person through irritation of the lungs, but it's not as bad that way as the better-known agents like phosgene. A bigger problem is the cornea of the eyes, and the reagent is mostly feared for its ability to bring on temporary (and in some cases, permanent) blindness.
There's no doubt in my mind that any terrorist with the stuff was going for that effect. Could it have worked? Well, it's a solid at room temperature, but a hot day will melt it. The stuff sublimes easily; it has a high vapor pressure. Just being around the solid crystals is enough to get you overexposed to the vapors. I don't know how much of the reagent these people had, but I tend to think (again, contrary to the ABC story) that an explosion would have dispersed it to the point that it was just down to irritant levels. I wouldn't want to find out, though.
If they were planning to use it in a non-explosive gas attack, that's another matter. But the vapors are said to be very irritating, with a distinctive chlorine-like smell - which I cannot verify, thank God. It's not like no one would have noticed that there was some nasty chemical in the air. I think that they could have done some damage, certainly. But what disturbs me more than the reagent itself is the thinking behind it. . .
Category: Chem/Bio Warfare | Current Events | Toxicology
March 10, 2004
There's some fresh news in the (quite possibly endless) debate about the vaccine preservative thimerosal. The Institute of Medicine is working on another report, due in several months. Their last report, in 2001, found no evidence to support a link, but didn't dismiss the possibility, either.
I've written about this topic before. My belief, then and now, is that autism and thimerosal are very unlikely to be related. I haven't seen any data that make me lean the other way, and the evidence against a link has continued to pile up. One of my objections to the hypothesis has been that it's hard to rationalize, mechanistically. Mercury compounds are certainly neurological bad news, but autism hasn't generally been noted as a symptom of developmental mercury exposure. (There's a different set of effects, instead.) It's hard to come up with an explanation for why thimerosal's effects would be different and specific, while still partaking of the general toxicity of mercury compounds. (Why no rising epidemic of, say, cerebral palsy?) And there's the matter of the low dose, too.
But now there's a paper in Molecular Psychiatry by a team of researchers (from Northeastern and several other schools) which suggests a mechanism. They're looking at the synthase enzyme that produces the amino acid methionine, which is an important source of methyl groups for other enzymatic systems. DNA methylation is particularly important in gene expression, and many cellular growth factor pathways seem to have a methylation requirement in them as well. They've found that thimerosal is a powerful inhibitor of two particular growth-factor driven methylation reactions, with an IC50 of about 1 nanomolar.
Single-digit nanomolar is the kind of inhibitory constant that we look for in a new drug, too, so it's certainly plausible that that could have effects in vivo (if the pharmacokinetic behavior - blood and tissue levels of the compound - go along.) The paper points out that the ethylmercury blood levels produced by thimerosal-containing vaccines are in the 4 to 30 nM range, which is enough of a multiple of the IC50 to keep the hypothesis going (but see below.) So is this the proof against thimerosal, or not?
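That multiple is easy enough to check. Here's a back-of-the-envelope sketch in Python, using only the figures quoted above (the IC50 and blood-level range from the paper) - this is pure arithmetic, not any kind of pharmacokinetic model:

```python
# Margin between reported ethylmercury blood levels and the paper's
# methylation-inhibition IC50. Values come from the text above.

ic50_nM = 1.0                  # reported IC50 for the growth-factor methylation reactions
blood_levels_nM = (4.0, 30.0)  # reported post-vaccination ethylmercury range

low, high = (level / ic50_nM for level in blood_levels_nM)
print(f"Blood levels run {low:.0f}x to {high:.0f}x the IC50")
# A compound sitting at several times its IC50 would be expected to inhibit
# the target substantially -- if, and only if, those blood levels reflect
# the free concentration the enzyme actually sees.
```

That last caveat is the whole game, as the brain-levels discussion below makes clear.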
Well, on one level, this could answer some of my mechanistic objections. But I still have the same questions as before. This could be a refinement, but not enough by itself to establish a link. We're still in the same place: It's not that I can't imagine that thimerosal could be toxic, it's that I have trouble with it being toxic in just such a way as to produce only autism. Methylation pathways are ubiquitous - how does such a specific phenotype show up from this? (More such skepticism from an immunologist at McGill is here.)
The authors do suggest some possible answers. Perhaps some individuals have a less robust methylation system than others, especially in specific brain development pathways, and are thus predisposed to thimerosal-induced damage. That's definitely a hypothesis worth investigating. If that looks like it's the case, I'll have to upgrade my take on the whole idea, subject to the 800-pound gorilla in the last paragraph below.
And they suggest some limitations to their work. For one thing, they're working on cultured cell lines, which are tumor-derived and may well respond quite differently than primary cells in vivo. They also point out the potential complication that their cells have not fully differentiated into neurons. It's responsible of them to mention these factors.
I also wonder what the hit rate is in this assay - if we run a few thousand natural products or other environmental-exposure compounds through, how many are positive at nanomolar levels? I could also add that the ethylmercury blood levels they quote might not mirror the levels in the brain. That's been a battleground in the whole thimerosal debate, because we don't have definitive mercury pharmacokinetics in the brains of human children (and I sure can't think of an acceptable way to get 'em, either. The methods we use for that kind of data in rodent studies are clearly not going to apply!)
But my biggest objection to a thimerosal/autism link is epidemiological. So far, there seems to have been no change in the autism rate in response to the discontinuation of thimerosal. Data continue to be collected, of course, but there's no apparent connection, even from countries that eliminated the compound before the US did (and thus have a longer baseline.) The real-world numbers trump any amount of biochemical speculation. That goes for my own ideas as well, as my research projects demonstrate to me regularly.
Category: Autism | Toxicology
February 18, 2004
Everyone in the industry would like to do something about the failure rate of drugs in clinical trials. It would be far better to have not spent the time and money on these candidates, and the regret just increases as you move further down the process. A Phase I failure is painful; a Phase III failure can affect the future of the whole company.
So why do these drugs fall out? Hugo Kubinyi, in last August's Nature Reviews Drug Discovery, suggests that it's not for the reasons that we think. As he notes, there are two widely cited studies that have suggested that a good 40% of clinical failures are due to poor pharmacokinetics. That area is also known in the trade as ADME - Absorption, Distribution, Metabolism, and Excretion - the four things that happen to a drug once it's dosed. And we have an awful time predicting all four of them.
Of the four, we have the best handle on metabolism. In the preclinical phase, we expose compounds to preparations from human liver cells, and that gives a useful guide to what's going to happen to them in man. We also expose advanced compounds to human liver tissue itself, which isn't exactly a standard item of commerce, but serves as a more exacting test. Most of the time, these (along with animal studies) keep us from too many surprises about how a compound is going to be broken down. But the other three categories are very close to being black boxes. Dosing in dogs is considered the best model for oral dosing in humans for these, but there are still surprises all the time.
That 40% figure has inspired a lot of hand-wringing, and a lot of expenditure. But Kubinyi says that it's probably wrong. Going back over the data sets, he says that the sample set is skewed by the inclusion of an inappropriately large group of anti-infective compounds with poor properties. If you adjust to a real-world proportion, you get an ADME failure rate of only 7%.
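The mechanics of that adjustment are just weighted averages. Here's an illustrative sketch - the per-class failure rates and proportions below are invented for the example, not Kubinyi's actual numbers - showing how an over-sampled class with bad properties can inflate the aggregate figure:

```python
# Hypothetical ADME failure rates per therapeutic class (invented numbers,
# chosen only to illustrate the sample-skew argument):
adme_failure = {"anti-infectives": 0.60, "everything else": 0.04}

def overall_rate(weights):
    """Weighted-average ADME failure rate across the classes."""
    return sum(weights[k] * adme_failure[k] for k in adme_failure)

skewed = {"anti-infectives": 0.60, "everything else": 0.40}  # over-sampled class
real   = {"anti-infectives": 0.05, "everything else": 0.95}  # real-world mix

print(f"Skewed sample:   {overall_rate(skewed):.0%}")  # 38%
print(f"Adjusted sample: {overall_rate(real):.0%}")    # 7%
```

Same underlying compounds, very different headline number - which is exactly the sort of correction Kubinyi is arguing for.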
Now, when this paper came out, I think that there was consternation all over the drug industry. (There sure was among some of my co-workers.) The ADME problem has been common knowledge for years now, and it was disturbing to think that it wasn't even there. So disturbing, it seems, that many people have just decided to ignore Kubinyi's contention and carry on as if nothing had happened. There have been big investments in ways to model and predict these properties, and I think that many of these programs have a momentum of their own, which might not be slowed down by mere facts.
The natural question is what Kubinyi thinks might be our real problem. In his adjusted data set, 46% of all failures result from lack of efficacy in Phase II. He admits that some of these (in either approach to the data) might still reflect bad pharmacokinetics, but still maintains that poor PK has made a much smaller contribution than everyone believes. Here's his drug development failure breakdown, which makes his point:
46% drop out from lack of efficacy
17% from animal toxicity (beyond the usual preclinical tox)
16% from adverse events in humans
7% from bad ADME properties
7% from commercial decisions
7% from other miscellaneous reasons
Category: Drug Assays | Drug Development | Pharmacokinetics | Toxicology
December 30, 2002
After a (reasonably) refreshing holiday break, Lagniappe is back. Thanks to everyone who kept doggedly hitting this site during the last few days - I admire your persistence.
I notice from my site's counter that I get a small but steady flow of Google hits for various miracle cures. I said some nasty things about the Budwig flaxseed-oil diet a while back, for example, and I still get Googled for that one. For those visitors, here's a post that (with any luck) will show up for a long time to come.
To put it in one sentence, distrust simple cures for complex diseases. Cancer is a complex disease; so are arthritis, MS, Alzheimer's, and diabetes. What's a simple disease? An infectious one: there's a proximate cause, and a path to cure it. Get rid of the bacteria, and your septicemia goes with them. Clear out the parasites, and no more malaria. (You'll note that we don't have a universal malaria cure yet, which should say something about how hard even the simpler diseases are.)
The really tough ones, though, are all things that originate from some misfiring of the body's own systems. It's true that there are single-gene diseases, which would be simple to treat if we only knew how to get gene therapy to work. Most of them are rarities, diagnostic zebras that many physicians will never see. The ones that every physician sees are multifactorial and very hard to deal with.
I've spent a lot of time on this site talking about autism recently, and there's a common factor. I believe that many diseases only look like single conditions, which turn into dozens of other diseases on closer inspection. There's no such disease as "cancer," for example. Cancer is the name we sloppily apply to the end result of dozens, hundreds of metabolic or genetic defects and breakdowns, all of which end up as vaguely similar cell-differentiation diseases. It wouldn't surprise me if Alzheimer's ends up as something that can be caused several different ways, all of which end up in the same alternate low-energy state for the brain's metabolic order. (I speculated on this back in the first month of this blog's existence.)
And autism, too, could well be the name we're giving to several different diseases, distinguished by their time course, onset, and severity, caused by all sorts of intricate interplay - the wrong chord played on the instrument at just the wrong time.
You can, at times, find single factors that lead into these diseases - a compound called benzidine leads to bladder cancer, for example, although not in every person exposed, and at unpredictable exposures over unpredictable times. But that doesn't mean that everyone who has bladder cancer has been exposed to benzidine - not many people ever are these days. And stomach cancer, for example, has nothing to do with benzidine at all. Even the simple cases aren't too simple.
Remember the power line scare? How those electromagnetic fields from high-tension lines were messing up everyone's lives? You could see stories about how power-line exposure had been linked to brain cancer, to kidney cancer, to skin cancer. The problem was, one study would show a barely-there tenuous link to brain cancer - but not to anything else. Another would show the same wispy possible connection to kidney cancer - but not to anything else. And so on - after looking over all the data, the best conclusion was that this was all statistical noise. Beware statistical noise - that's another long-running theme around here.
Epidemiology hasn't been a simple field since the days of yellow fever, if it even was then. And medicine hasn't been a simple one since the first days that ever counted. As time goes on, we're clearing out more and more of the easy stuff. The really hard stuff is what's left, and it's going to be resistant to simple fixes.
Category: Alzheimer's Disease | Cancer | Toxicology
December 22, 2002
While I'm on the subject, I'll mention some details that will be familiar to my fellow medicinal chemists. The body has a lot of mechanisms to deal with foreign substances. We assume that all our drugs are going to be handled by them, one way or another, and we just try to keep the stuff around long enough to work. (And that's usually the case - once in a while you'll come across a compound that binds to some protein so tightly that it doesn't disappear until that population of protein molecules is metabolically recycled, and that's bad news. Altering a protein that permanently can cause more effects than you're looking for, the worst of which is setting off an immune response.)
The two sorts of systems that clear out unrecognized molecules are divided into Phase I and Phase II enzymes. The Phase I crowd rips into anything that fits into their active sites, which are biased toward greasy structures, and tears the molecules up in ways to make them more water-soluble. The so-called P450 enzymes in the liver are the big players here, and they can oxidize just about anything that comes their way. If the newly torn-up substance is sufficiently water-soluble, out it goes into the urine next time it hits the kidneys.
If it isn't, it hangs around long enough for a Phase II enzyme to get ahold of it. These attach what are basically disposal tags to molecules, groups that are made to be pulled out of the blood and into the urine. A sugar called glucuronic acid is a common tag, and another molecule called glutathione does a lot of this, too. Sometimes Phase II pathways are the main way a drug is eliminated - depends on its structure. One way or another, the body finds a way to get rid of things.
And if it doesn't, that can mean trouble. If the body's exposed multiple times to something that isn't cleared well, the stuff can accumulate in one tissue or another. And while that's not always harmful, there's no way it can help, either. This is the problem with heavy metals like lead or mercury. They're not the sort of thing that can fit well into the P450 enzymes, and they're generally already oxidized as far up as they can be, anyway. Compared to drugs, they're handled poorly. Metals are often excreted bound to sulfur-containing proteins, or as the salt with cysteine itself (the amino acid unit that contains the reactive SH group.) There's a lot of oxidation-reduction chemistry involved - a dose of a metal salt may end up being excreted as the reduced element itself.
That takes us back to tonight's thimerosal theme. The recent Lancet study found that most of the mercury was eliminated through the GI tract, mostly as inorganic (elemental) mercury. The faster it gets converted to that, the better, I'd say, because doses of mercury metal itself are virtually harmless. It's just too insoluble to get into trouble. They also found much quicker excretion than they expected, which was good news, too (as I mentioned on December 4, below.) The follow-up study is going to look at the early pharmacokinetics of thimerosal, to see if there's some sort of maximum-concentration spike that's being missed.
As has been pointed out, if thimerosal is a cause of autism (I'll reiterate that I doubt it,) then it only happens in a few children. So the criticism can always be made that a small study would likely miss testing the sort of child that might be affected. This is true, but if the levels are low, with little variation in the individuals studied, then you have to assume that there's a subpopulation that is quite different. If you have to string together enough assumptions, then you start to rule out the hypothesis - that's as close to proving a negative as you can get in science. We'll see what the numbers have to say.
Category: Autism | Toxicology
December 4, 2002
One reason that I have doubts about thimerosal as a cause of autism goes back to mechanism of action. Are there any specific compounds that are known to cause specific neurological problems? (There are plenty that cause more diffuse symptoms, often motor-related, such as tardive dyskinesia.)
Well, there's one prominent example: MPTP, known to the trade as 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine. It's a reasonably simple organic molecule, and to a medicinal chemist it certainly looks like a central nervous system agent (if I had a dollar for every CNS-active piperidine or piperazine that's been reported in the patent literature, I could retire.) But no one could have predicted what it actually does.
The compound gets oxidized by monoamine oxidase B, which is a common fate for molecules of its type. That produces a pyridinium compound which is the real problem. As fate would have it, it's a fine substrate for the dopamine transporter protein, which imports dopamine into cells that require it. And in a further stroke of bad luck, the same compound is also an inhibitor of a key enzyme in mitochondria - and you don't want to do anything to your mitochondria. Cell death follows in short order if you shut them down too hard.
So everything's set up for a disastrous cascade: MPTP's turned into something dangerous, which is taken up selectively into cells that import a lot of dopamine, which process then kills them. Unfortunately, the cells that import the most dopamine are those in the substantia nigra, up in the midbrain. Which is why in the late 1970s and early 1980s, a number of young drug users started showing up in emergency rooms on the west coast with what appeared to be some sort of catatonia. They didn't move; they weren't responsive - everyone waited for whatever it was to wear off so they could start to recover.
It didn't, and they didn't. At first, no one recognized what was going on, mainly because no one had ever seen a twenty-year-old with advanced Parkinson's disease before. These patients had taken batches of some sort of home-brewed meperidine (better known as Demerol) or a derivative, and the synthetic route had produced some MPTP as a contaminant. Quality control isn't a big feature of the basement drug industry.
The affected users improved slightly when given L-Dopa, as you'd expect from a Parkinson's patient. But not much, and not for long. The damage is permanent - they skipped years of the normally slow progression of the disease and went straight to its worst phase in one night. Is this what's happening with thimerosal and autism?
I strongly doubt it. Here's why: Parkinson's is caused by a lesion in a specific area of the brain, in a specific (and unusual) cell type. MPTP is toxic to some specific and unusual cell types, and it's just a terrible stroke of misfortune that they happen to overlap. But despite a tremendous search, no one has been able to tie autism to primary lesions in a specific region of the brain, much less down to certain cells. I'm not saying that it's impossible - just that it's been looked for strenuously, and thus far in vain. Studies of brain activity in autistic patients show a variety of differences, but nothing that can be pinned down as a cause.
The other half of the story is the reactivity of thimerosal itself. There's nothing known about the compound that would suggest that it has a particular affinity (or particular toxicity) to any one type of cell over another. Organomercury compounds are (in high doses) pretty bad news in general, causing all sorts of neurological problems. They just don't seem to be specifically toxic.
So there's no evidence, mechanistically, on either side of the hypothesis. That doesn't disprove it, of course - it's not impossible that there would be some sort of subtle effect that we've missed so far. It's just that I believe that the odds are very much against it. We'd have to string together too many (big) assumptions in a row, and the evidence isn't nearly compelling enough to make us do that.
If thimerosal is cleared as a possible agent for autism, that'll be good news and bad news. The good news is, of course, that we haven't been damaging children without realizing what we're doing. The bad news will be that we still won't know why some children become autistic and others don't, a lack of knowledge that's hard to bear.
The only other good news I can think of - and a hard, sour piece of good news it is - would be that parents of autistic children who have feared that they were the cause of their children's condition - just by having them vaccinated - could at least put that part of their burden down. It's not enough, but it's something. Believe me, I have two small kids myself, and the thought of either of them showing signs of neurological trouble makes me start to double over. I can't even imagine what it must be like. But to those in that situation, all I can say is that I really don't think that some doctor did it to your child. Or that some drug company did it to your child. Or that you did, either. For what it's worth.
Category: Autism | The Central Nervous System | Toxicology
October 31, 2002
There's another report this morning of an arrest of a suspected Chechen terrorist, who was carrying what's described as 18 pounds of mercury in a champagne bottle. "Such an amount of mercury would poison a very large number of people," said a spokesman for the Moscow police.
Would it? The amount is right - 18 pounds of mercury works out to about 600 mL, which would fit just fine into a bottle. But what could you do with it? Mercury, in its elemental form, is a very, very slow toxin indeed. You can even drink a shot of the stuff and pass it out of your body without getting killed. It won't improve you, that's for sure, but it won't kill you.
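The volume figure above is easy to verify from mercury's density (about 13.53 g/mL at room temperature) - a quick check in Python:

```python
# Does 18 pounds of mercury really fit in a champagne bottle?
POUND_G = 453.59237   # grams per avoirdupois pound
HG_DENSITY = 13.53    # density of liquid mercury, g/mL, near room temperature

mass_g = 18 * POUND_G
volume_mL = mass_g / HG_DENSITY
print(f"{mass_g / 1000:.1f} kg of mercury occupies about {volume_mL:.0f} mL")
# ~8.2 kg, ~600 mL -- a standard 750 mL bottle holds it with room to spare.
```

So the police description is at least physically plausible, whatever its value as a weapon.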
If you try that stunt, your body (or your intestinal bacteria) will take a very small amount of the metal up and convert it to organomercurial compounds, which are the real problem. Those are much more easily absorbed than the pure metal, and can really do some damage. (These are the forms of mercury found in fish, for example - it's not the free metal.) Mercury reacts with sulfur-containing proteins, among other things, and there are plenty of proteins that depend on sulfur for their structure and activity. You can't afford to lose 'em. Long-term exposure to mercury vapor (which the liquid metal is always producing, very slowly,) gives you the best (ie, worst) chance to absorb the element. That's how mercury's toxicity was first noticed, but this can take months or years to develop as the protein damage piles up.
Now, if this guy had been carrying a few pounds of something like dimethyl mercury, then things would be different. That's one of the simplest organomercurials, and it is extremely bad news indeed. Just a few years ago, a research chemist at Dartmouth was poisoned when a few drops of this compound fell onto her latex-gloved hand. It penetrated the glove, then her skin. She didn't notice a thing; nothing seemed amiss for several months. Then neurological symptoms rapidly began to show up, and she died within weeks.
That's about as bad as mercury compounds get, and it still takes time for it to kill you. This Chechen probably thought he was carrying a serious poison, but he was mostly hauling around a rather expensive barbell. Here's hoping he paid a lot of money for it.
Category: Chem/Bio Warfare | Toxicology
July 30, 2002
Back to the question: what does the Ames test tell us? One thing it does is something that all toxicological tests do - that, as Paracelsus put it, "the dose makes the poison." There's hardly a more important tox principle than that. You can get a lot of things to show positive for mutagenicity if you're willing to load up on them.
Beyond that key point, Ames himself has made the argument that synthetic compounds and naturally occurring ones have the same hit rate in these assays. Plants have evolved a variety of pesticides and antifeedant compounds, many of which are reactive and toxic at some level - therefore, most (as in 99.99%, according to his estimate) of the pesticides in the human diet are those found in the plants themselves. The cruciferous vegetables (broccoli, cabbage, mustard and so on) are particularly rich in compounds that will light up an Ames test. A fine article of his from 1990 (Angew. Chem. Int. Ed., 29, 1197) states that ". . .it is probably true that almost every plant product in the supermarket contains natural carcinogens."
And that's before cooking. Many of these reactive compounds are destroyed by heating, but many others are formed, especially in browning or charring of proteinaceous foods. There are two ways to react to news like this: either you can panic at the thought that every meal you take is full of mutagens, or you can decide (since people aren't dropping all around you) that we've apparently got some method of dealing with them.
That we do: the digestive processes, gut and liver especially, the same things that are the bane of medicinal chemists for tearing up our carefully-designed wonder drugs. They give the same treatment to most everything you eat. In most cases, they're successful at detoxifying whatever compounds might be present, even if they were at harmful concentrations. But there's a limit - if you chow down on plants containing cyanogenic glycosides (raw cassava root, apricot pits, etc.,) nasty amino acids (some kinds of Lathyrus peas,) or fluoroacetic acid (some South African weeds,) then not much is going to help you.
Ames's point is that the mental division many people have between "artificial" or "synthetic" chemicals (bad) and "natural" ones (good) is nonsense. The same number of toxic compounds are found in each category, and we're exposed to far more of the latter. Instead of worrying about parts-per-billion of pesticide residues, we should worry about greater public health risks like smoking, alcohol, etc. Going crazy about the minute amounts of synthetic compounds that we can now detect not only diverts time and money from more useful concerns - it can lead to decisions that end up doing more harm than the compound residues ever could. Ames's article is a fierce broadside against this sort of thinking.
If you're going to sound the alarm about chemicals, he suggests, look at high-dose occupational exposures. Here we get into toxicity that has less to do with a compound's mutagenic potential. At very high doses, you're basically causing cell death, irritation, and tissue injury. That leads to increased rates of cell division, leading to an increased chance of carcinogenesis.
We're back to "the dose makes the poison." The principle applies not only to people who are exposed to huge doses of chemicals, but to unlucky lab rats as well. Ames has forcefully made the point that testing compounds in animals at or near their maximum tolerated dose (MTD) is a poor measure of their cancer-causing potential. About half the compounds so tested show up as carcinogens, but the dose-response curves aren't linear. It's a complete mistake to assume that half of all chemicals cause cancer, unless you're soaking your feet in solvent while doing ice-cold shots of fungicide.
The implications for the toxicity testing of pharmaceuticals? We don't usually test our drugs at such high levels in chronic studies. Instead of working down from the MTD, as an environmental toxicologist might, we work up from the MED, the minimum efficacious dose. If a compound makes it through OK at some multiple (10x, 50x, 100x) of the MED, then we feel safe enough to go on.
As for Ames testing of pharmaceuticals, since we also don't go to such high levels, we don't see that many true positives. There are some known pitfalls: antibiotics can be tricky to assay, as you'd guess, since the procedure uses bacteria. Some of the drugs that target DNA-manipulating enzymes (like the fluoroquinolones) will give you a false positive because of the way the bacteria have been crippled for the test.
Since a real Ames-positive is uncommon in the drug industry, we pay attention when we get one. Would these compounds really be mutagens in humans? And if they were, would they be carcinogenic? Maybe not! But for the most part, no one knows, and no one's going to find out, either. It takes a lot of nerve to continue developing such a compound, and there really aren't enough data points to draw a conclusion.
It's the same way with animal tests. If something serious happens with your whole-animal tox (especially if it happens across species,) you usually pack it in and cut your losses. No doubt some of these compounds could have gone on safely, but we'll never know. At least not until we're a lot better at this than we are now. . .
Category: Toxicology
July 29, 2002
One hears a lot about the Ames test (as a measure of carcinogenicity and other Bad Things.) It's sometimes held up by animal-rights types as a model of the sort of testing that could be done if, presumably, we weren't all so much into torturing the lesser species. I thought a look at the test would be worthwhile. This is another post where many of my readers in the industry will know the material, but since almost no one outside of it does. . .
The original Ames test dates to the mid-1960s, and the idea behind it is even older. First, you need some mutated bacteria, in this case mutant Salmonella bacteria that have lost the ability to produce their own histidine (an essential amino acid.) These guys can be grown on a medium that contains histidine, but if you try to grow them on something else, they'll die. Unless, of course, they mutate back to being able to make it on their own, and that's the basis of the test in a nutshell.
The original protocol was worked out after a long search for Salmonella mutants - some naturally occurring (although not for long!) and others induced by radiation or exposure to toxins. The modern form uses some engineered bacteria as well. There's a panel of standard strains, with different types of defects in the gene that's responsible for histidine production. Several also have hand-introduced glitches in their best DNA-repair enzymes. Losing those forces the bacteria to use more low-tech cellular machinery to do the job, increasing the chance of random mutations. Finally, all the Ames test strains have defective polysaccharide outer coats, to make them more permeable to the test chemicals. These are not exactly the most robust Salmonella you'll ever see (but robust Salmonella would laugh at most of our test chemicals, which wouldn't be too informative.)
There are a number of different ways to run the test, but they all come down to this: you expose your bacteria to the test substance, and try to grow them without their required histidine. That really puts the selection pressure on - stumble back to making your own histidine, or die! So anything that survives is a mutation, and the more of those you get, the worse your test substance was. That's because the mutation occurred while the DNA was being repaired or replicated. Your test compound either caused the damage that had to be repaired, or it caused the replication process to fumble.
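The scoring logic above can be sketched in a few lines. The colony counts are invented, and the "at least doubled over background, and rising with dose" rule is a common rule of thumb rather than the full protocol:

```python
# Sketch of scoring an Ames plate series. The counts and the two-fold
# rule are illustrative assumptions, not an actual testing protocol.

# Some revertant colonies always appear spontaneously, even with no
# test compound on the plate - that's the background to beat.
background = 25  # spontaneous revertant colonies per plate

# (dose of hypothetical test compound, revertant colonies observed)
counts_by_dose = [(0.1, 27), (1.0, 55), (10.0, 140)]

folds = [colonies / background for _, colonies in counts_by_dose]

doubled = folds[-1] >= 2.0  # top dose at least 2x over background?
dose_responsive = all(a < b for a, b in zip(folds, folds[1:]))

positive = doubled and dose_responsive
print(positive)  # prints: True for these made-up counts
```

It's the dose-response part that matters: a flat elevation across all doses looks more like an artifact, while counts that climb with dose are what send a compound to the project team with a red flag on it.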
An added wrinkle involves taking your compound and exposing it to liver enzymes first, then taking that mix and running the Ames on it. That can pick up toxic metabolites that might form in vivo (not all of them, but it's a start.) Drug companies routinely do it both ways, just to be sure.
So that's an Ames test. What does it mean? Here's where the arguing starts, but there are some agreed-on facts: A dose-responsive positive means that your compound damages DNA. There are a lot of ways for that to happen, and the behavior of the different Salmonella strains can point toward what kind it probably was. As you pile up the DNA damage, odds increase that you'll finally bollix some gene that shouldn't have been messed with, one that can start a cell on the road to cancer.
And that's what the test has traditionally been used for, as a proxy for carcinogenic potential. It's not a good surrogate for general toxicity, because there are plenty of toxic compounds that don't work by damaging DNA. Nerve gas, for example, would probably pass an Ames test, although I'm pretty sure that no one has been insane enough to actually take a look. Long-term carcinogenic potential is not a big issue for the typical nerve gas user.
Here come the harder questions, though: are all carcinogens positive in the Ames test? Do all the things that light up an Ames test cause cancer? And at what level of positive activity should you start to get worried? Bruce Ames himself started to wonder about these questions as the years went on. The answers were a bit surprising. . .(to be continued.)
Category: Toxicology
July 28, 2002
Another question I've had posed to me is whether the FDA standards for drug approval are too tight (no one who writes to me seems to worry that they might be too loose, although you can find groups who'd argue just that.)
Overall, I don't think so. There are really two sets of standards, for safety and for efficacy, and neither is really set in stone. From drug to drug and disease to disease, things can slide around - which is how it should be. Safety is an open-ended problem that has to be addressed by closed-ended regulatory solutions, and as time goes on, the bar is raised. We know a lot more than we used to about, say, QT-interval prolongation as a cardiac side effect, and that means that we have to test for it instead of crossing our fingers. If this weren't getting harder, neither the drug companies nor the FDA would be doing their jobs.
As for efficacy, I see the suggestion every so often that this requirement be done away with. I'm firmly opposed to that idea. It would open the door to even more Miraculous Herbal Tonics than we have already - all you'd need to do to get past the safety requirement is make sure your snake-oil doesn't have anything active in it. Honestly, you'd have people springing up selling powdered drink mix at $5 the glass to cleanse your liver, grow new hair, and make your genitalia go out in the morning and fetch the newspaper. What? You say we have that already? Well, now they'd be "FDA-approved" on top of it.
No, I'm all for making companies show that their drugs actually do something. In fact, I'm all for making sure that any medicine entering a served market is tested head-to-head with the competition. Of course, this is often mandated already. And companies often do it themselves so they'll have an edge in marketing - except when it's a patent extension of one of their own drugs. As I mentioned a while back, I'd make Astra-Zeneca test Nexium against Prilosec, for example, and we'd see who's fooling whom.
There's no doubt that the FDA's gotten pickier in the last couple of years, and all of us in the industry are feeling it. But it hasn't gotten to the point where I think they're stepping over the line.
Category: "Me Too" Drugs | Drug Development | Toxicology
July 25, 2002
I've had some e-mail asking if the diabetes drug I mentioned the other day is dead or not, and if not, why not. I don't have any direct contacts in the companies involved, not that they'd tell me all about it even if I did, but I can make some informed guesses. They'll illustrate what happens in these cases.
Readers in the industry will know that this situation (dramatically worse tox results in one species versus another) is a common one. You'd think that mice and rats, for example, would be pretty similar, but there are real differences at every level (from gross anatomy to molecular biology.)
To get off topic for a minute, that's one reason that I'm only partially impressed by figures showing how humans and (fill in the species) share (fill in some high percentage) of their DNA sequences. It's interesting, in one way, but the differences that do exist count for an awful lot.
Differences in toxicology between species, of course, are why the FDA (and drug companies themselves) want to see tox results from more than one species. The more, the better. Most of the time, it's rats and dogs, sometimes rats and monkeys, sometimes all three. Mice aren't considered quite as predictive a species - they're OK for rough-and-ready tox screening (and you need a lot less compound to do it that way,) but not for real decision making.
That's why I'm sure that Novo and Dr. Reddy's weren't thrilled at seeing bladder cancer in the rats, with much less of it in the mice. If it had been the other way around, the path forward might have been a little bit easier, but it'd be hard no matter what. Their compound isn't dead yet, I assume. But what it'll need to go forward is an idea of what the mechanism of the carcinogenesis might be.
Is it the parent compound causing trouble, or some metabolite? Which one? How much of it is in the urine, and how long does it stay there? As mentioned the other day, do rats make more of any of the metabolites, or are they just more sensitive to them? And, the big question once those have been answered: what do we know about how humans might behave?
If the companies have a backup compound waiting in the wings, then we can assume that it's already in intense tox trials. If it's clean, then the original drug is dead, of course, and the backup goes on, more or less as if nothing had happened. But the prudent course would be to do the work outlined above anyway, so you can use it to show why you got the clean tox results you did on the new compound. That's the only way to feel really sure.
I've had animal rights people make the argument to me that such differences in toxicity prove that animal models are worthless. Untrue, untrue. Without testing on animals, no one would have known that this compound could cause bladder cancer in any species at all. The known differences between humans and various animals can then be used to estimate the risks if the compound proceeds.
If there were an in vitro way to determine the risk, we'd all be lining up to use it. It would, by definition, be much faster, much cheaper, and much easier to apply earlier in the project before all that time, money, and effort gets wasted. If PETA and their ilk would like to devote themselves to developing such tests, I'll cheer them on.
Category: Animal Testing | Drug Development | Toxicology
July 24, 2002
Since the Coleridge quote went over well the other day, I thought I'd return to the line above (from Hilaire Belloc) to talk about why things advance slowly in the tox field. It's fear. Justifiable fear. When toxicologists find something that seems to work, they stick with it. They're not easily convinced by the latest gizmos. No one wants to be the first to rely on a new technique and have it backfire, because the consequences in patients are potentially so terrible.
Any new technology (gene chip assays, for example) has to piggyback on the existing stuff for a long time, until there are plenty of cases to show how well it correlates with the existing methods. Given the length of the drug development process, this is a matter of years, many years.
You also want to know when the new stuff might be likely to break down, so you know when to give less weight to the results. The worst thing you can have in a tox test is a false negative, because that can kill people. The second worst thing is a false positive, because that can kill drugs. There's not much room to fool around in.
Category: Toxicology
July 23, 2002
I've had some mail asking a good (and Frequently Asked) question: how good are the alternatives to animal testing? How close are we to not dosing animals to get toxicology information?
My short answer to the second question is, simultaneously, "A lot closer than we used to be" and "Not very close, for all that." The root of the problem is complexity. Toxicological properties are, to use the trendy word, emergent. You need the whole living system to be sure that you're seeing all there is to see.
You could try to mix and connect cell cultures, where the compound, after being exposed to one type of cell, then flowed off to another, and the original cells got a chance, if they'd been changed, to affect other different cell types. . .and so on. But by the time you got all the connections worked out, you'd have built an animal.
An example of an emergent tox problem is the recent withdrawal by Novo Nordisk of a clinical candidate that they were developing with the Indian company, Dr. Reddy's. Bladder cancer was the problem, seen in long-term dosing. But it's mostly a problem in rats - mice showed enough to notice, but it was the rat data that really set off the sirens.
There aren't a lot of good in vitro methods to predict carcinogenic potential. It's for sure that this compound had been through screens like the well-known Ames test for mutagenicity, for example. If it hadn't passed, it's unlikely that they would have carried the compound as far as they did. (I'll be writing more on the Ames test at a later date.)
Bladder cancer's a bit unusual. Playing the percentages, you'd have to guess that the problem isn't the compound itself, but some metabolite produced in the body which concentrates in the urine. And the rodent differences might suggest that rats produce more of this metabolite than mice do (or, alternatively, that they produce the same one, but that rat bladders are more sensitive to it.) Something like this would be the way to bet.
How much are you willing to bet, though? Are you willing to give people bladder cancer, or even put them at risk for it? (And are you willing to invite so many liability suits to land on you that you'll think it's snowing?) Your chances of getting through (and the chances of your customers!) depend on what the mechanism of the tox might be, and whether it operates in humans, as opposed to rats.
Novo and Dr. Reddy's are certainly going to take their time to thoroughly investigate what the problem might be, and whether it can be fixed. There was really no way to anticipate it without animal testing, though, since we don't have an in vitro system that mimics the bladder. Even if we did, they might have run their compound through it and gotten a green light, if the problem is in fact some later metabolic product. There's no substitute for the whole animal.
Category: Animal Testing | Toxicology
February 24, 2002
I'll continue to talk now and then about some topics that won't necessarily be news to those inside the pharma industry. I see from my traffic stats that most of the hits are from outside it, although there are increasing numbers from both my own company and the competition.
Thoughts of work prompt me to quiz non-specialists: at what point do you figure most drug projects fail? Ever thought about that one? You can be sure that everyone inside the industry has, oh yeah. There are plenty of data points to study - the sound of failing projects is this constant clanging in the background.
Failures in clinical trials get the most press attention. They're certainly bad enough to deserve it. You lose lots of candidates in Phase I, because they turned out to be toxic in normal volunteers. (Well, at least toxic in the sort of person that signs up for Phase I studies, but that's another story.) Then you lose some of those non-toxic candidates in Phase II, because they didn't work well enough, or at all. Failure at those stages is particularly expensive and frustrating; it's late in the game. And it means that your animal models turned sour on you somehow, by telling you the compound would be safe and that it would be efficacious.
Your disease models are what tell you the latter, and they vary from target to target. As I mentioned the other day, the ones for CNS ailments are particularly hairy; other fields have it easier (but not easy!) The animal toxicity tests for general safety, though, don't vary much between therapeutic areas. They're a notorious hurdle.
Tox is a long and expensive study, by preclinical standards. It usually calls for the largest batch of your drug candidate that you've made until then, and it's usually the longest animal study you've run, too. And it makes everyone involved hold their breath, because it's a complete black box.
It really is. That's actually where more projects wipe out than any other, in my experience. We really have no idea what's going to happen. Well, you may have some clue about toxicity based on your drug's mechanism, and you're already braced for that (and hoping it cuts in at much higher levels than you need to show the beneficial effects.) But it's that non-mechanistic tox that's out there waiting for all of our projects, and when it hits all you can do is run for cover. Kidney? This protein isn't even expressed in the kidney, what the. . .liver? But the tests on hepatocytes all came back OK. . .spleen? Who ever heard of a drug showing tox in the spleen? Heck, who needs a spleen anyway? And so on.
As I've alluded to in the past, you can sell just about anything to a big drug company if you promise to do something about the failure rate. If anyone has a bright idea for how to predict toxicity before we go into animals (or, God help us, humans,) then here's your chance to cash in. The management would be thrilled at all the time and money that doesn't go down the pipe. The scale-up chemists would be thrilled not to have to make buckets of loser compounds. Even the animal-rights people would be happy.
Note that the obvious ideas have probably already been tried. But don't be afraid to ask.
Category: Toxicology