About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany as a Humboldt Fellow during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
August 8, 2014
I wrote here about a study that suggested that mice are a poor model for human inflammation. That paper created quite a splash - many research groups had experienced problems in this area, and this work seemed to offer a compelling reason for why that should happen.
Well, let the arguing commence, because now there's another paper out (also in PNAS) that analyzes the same data set and comes to the opposite conclusion. The authors of this new paper are specifically looking at the genes whose expression changed the most in both mice and humans, and they report a very high correlation. (The previous paper looked at the mouse homologs of human genes, among other things).
I'm not enough of a genomics person to say immediately who's correct here. Could they both be right: most gene and pathway changes are different in human and mouse inflammation, but the ones that change the most are mostly the same? But there's something a bit weird in this new paper: the authors report P values that are so vanishingly small that I have trouble believing them. How about ten to the minus thirty-fifth? Have you ever in your life heard of such a thing? In whole-animal biology, yet? That alone makes me wonder what's going on. Seeing a P-value the size of Planck's constant just seems wrong, somehow.
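To give a sense of how a nominal P value can get that small without anything fishy going on in the arithmetic itself, here's a back-of-the-envelope sketch. The numbers below (3,000 genes, r = 0.55) are hypothetical, chosen only for illustration, and the tail probability uses the large-sample normal approximation to the t distribution:

```python
import math

def corr_t_stat(r, n):
    """t statistic for testing Pearson r == 0, given n paired observations."""
    return r * math.sqrt((n - 2) / (1 - r * r))

def log10_normal_tail(t):
    """log10 of the upper normal tail P(Z > t), using the large-t asymptotic
    (math.erfc underflows to zero long before t gets this big)."""
    ln_p = -t * t / 2 - math.log(t * math.sqrt(2 * math.pi))
    return ln_p / math.log(10)

# Hypothetical: 3,000 genes with a modest correlation of 0.55 across species.
t = corr_t_stat(0.55, 3000)
print(round(t, 1))                      # the t statistic is enormous
print(round(log10_normal_tail(t)))      # log10 of the nominal P value
```

The point of the sketch: with thousands of data points, even a middling correlation drives the nominal P value far below 10^-35. Which is exactly why such numbers should make you ask what the effective sample size and the correlation structure of the data really are, rather than taking the P value at face value.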
Category: Animal Testing
August 4, 2014
A reader sends along news that a minister at the UK's Home Office has made it his goal to completely eliminate animal testing in the country. Norman Baker has been a longtime activist on the issue of animal rights, and is now in a position to do something about it.
Or is he? Reading the article, it seems to me to be one of these "Form a commission to study the proposals for the plan" things. The current proposal is to increase the publicly available details about what animals are being used for:
In a statement, Mr Baker said: "The coalition government is committed to enhancing openness and transparency about the use of animals in scientific research to improve public understanding of this work. It is also a personal priority of mine.
"The consultation on Section 24 of the Animals in Science Act has now concluded and we are currently analysing responses in preparation for pursuing potential legislative change."
So I don't see a ban on animal experimentation in the UK any time soon - which would demolish what's left of the pharma industry there, along with great swaths of the academic biological research world as well. I am not in favor of animal suffering, and would gladly punch anyone who is. But given the state of our knowledge, there really is no alternative in many cases. We shouldn't be doing frivolous experiments, and we should all be mindful of alternatives. But the anti-testing people should realize how few good alternatives there really are.
I've found, by the way, that many activists are convinced that such alternatives are a lot more useful than they really are. When I've had a chance to press them for details, things get hazy very quickly. Phrases like "cell cultures" and "computer models" get thrown around, but how these can substitute for whole-animal disease models and toxicology - that turns out to be not so clear.
Category: Animal Testing
April 29, 2014
The difficulty of doing good animal studies has come up here many times, such as the recent suggestion that many rodent facilities should adjust their thermostats.
Now comes word of yet another subtle effect that no one has ever controlled for: mice apparently react differently to the scent of human males as compared to human females. Specifically, we guys stress them out more, an effect that shows up in assays of pain and inflammation (and likely many others besides). Here's the paper in Nature Methods, and I think that anyone running rodent studies had better sit down and read it at the first opportunity. There could well be a lot of messed-up data out there, and straightening things out will not be a short job.
Category: Animal Testing
April 8, 2014
Here's an article by Steve Perrin, at the ALS Therapy Development Institute, and you can tell that he's a pretty frustrated guy. With good reason.
That chart shows why. Those are attempted replicates of putative ALS drugs, and you can see that there's a bit of a discrepancy here and there. One problem is poorly run mouse studies, and the TDI has been trying to do something about that:
After nearly a decade of validation work, the ALS TDI introduced guidelines that should reduce the number of false positives in preclinical studies and so prevent unwarranted clinical trials. The recommendations, which pertain to other diseases too, include: rigorously assessing animals' physical and biochemical traits in terms of human disease; characterizing when disease symptoms and death occur and being alert to unexpected variation; and creating a mathematical model to aid experimental design, including how many mice must be included in a study. It is astonishing how often such straightforward steps are overlooked. It is hard to find a publication, for example, in which a preclinical animal study is backed by statistical models to minimize experimental noise.
All true, and we'd be a lot better off if such recommendations were followed more often. Crappy animal data is far worse than no animal data at all. But the other part of the problem is that the mouse models of ALS aren't very good:
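The sample-size point in those recommendations can be made concrete. A minimal sketch, using the standard normal-approximation formula for a two-sample comparison (two-sided alpha of 0.05, 80% power; the function name and the round numbers are mine, not from the ALS TDI guidelines):

```python
import math

def mice_per_group(effect_size):
    """Approximate n per group for a two-sample t-test via the normal
    approximation. effect_size is Cohen's d: mean difference divided by
    the shared standard deviation."""
    z_alpha = 1.96  # two-sided alpha = 0.05
    z_beta = 0.84   # power = 0.80
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Even a 'large' effect (d = 1.0) needs about 16 mice per group, and a
# moderate one (d = 0.5) needs about 63 -- more than many published
# preclinical studies actually use.
print(mice_per_group(1.0), mice_per_group(0.5))
```

Running those two cases gives 16 and 63 animals per group, which is a quick way to see why small, underpowered studies so reliably produce false positives that fail to replicate.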
. . .Mouse models expressing a mutant form of the RNA binding protein TDP43 show hallmark features of ALS: loss of motor neurons, protein aggregation and progressive muscle atrophy.
But further study of these mice revealed key differences. In patients (and in established mouse models), paralysis progresses over time. However, we did not observe this progression in TDP43-mutant mice. Measurements of gait and grip strength showed that their muscle deficits were in fact mild, and post-mortem examination found that the animals died not of progressive muscle atrophy, but of acute bowel obstruction caused by deterioration of smooth muscles in the gut. Although the existing TDP43-mutant mice may be useful for studying drugs' effects on certain disease mechanisms, a drug's ability to extend survival would most probably be irrelevant to people.
A big problem is that the recent emphasis on translational research in academia is going to land many labs right into these problems. As the rest of that Nature article shows, the ways for a mouse study to go wrong are many, various, and subtle. If you don't pay very close attention, and have people who know what to pay attention to, you could be wasting time, money, and animals to generate data that will go on to waste still more of all three. I'd strongly urge anyone doing rodent studies, and especially labs that haven't done or commissioned very many of them before, to read up on these issues in detail. It slows things down, true, and it costs money. But there are worse things.
Category: Animal Testing | The Central Nervous System
December 18, 2013
The unsuitability of mice as models of human inflammation was shown earlier in a paper that anyone in the field should have taken note of. Just last month, a paper in Science detailed the problems with many animal studies (mouse and otherwise), particularly the smaller ones, which can suffer from bad statistics and poor protocols.
Now we have this, from PNAS. The authors, from the Roswell Park Institute and the EPA, say that standard rodent facility conditions are actually causing unintended chronic physiological stress:
We show here that fundamental aspects of antitumor immunity in mice are significantly influenced by ambient housing temperature. Standard housing temperature for laboratory mice in research facilities is mandated to be between 20–26 °C; however, these subthermoneutral temperatures cause mild chronic cold stress, activating thermogenesis to maintain normal body temperature. When stress is alleviated by housing at thermoneutral ambient temperature (30–31 °C), we observe a striking reduction in tumor formation, growth rate and metastasis. . .Overall, our data raise the hypothesis that suppression of antitumor immunity is an outcome of cold stress-induced thermogenesis. Therefore, the common approach of studying immunity against tumors in mice housed only at standard room temperature may be limiting our understanding of the full potential of the antitumor immune response.
As mentioned in that last line, the problem seems to be with the adaptive immune system - this effect is driven by CD8+ T cells in almost every case, and sometimes by changes in CD4+ cells as well. Overall, housing mice at the recommended temperatures, which are on the cool side, seems to promote a general immunosuppression, which I think it's safe to say is not a factor that many people are taking into account. The animals have similar core body temperatures, but the extra burden of maintaining that in the cooler rooms is tipping some sort of balance - keeping all those immune systems running is apparently energetically costly, and they get downregulated.
This study looked at several sorts of tumorigenesis, but only for solid tumors, so the effects on leukemia, etc., are still unknown. You'd have to think, though, that several other disease areas could be affected by this situation as well - for example, how much of the uselessness of mice in inflammation models is caused by these changes? I'm simultaneously glad to see these things being uncovered, while being worried about how long it's taken to uncover them: what else are we missing?
Category: Animal Testing | Cancer
November 26, 2013
Here's an article from Science on the problems with mouse models of disease.
For years, researchers, pharmaceutical companies, drug regulators, and even the general public have lamented how rarely therapies that cure animals do much of anything for humans. Much attention has focused on whether mice with different diseases accurately reflect what happens in sick people. But Dirnagl and some others suggest there's another equally acute problem. Many animal studies are poorly done, they say, and if conducted with greater rigor they'd be a much more reliable predictor of human biology.
The problem is that the rigor of animal studies varies widely. There are, of course, plenty of well-thought-out, well-controlled ones. But there are also a lot of studies with sample sizes that are far too small, that are poorly randomized, unblinded, etc. As the article mentions (just to give one example), sticking your gloved hand into the cage and pulling out the first mouse you can grab is not an appropriate randomization technique. They aren't lottery balls - although some of the badly run studies might as well have used those instead.
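For contrast, proper randomization takes about five lines of code. A minimal sketch (the function name, group labels, and animal count are hypothetical, for illustration only):

```python
import random

def randomize_to_groups(animal_ids, group_names, seed=None):
    """Randomly assign animals to groups of near-equal size.

    Shuffling the full ID list removes the 'first mouse I could grab' bias;
    recording the seed makes the allocation reproducible and auditable.
    """
    rng = random.Random(seed)
    ids = list(animal_ids)
    rng.shuffle(ids)
    groups = {name: [] for name in group_names}
    for i, animal in enumerate(ids):
        groups[group_names[i % len(group_names)]].append(animal)
    return groups

# Twenty ear-tagged mice into three dosing groups, with a logged seed:
allocation = randomize_to_groups(range(1, 21),
                                 ["vehicle", "low_dose", "high_dose"],
                                 seed=42)
for name, members in allocation.items():
    print(name, sorted(members))
```

The easy reachability of the docile mice near the cage door is precisely the sort of confound that grab-sampling bakes into a study, and that a shuffled allocation removes.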
After lots of agitating and conversation within the National Institutes of Health (NIH), in the summer of 2012 [Shai] Silberberg and some allies went outside it, convening a workshop in downtown Washington, D.C. Among the attendees were journal editors, whom he considers critical to raising standards of animal research. "Initially there was a lot of finger-pointing," he says. "The editors are responsible, the reviewers are responsible, funding agencies are responsible. At the end of the day we said, 'Look, it's everyone's responsibility, can we agree on some core set of issues that need to be reported' " in animal research?
In the months since then, there's been measurable progress. The scrutiny of animal studies is one piece of an NIH effort to improve openness and reproducibility in all the science it funds. Several institutes are beginning to pilot new approaches to grant review. For an application based on animal results, this might mean requiring that the previous work describe whether blinding, randomization, and calculations about sample size were considered to minimize the risk of bias. . .
Not everyone thinks that these new rules are going to work, though, or are even the right way to approach the problem:
Some in the field consider such requirements uncalled for. "I am not pessimistic enough to believe that the entire scientific community is obfuscating results, or that there's a systematic bias," says Joseph Bass, who studies mouse models of obesity and diabetes at Northwestern University in Chicago, Illinois. Although Bass agrees that mouse studies often aren't reproducible—a problem he takes seriously—he believes that's not primarily because of statistics. Rather, he suggests the reasons vary by field, even by experiment. For example, results in Bass's area, metabolism, can be affected by temperature, to which animals are acutely sensitive. They can also be skewed if a genetic manipulation causes a side effect late in life, and researchers try to use older mice to replicate an effect observed in young animals. Applying blanket requirements across all of animal research, he argues, isn't realistic.
I think, though, that there must be some minimum requirements that could be usefully set, even with every field having its own peculiarities. After all, the same variables that Bass mentions above - which are most certainly real ones - could affect studies in completely different fields. This, of course, is one of the biggest reasons that drug companies restrict access to their animal facilities. There's always a separate system to open those doors, and if you don't have the card to do it, you're not supposed to be in there. Pace the animal rights activists, that's not because it's so terrible in there that the rest of us wouldn't be able to take it. It's because they don't want anyone coming in there and turning on lights, slamming doors, sneezing, or doing any of four dozen less obvious things that could screw up the data. This stuff is expensive, and it can be ruined quite easily. It's like waiting for a four-week-long soufflé to rise.
That brings up another question - how do the animal studies done in industry compare to those done in academia? The Science article mentions some work done recently by Lisa Bero of UCSF. She was looking at animal studies on the effects of statins, and found, actually, that industry-sponsored research was less likely to find that the drug under investigation was beneficial. The explanation she advanced is a perfectly good one: if your animal study is going to lead you to spend the big money in the clinic, you want to be quite sure that you can believe the data. That's not to say that there aren't animal studies in the drug industry that could be (or could have been) run better. It's just that there are, perhaps, more incentives to make sure that the answer is right, rather than just being interesting and publishable.
Doesn't the same reasoning apply to human studies? It certainly should. The main complicating factor I can think of is that once a company, particularly a smaller one, has made the big leap into human clinical trials, it also has an incentive to find something that's good enough to keep going with, and/or good enough to attract more investment. So perverse incentives are, I'd guess, more of a problem once you get to human trials, because it's such a make-or-break situation. People are probably more willing to get the bad news from an animal study and just groan and say "Oh well, let's try something else". Saying that after an unsuccessful Phase II trial is something else again, and takes a bit more sang-froid than most of us have available. (And, in fact, Bero's previous work on human trials of statins seems to show various forms of bias at work, although publication bias is surely not the least of them).
Category: Animal Testing
February 13, 2013
We go through a lot of mice in this business. They're generally the first animal that a potential drug runs up against: in almost every case, you dose mice to check pharmacokinetics (blood levels and duration), and many areas have key disease models that run in mice as well. That's because we know a lot about mouse genetics (compared to other animals), and we have a wide range of natural mutants, engineered gene-knockout animals (difficult or impossible to do with most other species), and chimeric strains with all sorts of human proteins substituted back in. I would not wish to hazard a guess as to how many types of mice have been developed in biomedical labs over the years; it is a large number representing a huge amount of effort.
But are mice always telling us the right thing? I've written about this problem before, and it certainly hasn't gone away. The key things to remember about any animal model are that (1) it's a model, and (2) it's in an animal. Not a human. But it can be surprisingly hard to keep these in mind, because there's no other way for a compound to become a drug other than going through the mice, rats, etc. No regulatory agency on Earth (OK, with the possible exception of North Korea) will let a compound through unless it's been through numerous well-controlled animal studies, for short- and long-term toxicity at the very least.
These thoughts are prompted by an interesting and alarming paper that's come out in PNAS: "Genomic responses in mouse models poorly mimic human inflammatory diseases". And that's the take-away right there, which is demonstrated comprehensively and with attention to detail.
Murine models have been extensively used in recent decades to identify and test drug candidates for subsequent human trials. However, few of these human trials have shown success. The success rate is even worse for those trials in the field of inflammation, a condition present in many human diseases. To date, there have been nearly 150 clinical trials testing candidate agents intended to block the inflammatory response in critically ill patients, and every one of these trials failed. Despite commentaries that question the merit of an overreliance of animal systems to model human immunology, in the absence of systematic evidence, investigators and public regulators assume that results from animal research reflect human disease. To date, there have been no studies to systematically evaluate, on a molecular basis, how well the murine clinical models mimic human inflammatory diseases in patients.
What this large multicenter team has found is that while various inflammation stresses (trauma, burns, endotoxins) in humans tend to go through pretty much the same pathways, the same is not true for mice. Not only do they show very different responses from humans (as measured by gene up- and down-regulation, among other things), they show different responses to each sort of stress. Humans and mice differ in what genes are called on, in their timing and duration of expression, and in what general pathways these gene products are found. On this evidence, mice look like thoroughly unsuitable models for studying human inflammation.
And there are a lot of potential reasons why this turns out to be so:
There are multiple considerations to our finding that transcriptional response in mouse models reflects human diseases so poorly, including the evolutional distance between mice and humans, the complexity of the human disease, the inbred nature of the mouse model, and often, the use of single mechanistic models. In addition, differences in cellular composition between mouse and human tissues can contribute to the differences seen in the molecular response. Additionally, the different temporal spans of recovery from disease between patients and mouse models are an inherent problem in the use of mouse models. Late events related to the clinical care of the patients (such as fluids, drugs, surgery, and life support) likely alter genomic responses that are not captured in murine models.
But even with all the variables inherent in the human data, our inflammation response seems to be remarkably coherent. It's just not what you see in mice. Mice have had different evolutionary pressures over the years than we have; their heterogeneous response to various sorts of stress is what's served them well, for whatever reasons.
There are several very large and ugly questions raised by this work. All of us who do biomedical research know that mice are not humans (nor are rats, nor are dogs, etc.) But, as mentioned above, it's easy to take this as a truism - sure, sure, knew that - because all our paths to human go through mice and the like. The New York Times article on this paper illustrates the sort of habits that you get into (emphasis below added):
The new study, which took 10 years and involved 39 researchers from across the country, began by studying white blood cells from hundreds of patients with severe burns, trauma or sepsis to see what genes are being used by white blood cells when responding to these danger signals.
The researchers found some interesting patterns and accumulated a large, rigorously collected data set that should help move the field forward, said Ronald W. Davis, a genomics expert at Stanford University and a lead author of the new paper. Some patterns seemed to predict who would survive and who would end up in intensive care, clinging to life and, often, dying.
The group had tried to publish its findings in several papers. One objection, Dr. Davis said, was that the researchers had not shown the same gene response had happened in mice.
“They were so used to doing mouse studies that they thought that was how you validate things,” he said. “They are so ingrained in trying to cure mice that they forget we are trying to cure humans.”
“That started us thinking,” he continued. “Is it the same in the mouse or not?”
What's more, the article says that this paper was rejected from Science and Nature, among other venues. And one of the lead authors says that the reviewers mostly seemed to be saying that the paper had to be wrong. They weren't sure where things had gone wrong, but a paper saying that murine models were just totally inappropriate had to be wrong somehow.
We need to stop being afraid of the obvious, if we can. "Mice aren't humans" is about as obvious a statement as you can get, but the limitations of animal models are taken so much for granted that we actually dislike being told that they're even worse than we thought. We aren't trying to cure mice. We aren't trying to make perfect disease models and beautiful screening cascades. We aren't trying to perfectly match molecular targets with diseases, and targets with compounds. Not all the time, we aren't. We're trying to find therapies that work, and that goal doesn't always line up with those others. As painful as it is to admit.
Category: Animal Testing | Biological News | Drug Assays | Infectious Diseases
November 1, 2012
When I mentioned the people working in the research animal facilities before Hurricane Sandy, I had no idea that this was going to happen: thousands of genetically engineered and/or specially bred rodents were lost from an NYU facility due to flooding. The Fishell lab appears to have lost its entire stock of 2,500 mice, representing 10 years of work. Very bad news indeed for the people whose careers were depending on these.
Category: Animal Testing | Current Events
November 22, 2011
If you haven't seen it, this series by Daniel Engber at Slate, on the use of the mouse as a laboratory workhorse, is excellent. (And I'm not just saying that because he references some of my disparaging comments about xenograft models, although that did give me a chance to teach my kids what the word "acerbic" means).
He has a lot of good points, which will resonate with people who do research (and inform those who don't). For example, writing on the ubiquity of C57 black mice, he asks:
So one dark-brown lab mouse came to stand in for every other lab mouse, just as the inbred lab mouse came to stand in for every other rodent, and the rodent came to stand in for dogs and cats and rabbits and rhesus monkeys, the standard models that themselves stood in for all Animalia. But where is Black-6 taking us? How much can we learn from a single mouse?
A lot - but enough? That's always the background question with animal models. My take has long been that they're tricky, not always reliable, and still, infuriatingly, essential. The problem is that even things like xenograft models are terrible only on the absolute scale. On the relative scale - compared to all the other animal models for new oncology drugs - they're pretty good. And compared to not putting your drugs into an animal at all before going to humans, well. . .
Category: Animal Testing | Cancer
January 17, 2011
Some time ago, I took nominations for Least Useful Animal Models. There were a number of good candidates, many of them from the CNS field. A recent report makes me think that these are even stronger contenders than I thought.
The antidepressant reboxetine (not approved in the US, but sold in a number of other countries by Pfizer) was recently characterized by a German meta-analysis of the clinical data as "ineffective and potentially harmful". Its benefits versus placebo (and SSRI drugs) have been overestimated, and its potential for harm underestimated. It was approved in Europe in 1997, and provisionally by the FDA in 1999, although that was later rolled back when more studies came in that showed lack of efficacy.
Much has been made of the fact that Pfizer had not published many of the studies they conducted on the drug. These do seem, however, to have been available to regulatory authorities, and were the basis for the FDA's decision not to grant full approval. As that BMJ link discusses, though, there's often not a clear pathway, especially in the EU, for a regulatory agency to go back and re-examine a previous decision based on efficacy (as opposed to safety).
So the European regulatory agencies can be faulted for not revisiting their decision on this drug in a better (and quicker) fashion, and Pfizer can certainly be faulted for letting things stand (in the face of evidence that the drug was not effective). All this is worrisome, but these are problems that are being dealt with. Since 2007, for example, trials for the FDA have been required to be posted at clinicaltrials.gov, although the nontransparency of older data can make it hard to compare newer and older treatments in the same area.
What's not being dealt with as well is an underlying scientific problem. As this piece over at Scientific American makes plain, reboxetine, although clinically ineffective, works just fine in all the animal models:
And this is a rough moment for scientists studying depression. Why? Because reboxetine works beautifully in our animal models. It’s practically a poster-child antidepressant. It produces acute effects in tests such as forced-swim tests and tail-suspension tests (which use changes in struggle as a measure of antidepressant efficacy). It produces neurogenesis in the hippocampus, which is thought to be correlated with antidepressant effects. When behavioral pharmacologists are doing comparisons between older antidepressants and newer ones, reboxetine is often used as a positive control, a drug known to have an effect in the behavioral test of choice.
But it doesn’t work in patients. And patients are what matters. Now, scientists are stuck with a difficult question: What went wrong?
A very good question, and one without any very good answers. And this certainly isn't the first CNS drug to show animal model efficacy but do little good in people. So, how much is the state of the art advancing? Are we getting anywhere, or just doing the same old thing?
Category: Animal Testing | Clinical Trials | Regulatory Affairs | The Central Nervous System | The Dark Side
August 16, 2010
The topic of new drugs for cancer has come up repeatedly around here - and naturally enough, considering how big a focus it is for the industry. Most forms of cancer are the very definition of "unmet medical need", and the field has plenty of possible drug targets to address.
But we've been addressing many of them in recent years, with incremental (but only rarely dramatic) progress. It's quite possible that this is what we're going to see - small improvements that gradually add up, with no big leaps. If the alternative is no improvement at all, I'll gladly take that. But some other therapeutic areas have perhaps made us expect more. Infectious disease, for example: the early antibiotics looked like magic, as patients that everyone fully expected to die started asking when dinner was and when they could go home. That's what everyone wants to see, in every disease, and having seen it (even fleetingly), we all want to have it happen again.
And it has happened for a few tumor types, most notably childhood leukemia. But we definitely need to add more to the list, and it's been a frustrating business. Believe me, it's not like we in the business are aiming for incremental improvements, a few weeks or months here and there. Every time we go after a new target in oncology, we hope that this one is going to be - for some sort of cancer - the thing that completely knocks it down.
We may be thinking about this the wrong way, though. For many years now, there have been people looking at genetic instability in tumor cells. (See this post from 2002 - yes, this blog has been around that long!) If this is a major component of the cancerous phenotype, it means that we could well have trouble with a target-by-target approach. (See this post by Robert Langreth at Forbes for a more recent take). And here's a PubMed search - as you can see, there's a lot of literature in this field, and a fair amount of controversy, too.
That would, in fact, mean that cancer shares something with infectious disease, and not, unfortunately, the era of the 1940s when the bacteria hadn't figured out what we could do to them yet. No, what it might mean is that many tumors might be made of such heterogeneous, constantly mutating cells that no one targeted approach will have a good chance of knocking them down sufficiently. Since that's exactly what we see, this is a hypothesis worth taking seriously.
There are other implications for drug discovery. Anyone who's worked in oncology knows that the animal tumor models we tend to use - xenografts of human cell lines - are not particularly predictive of success. "Necessary but nowhere near sufficient" is about as far as I'd be willing to go. Could that be because these cells, however vigorously they grow, have lost (or never had) that rogue instability that makes the wild-type tumors so hard to fight? I haven't seen a study of genetic instability in these tumor lines, but it would be worth checking.
What we might need, then, are better animal models to start with - here's a review on some efforts to find them. From a drug discovery perspective, we might want to spend more time on oncology targets that work outside the cancer cells themselves. And clinically, we might want to spend more time studying combinations of agents right from the start, and less on single-drug-versus-standard-of-care studies. The disadvantage there is that it can be hard to know where to start - but we need to weigh that against the chances of a single agent actually working.
Category: Animal Testing | Cancer | Clinical Trials | Drug Development
April 26, 2010
I don't think we saw this one coming: Charles River Labs has announced that they're buying WuXi PharmaTech. They're paying about a 28% premium over Friday's closing stock price - Charles River's CEO will stay on, and WuXi's founder (Li Ge) will serve as executive VP under him.
Charles River, which is strong in the animal-testing end of the business, has apparently concluded that WuXi is one of their biggest competitors (I'd agree) and has decided to try to stake out a leading position in the whole contract-research space. It's interesting to me that the folks at WuXi bought into this reasoning as well, although (since they're a publicly traded company here in the US) a lucrative stock offer can be its own argument. One now wonders, though, about the company's statements on re-staffing some of their US labs when economic conditions improve. . .
+ TrackBacks (0) | Category: Animal Testing | Business and Markets | Drug Assays | Drug Development
March 30, 2010
A new paper in PLoS Biology looks at animal model studies reported for the treatment of stroke. The authors use statistical techniques to try to estimate how many have gone unreported. From a database with 525 sources, covering 16 different attempted therapies (which together come to 1,359 experiments and 19,956 animals), they find that only a very small fraction of the publications (about 2%) report no significant effects, which strongly suggests that there is a publication bias at work here. The authors estimate that there may well be around 200 experiments that showed no significant effect and were never reported, whose absence would account for around one-third of the efficacy reported across the field. In case you're wondering, the therapy least affected by publication bias was melatonin, and the one most affected seems to be administering estrogens.
I hadn't seen this sort of study before, and the methods they used to arrive at these results are interesting. If you plot the precision of the studies (Y axis) versus the effect size (X axis), you should (in theory) get a triangular cloud of data. As the precision goes down, the spread of measurements across the X-axis increases, and as the precision goes up, the studies should start to converge on the real effect of the treatment, whatever that might be. (In this study, the authors looked only at reported changes in infarct size as a measure of stroke efficacy). But in many of the reported cases, the inverted-funnel shape isn't symmetrical - and every single time that happens, it turns out that the gaps are in the left-hand side of the triangle, the not-as-precise and negative-effect regions of the plots. This doesn't appear to be just due to less-precise studies tending to show positive effects for some reason - it strongly suggests that there are negative studies that just haven't been reported.
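For readers who'd like to see how that sort of asymmetry check can work in practice, here's a minimal sketch. To be clear, this is not the authors' actual method, and every number in it is made up: we simulate a set of studies around a true effect, censor the imprecise negative-looking ones to mimic publication bias, and then run an Egger-style regression of the standardized effect against precision. A symmetric funnel gives an intercept near zero; selective reporting pulls it away.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 60 hypothetical studies of a true effect of 0.3.
true_effect = 0.3
se = rng.uniform(0.05, 0.6, size=60)        # standard errors (precision = 1/se)
effects = true_effect + rng.normal(0, se)   # observed effect sizes

# Mimic publication bias: imprecise, negative-looking studies go unreported.
published = (effects / se) > -0.5
eff_pub, se_pub = effects[published], se[published]

# Egger-style asymmetry test: regress the standardized effect (z = effect/se)
# on precision (1/se). With a symmetric funnel the intercept sits near zero;
# censoring the left-hand side of the funnel pushes it away from zero.
z = eff_pub / se_pub
precision = 1.0 / se_pub
slope, intercept = np.polyfit(precision, z, 1)
print(f"published {published.sum()} of 60 studies; Egger intercept = {intercept:.2f}")
```

The slope estimates the underlying effect; the intercept is the bias diagnostic. Real meta-analyses use weighted versions of this regression and formal trim-and-fill methods, but the geometric idea is the same as the missing-corner-of-the-funnel argument above.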
The authors point out that applying their statistical techniques to reported human clinical studies is more problematic, since smaller (and thus less precise) trials may well involve unrepresentative groups of patients. But animal studies are much less prone to this problem.
The loss of experiments that showed no effect shouldn't surprise anyone - after all, it's long been known that publishing such papers is just plain harder than publishing ones that show something happening. There's an obvious industry bias toward only showing positive data, but there's an academic one, too, which affects basic research results. As the authors put it:
These quantitative data raise substantial concerns that publication bias may have a wider impact in attempts to synthesise and summarise data from animal studies and more broadly. It seems highly unlikely that the animal stroke literature is uniquely susceptible to the factors that drive publication bias. First, there is likely to be more enthusiasm amongst scientists, journal editors, and the funders of research for positive than for neutral studies. Second, the vast majority of animal studies do not report sample size calculations and are substantially underpowered. Neutral studies therefore seldom have the statistical power confidently to exclude an effect that would be considered of biological significance, so they are less likely to be published than are similarly underpowered “positive” studies. However, in this context, the positive predictive value of apparently significant results is likely to be substantially lower than the 95% suggested by conventional statistical testing. A further consideration relating to the internal validity of studies is that of study quality. It is now clear that certain aspects of experimental design (particularly randomisation, allocation concealment, and the blinded assessment of outcome) can have a substantial impact on the reported outcome of experiments. While the importance of these issues has been recognised for some years, they are rarely reported in contemporary reports of animal experiments.
And there's an animal-testing component to these results, too, of course. But lest activists seize on the part of this paper that suggests that some animal testing results are being wasted, they should consider the consequences (emphasis below mine):
The ethical principles that guide animal studies hold that the number of animals used should be the minimum required to demonstrate the outcome of interest with sufficient precision. For some experiments, this number may be larger than those currently employed. For all experiments involving animals, nonpublication of data means those animals cannot contribute to accumulating knowledge and that research syntheses are likely to overstate biological effects, which may in turn lead to further unnecessary animal experiments testing poorly founded hypotheses.
This paper is absolutely right about the obligation to have animal studies mean something to the rest of the scientific community, and it's clear that this can't happen if the results are just sitting on someone's hard drive. But it's also quite possible that for even some of the reported studies to have meant anything, they would have had to use more animals in the first place. Nothing's for free.
+ TrackBacks (0) | Category: Animal Testing | Cardiovascular Disease | Clinical Trials | Drug Assays | The Scientific Literature
March 3, 2010
Well, here's a brow-furrowing paper, courtesy of PNAS. The authors, from the National Institute on Aging, contend that most laboratory rodents are overfed, under-stimulated, and (to use their phrase) "metabolically morbid". This affects their suitability as control and experimental animals for a wide variety of assays.
There seem to be effects across the board - the immune system, glucose and lipid handling, cardiovascular numbers, susceptibility to tumors, cognitive performance. The list is a long one, and the root causes seem to be ad libitum feeding and lack of exercise. The beneficial effects of some drugs in rodent models, the authors propose, could be due (at least in part) to their ability to reverse the artificial conditions that the animals are maintained under, and the application of these results to the real world could be doubtful. (The same concerns don't apply nearly as much to larger animals such as dogs and primates. They're handled differently, and their physiologies don't seem to be altered, or at least nowhere near as much).
Of course, some people live similar lifestyles, as far as the lack of activity and ad libitum feeding goes, so I have to wonder about the rodents being better test animals than one might wish for. But overall, this seems like a useful wake-up call to the animal testing community, especially in some therapeutic areas. On a domestic level, I'm thinking through the implications of this for the two guinea pigs my children have - they seem to sit around and eat all the time. The guinea pigs, I mean, not the kids.
+ TrackBacks (0) | Category: Animal Testing
August 13, 2009
Why do we test drugs on animals, anyway? This question showed up in the comments section from a lay reader. It's definitely a fair thing to ask, and you'd expect that we in the business would have a good answer. So here it is: because for all we know about biochemistry, about physiology and about biology in general, living systems are still far too complex for us to model. We're more ignorant than we seem to be. The only way we can find out what will happen if we give a new compound to a living creature is to give it to some of them and watch carefully.
That sounds primitive, and I suppose it is. We don't do it in a primitive manner, though. We watch with all the tools of our trade - remote-control physiological radio transmitters, motion-sensing software hooked up to video cameras, sensitive mass spectrometry analysis of blood, of urine, and whatever else, painstaking microscopic inspection of tissue samples, whatever we can bring to bear. But in the end, it all comes down to dosing animals and waiting to see what happens. That principle hasn't changed in decades, just the technology we use to do it.
No isolated enzymes can yet serve as a model for what can happen in a single real cell. And no culture of cells can recapitulate what goes on in a real organism. The signaling, the feedback loops, the interconnectedness of these systems is (so far) too much for us to handle. We keep discovering new pathways all the time, things that no model would have included because we didn't even know that they were there. The end is not yet in sight, occasional newspaper headlines to the contrary.
We do use all those things as filters before a compound even sees its first rodent. In a target-driven approach, which is the great majority of the industry, if a compound doesn't work on an isolated protein, it doesn't go on to the cell assay. If it doesn't work on the cells, it doesn't go on to animals. (And if it kills cells, it most certainly doesn't go on to the animals, unless it's some blunderbuss oncology agent of the old school). The great majority of compounds made in this business have never been given to so much as one mouse, and never will.
So what are we looking for when we finally do dose animals? We're waiting to see if the compound has the effect we're hoping for, first off. Does it lower blood pressure, slow or stop the growth of tumors, or cure viral infections? Doing these things requires having sick animals, of course. But we also give the drug candidates to healthy ones, at higher doses and for longer periods of time, in order to see what else the compounds might do that we don't expect. Most of those effects are bad - I'd casually estimate 99% of the time, anyway - and many of them will stop a drug candidate from ever being developed. The more severe the toxic effect, the greater the chance that it's based on some fundamental mechanism that will be common to all animals. In some cases we can identify what's causing the trouble, once we've seen it, and once in a great while we can use that information to argue that we can keep going, that humans wouldn't be at the same risk. But this is very rare - we generally don't know enough to make a persuasive case. If your compound kills mice or kills rats, your compound is dead, too.
I've lost count of the number of compounds I've worked on that have been pulled due to toxicity concerns; suffice it to say that it's a very common thing. Every time it's been something different, and it's often not for any of the reasons I feared beforehand. I've often said here that if you don't hold your breath when your drug candidate goes into its first two-week tox testing, then you haven't been doing this stuff long enough.
Here's the problem: giving new chemicals to animals to see if they get sick (and making animals sick so that we can see if they get better) are not things that are directly compatible with trying to keep animals from suffering. Ideally, we would want to do neither of those things. Fortunately, several factors all line up in the same direction to keep things moving toward that.
For one thing, animal testing is quite expensive. Only human testing is costlier. In this case, ethical concerns and capitalist principles manage to line up very well indeed. Doing assays in vitro is almost invariably faster and cheaper, so whenever we can confidently replace a direct animal observation with an assay on a dish, plate, or chip, we do. All that equipment I mentioned above has also cut down on the number of animals needed, and that trend is expected to continue as our measurements become more sensitive.
So things are lined up in the right direction. Any company that found a reliable way to eliminate any significant part of its animal testing would immediately find itself in a better competitive position.
And for the existing tests, it's also fortunate that unhappy animals give poor data. We want to observe them under the most normal conditions possible, not with stress hormones running through their systems, and a great deal of time and trouble (and money) goes toward that end. (In this case, it's scientific principles that line up with ethical ones). Diseased animals are clearly going to be in worse shape than normal ones, but in these situations, too, we try to minimize all the other factors so we're getting as clear a read as possible on changes in the disease itself.
So that's my answer: we use animals because we have (as yet) no alternative. And our animal assays prove that to us over and over by surprising us with things we didn't know, and that we would have had no other opportunity to learn. We'd very much like to be able to do things differently, since "differently" would surely mean "faster and more cheaply". None of us enjoy it when our compounds sicken healthy animals, or have no effect on sick ones. Just the wasted time and effort alone is enough to make any drug discoverer think so. There are billions of dollars waiting to be picked up by anyone who finds a better way.
+ TrackBacks (0) | Category: Animal Testing | Pharma 101
August 11, 2009
Novartis has had trouble for years with animal rights activists, and now things are getting nastier than ever:
Novartis CEO Daniel Vasella says the people who burned down his holiday home and defiled his family's graves are not criminals but "terrorists" beyond dialogue.
In an interview with the SonntagsBlick newspaper, the 55-year-old chief executive said the attacks have changed his life and that more needs to be done to rein in the animal-rights extremists believed responsible for the "wicked" acts.
Last week Vasella's home in Austria was set on fire. In July his mother's urn was stolen and his dead 19-year-old sister's grave was desecrated. Crosses bearing his name and that of his wife were placed in a Chur cemetery. Workers' cars have been torched and angry graffiti sprayed on walls. . .
"How far do things have to go before you can speak of terrorism?" Vasella told the newspaper.
I'd say that's far enough, definitely. If that's not being done with intent to terrorize, then what? One idiotic part of the whole business is that the protesters seem to be trying to get Novartis to stop working with Huntingdon Life Sciences, the British animal testing company. (Similar tactics have been used elsewhere). But Novartis says that they currently have no relationship at all with HLS, and haven't for several years.
Mere statements of dull fact, though, won't make a dent in the self-righteousness of the sorts of people who think that spray-painting gravestones is a blow for justice.
+ TrackBacks (0) | Category: Animal Testing | Why Everyone Loves Us
October 16, 2008
Key steps in all drug discovery programs are the cellular and animal models. The cells are the first time that the compounds are exposed to a living system (with cellular membranes that keep things out). The animals, of course, are a very stringent test indeed, with the full inventory of absorption, metabolism, and excretion machinery, along with the possibility of side effects in systems that you might not have even considered.
So it’s a tricky business to make sure that these tests are being done in the most meaningful way possible. You can knock your project out of promising areas for development if your model systems are too tough – and it’s even easier to water them down in the interest of getting numbers that make everyone feel better. “As stringent as they need to be” is the rule, but it’s a hard one to handle in practice.
Take, for example, the antibacterial field. The first cell assays there are unusually meaningful, since they’re being done on the real live targets of the drugs. (That doesn’t do much to get you past the high barrier of animal testing, though, since you have to see if your compounds that kill bacteria in a dish will still do it in that much more demanding environment). But there are all sorts of strains of bacteria out there, and it’s up to you to choose the ones that will tell you the most about what your compounds can do.
One way that bacteria evade being killed off by our wonder drug candidates is by pumping the compounds right back out once they get in. There are quite a few of the efflux pumps, and wild-type bacteria (particularly the resistant strains) are well stocked with them. You can culture all sorts of mutants, though, with these various transport mechanisms ablated or wiped out completely. If your compound doesn’t work on the normal lines, but cuts a swath through some of these, you have good evidence that your problem is efflux pumping, not some intrinsic problem with your target mechanism.
The problem is, we often don’t have a very good idea of what to do about efflux pumping. These proteins recognize a huge variety of different structures, and there aren’t really many useful ways to predict what they’ll take up versus what they’ll leave alone. In many cases, you just have to throw all sorts of variations at them and hope for the best. (The same goes for the other situations where active transport can be a big factor, such as with cancer cells and the blood-brain barrier).
So, how do you set up your assays? You can run the crippled bacteria first, which will give you an idea of the intrinsic potencies of your compounds, minus the pumping difficulty. That may be the way to go, but you’d better follow that up with some things closer to wild-type, or you’re going to end up kidding yourself. Having a compound that infallibly kills only those bacteria that can’t spit it out is probably not going to do you (or anyone else) much good, considering what the situation is like out in the real world.
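That first-pass comparison can be boiled down to a toy triage script. Everything here is invented for illustration - the compound names, the MIC values, and the eight-fold cutoff are not taken from any real screening deck - but the logic (a big potency shift between wild-type and pump-knockout strains flags an efflux liability) is the one described above:

```python
# Hypothetical MIC data (µg/mL) against a wild-type strain and an
# efflux-pump-knockout mutant. All names and numbers are made up.
mics = {
    #  compound    (wild-type, pump-knockout)
    "cmpd-1": (64.0, 0.5),   # big shift: efflux is the problem
    "cmpd-2": (32.0, 16.0),  # small shift: likely an intrinsic potency issue
    "cmpd-3": (0.5, 0.25),   # already potent on wild-type
}

def efflux_flag(wt_mic: float, ko_mic: float, fold_cutoff: float = 8.0) -> bool:
    """Flag a compound as efflux-limited if removing the pumps
    improves the MIC by at least `fold_cutoff`."""
    return (wt_mic / ko_mic) >= fold_cutoff

for name, (wt, ko) in mics.items():
    verdict = "efflux-limited" if efflux_flag(wt, ko) else "not efflux-limited"
    print(f"{name}: {verdict}")
```

The cutoff is the judgment call, of course - set it too low and everything looks like an efflux problem; set it too high and you'll write off compounds you could have rescued.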
The same principle holds for other assays, all the way up to rats. If you run a relative pushover model in oncology, you can put up a very impressive plot of how powerful your compounds are. But what does that do for you in the end? Or for cancer patients, whose malignant cells are much more wily and aggressive? The best course, I’d say, is to run the watered-down models if they can tell you something that will help you move things along. But get to the wild-types, the real thing, as soon as possible. Those latter models may tell you things that you don’t want to hear – but that doesn’t mean that you don’t need to hear them.
+ TrackBacks (0) | Category: Animal Testing | Drug Assays | Drug Development
April 3, 2008
I was having a discussion the other day about which therapeutic areas have the best predictive assays. That is, what diseases can you be reasonably sure of treating before your drug candidate gets into (costly) human trials? As we went on, things settled out roughly like this:
Cardiovascular (circulatory): not so bad. We’ve got a reasonably good handle on the mechanisms of high blood pressure, and the assays for it are pretty predictive, compared to a lot of other fields. (Of course, that’s also now one of the most well-served therapeutic areas in all of medicine). There are some harder problems, like primary pulmonary hypertension, but you could still go into humans with a bit more confidence than usual if you had something that looked good in animals.
Cardiovascular (lipids): deceptive. There aren’t any animals that handle lipids quite the way that humans do, but we’ve learned a lot about how to interpolate animal results. That plus the various transgenic models gives you a reasonable read. The problem is, we don’t really understand human lipidology and its relation to disease as well as we should (or as well as a lot of people think we do), so there are larger long-term problems hanging over everything. But yeah, you can get a new drug with a new mechanism to market. Like Vytorin.
CNS: appalling. That goes for the whole lot – anxiety, depression, Alzheimer’s, schizophrenia, you name it. The animal models are largely voodoo, and the mechanisms for the underlying diseases are usually opaque. The peripheral nervous system isn’t much better, as anyone who’s worked in pain medication will tell you ruefully. And all this is particularly disturbing, because the clinical trials here are so awful that you’d really appreciate some good preclinical pharmacology: patient variability is extreme, the placebo effect can eat you alive, and both the diseases and their treatments tend to progress very, very slowly. Oh, it’s just a nonstop festival of fun over in this slot. Correspondingly, the opportunities are huge.
Anti-infectives: good, by comparison. It’s not like you can’t have clinical failures in this area, but for the most part, if you can stop viruses or kill bugs in a dish, you can do it in an animal, or in a person. The questions are always whether you can do it to the right extent, and just how long it’ll be before you start seeing resistance. With antibacterials that can be, say, "before the end of your clinical trials". There aren’t as many targets here as everyone would like, and none of them is going to be a gigantic blockbuster, but if you find one you can attack it with more confidence than usual.
Diabetes: pretty good, up to a point. There are a number of well-studied animal models here, and if your drug’s mechanism fits their quirks and limitations, then you should be in fairly good shape. Not by coincidence, this is also a pretty well-served area, by current standards. If you’re trying something off the beaten path, though, a route that STZ or db/db rats won’t pick up well, then things get harder. Look out, though, because this disease area starts to intersect with lipids, which (it bears saying again) We Don't Understand Too Well.
Obesity: deceptive in the extreme. There are an endless number of ways to get rats to lose weight. Hardly any of them, though, turn out to be relevant to humans or relevant to something humans would consider paying for. (Relentless vertigo would work to throw the animals off their feed, for example, but would probably be a loser in the marketplace. Although come to think of it, there is Alli, so you never know). And the problem here is always that there are so many overlapping backup redundant pathways for feeding behavior, so the chances for any one compound doing something dramatic are, well, slim. The expectations that a lot of people have for a weight-loss therapy are so high (thanks partly to years of heavily advertised herbal scams and bizarre devices), but the reality is so constrained.
Oncology: horrible, just horrible. No one trusts the main animal models in this area (rat xenografts of tumor lines) as anything more than rough, crude filters on the way to clinical trials. And no one should. Always remember: Iressa, the erstwhile AstraZeneca wonder drug from a few years back, continues to kick over all kinds of xenograft models. It looks great! It doesn’t work in humans! And it's not alone, either. So people take all kinds of stuff into the clinic against cancer, because what else can you do? That leads to a terrifying overall failure rate, and has also led to, if you can believe it, a real shortage of cancer patients for trials in many indications.
OK, those are some that I know about from personal experience. I’d be glad to hear from folks in other areas, like allergy/inflammation, about how their stuff rates. And there are a lot of smaller indications I haven’t mentioned, many of them under the broad heading of immunology (lupus, MS, etc.) whose disease models range from “difficult to run and/or interpret” on the high side all the way down to “furry little random number generators”.
+ TrackBacks (0) | Category: Animal Testing | Cancer | Cardiovascular Disease | Diabetes and Obesity | Drug Assays | Drug Development | Infectious Diseases | The Central Nervous System
January 29, 2008
I've had some questions about animal models and testing, so I thought I'd go over the general picture. As far as I can tell, my experience has been pretty representative.
There are plenty of animal models used in my line of work, but some of them you see more than others. Mice and rats are, of course, the front line. I’ve always been glad to have a reliable mouse model, personally, because that means the smallest amount of compound is used to get an in vivo readout. Rats burn up more hard-won material. That's not just because they're uglier, since we don’t dose based on per cent ugly, but rather because they're much larger and heavier. The worst were some elderly rodents I came across years ago that were being groomed for a possible Alzheimer’s assay – you don’t see many old rats in the normal course of things, but I can tell you that they do not age gracefully. They were big, they were mean, and they were, well, as ratty as an animal can get. (They were useless for Alzheimer's, too, which must have been their final revenge).
You can’t get away from the rats, though, because they’re the usual species for toxicity testing. So if your pharmacokinetics are bad in the rat, you’re looking at trouble later on – the whole point of tox screens is to run the compound at much higher than usual blood levels, which in the worst cases you may not be able to reach. Every toxicologist I’ve known has groaned, though, when asked if there isn’t some other species that can be used – just this time! – for tox evaluation. They’d much rather not do that, since they have such a baseline of data for the rat, and I can’t blame them. Toxicology is an inexact enough science already.
It’s been a while since I’ve personally seen the rodents at all, though, not that I miss them. The trend over the years has been for animal facilities to become more and more separated from the other parts of a research site – separate electronic access, etc. That’s partly for security, because of people like this, and partly because the fewer disturbances among the critters, the better the data. One bozo flipping on the wrong set of lights at the wrong time can ruin a huge amount of effort. The people authorized to work in the animal labs have enough on their hands keeping order – I recall a run of assay data that had an asterisk put next to it when it was realized that a male mouse had somehow been introduced into an all-female area. This proved disruptive, as you’d imagine, although he seemed to weather it OK.
Beyond the mouse and rat, things branch out. That’s often where the mechanistic models stop, though – there aren’t as many disease models in the larger animals, although I know that some cardiovascular disease studies are (or have been) run in pigs, the smallest pigs that could be found. And I was once in on an osteoporosis compound that went into macaque monkeys for efficacy. More commonly, the larger animals are used for pharmacokinetics: blood levels, distribution, half-life, etc. The next step for most compounds after the rat is blood levels in dogs – that’s if there’s a next step at all, because the huge majority of compounds don’t get anywhere near a dog.
That’s a big step in terms of the seriousness of the model, because we don’t use dogs lightly. If you’re getting dog PK, you have a compound that you’re seriously considering could be a drug. Similarly, when a compound is finally picked to go on toward human trials, it first goes through a more thorough rat tox screen (several weeks), then goes into two-week dog tox, which is probably the most severe test most drug candidates face. The old (and cold-hearted) saying is that “drugs kill dogs and dogs kill drugs”. I’ve only rarely seen the former happen (twice, I think, in 19 years), but I’ve seen the second half of that saying come true over and over. Dogs are quite sensitive – their cardiovascular systems, especially – and if you have trouble there, you’re very likely done. There’s always monkey data – but monkey blood levels are precious, and a monkey tox screen is extremely rare these days. I’ve never seen one, at any rate. And if you have trouble in the dog, how do you justify going into monkeys at all? No, if you get through dog tox, you're probably going into man, and if you don't, you almost certainly aren't.
+ TrackBacks (0) | Category: Animal Testing | Drug Assays | Drug Development | Pharmacokinetics | Toxicology
December 5, 2007
I’ve had reports that some of the animal rights activists are getting loud and lively down in Connecticut, to the point of harassing employees of some of the drug companies there. I remember some of this going on in the early 1990s in New Jersey, but this is the first big outbreak of this stuff I can remember since then. It seems to be part of their long-running (and to my mind misguided) campaign against Huntingdon Life Sciences.
I won’t go into the specifics of what I’ve been hearing, because I don’t want to encourage the people who do it. What I’ll say is that all this shouting-on-the-street and ominous-flyers-under-the-windshield-wiper stuff doesn’t do the animal folks any credit, not that they care. A rational debate on the issues involved would be just fine by me, and I don’t think it would take very long. But I doubt that my readership overlaps much with the kind of people who try to publicly intimidate scientists, and I further doubt that those people are open to rational debate. So I don’t see that happening here.
This, then, is just a heads-up for the researchers that do come here, most of whom work, directly or indirectly, with animal assays and the data they produce. Keep your eyes open. It wouldn’t be prudent to bet on all of these activists being harmless. Make sure you know who you’re letting into your building, and so on. The actions of True Believers can be difficult to anticipate, no matter what their cause.
And for my readers outside the industry – yes, we do indeed use animal testing. Mice take the brunt of it, followed by rats. It’s very difficult, expensive, and time-consuming, and we’d drop it in a minute if we could, just for those reasons. But no one knows enough about living organisms yet to do that. Not even close. For the foreseeable future, there’s no other way to do medical research, academic or industrial, basic or applied. Anyone who tells you differently is either misinformed or lying, and anyone who knows better but still tries to shut down the research is ethically deranged.
+ TrackBacks (0) | Category: Animal Testing
April 26, 2007
When I wrote about lousy animal models of disease a few days ago, there was a general principle at the back of my mind. (There generally is - my wife, over the years, has become accustomed to the sudden dolly-back panorama shots that appear unannounced in my conversation). It was: that a bad model system is much, much worse than no model system at all.
I've been convinced of that for a long time. When you have no model for what you're doing, you're forced to realize that you have no clear idea of what's going on. That's uncomfortable, to be sure, but you at least realize the situation. But when you have a poor model, the temptation to believe in it, at least partially, is hard to resist. Even if it's giving you the right answers at a rate worse than chance, you can still take (irrational) comfort in knowing that at least you're not flying blind - even as you do worse than the people who are.
There are many reasons to hold on to an underperforming model. Sometimes pride is the problem. I've seen groups that stuck with assays just because they'd invented them, even though the method was slowly wasting everyone's time. Never underestimate cluelessness, either. People will use worthless techniques for quite a while if they're not in the habit of checking to see if they're any good. But the biggest reason that useless procedures hang around, I'm convinced, is fear.
Fear, that is, of being left out in the middle of the field with no models, no insights, and no path forward at all. It's a bad feeling, rather scary, and rather difficult to explain to upper management if you're a project leader. Better, then, to hold on to the assays and models you have, to defend them even if you're not sure you trust them. With any luck, the project will end (although probably not happily) before the facts have to be faced. As Belloc advised children in other situations: "Always keep a-hold of Nurse / For fear of finding something worse."
+ TrackBacks (0) | Category: Animal Testing | Drug Assays | Drug Development
April 20, 2007
I was talking to someone the other day about animal models, and that got me to thinking: there are several therapeutic areas with reasonably good ones, but which indication has the most useless ones?
Naturally, just getting a compound into mice or what have you is going to tell you a lot that you'd never learn otherwise. (Try predicting oral absorption and let me know how well you make out, for example). That's the rough equivalent of a Phase I for animal studies. But finding an animal model of disease (the rough equivalent of Phase II) is a lot trickier. (One of the better ones I can think of is diabetes, and even there you have to work carefully, because a mutant db/db mouse really didn't get to its condition by the same path a human type II patient did).
By "worst animal model", I mostly mean "least predictive". There are some that are a major pain to set up and run, but give you some data that you can at least believe in a bit, and I wouldn't put them in the same class. My nominees are the traditional models that have been used for Alzheimer's. No rodent (heck, no other animal at all) develops the real AD pathology, so there's one strike against you. Years of work on mutants of all stripes haven't (to my knowledge) been able to get around that problem.
And the disease affects higher brain functions that are very poorly modeled in any of the small animals, which is strike two. When I used to work in the field, I would occasionally wonder about the relevance of watching a rat run into one half of his cage or another to a person forgetting an important appointment. Some of the techniques also have the lotsa-work factor going for them, like the infamous Morris swim maze, which needs its own special room, full of special equipment, and a full-time person trained in its complications to generate the data that you still don't quite trust.
So, that's my candidate. Readers are invited to submit their own - remember, arduous but trustworthy doesn't make the cut. The winner will be arduous and useless.
+ TrackBacks (0) | Category: Animal Testing
June 20, 2006
A comment to the last post asked a good question, one that occurs to everyone in the drug industry early in their career: how many useful drugs do we lose due to falsely alarming toxicity results in animals?
The answer is, naturally, that we don't know, and we can't. Not in the world as we know it, anyway. The only way to really find out would be to give compounds to humans that have shown major problems in rats and dogs, and that's just not going to happen. It's unethical, it's dangerous, and even if you didn't care about such things, the lawyers would find something you did care about and go after it.
But how often does this possibility come up? Well, all the time, actually. I don't think that the industry's failure rates are well appreciated by the general public. The 1990s showed that about one in ten compounds that entered Phase I made it through to the market, which is certainly awful enough. But rats and dogs kill compounds before they even get to Phase I, and the attrition is even steeper upstream: most projects that get started never produce a compound that reaches the clinic at all.
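For readers who like to see the arithmetic, here's a minimal sketch of how that attrition compounds multiplicatively across stages. The per-stage success rates below are hypothetical round numbers of my own, chosen only so the Phase I-to-market product lands near the one-in-ten figure quoted above; they are not data from any particular study.

```python
# Hypothetical per-stage success rates -- illustrative round numbers only,
# chosen so the Phase I -> market product lands near "one in ten".
# Not figures from any real study.

def overall_success(stage_rates):
    """Chance that a single candidate survives every stage in sequence:
    the product of the per-stage success probabilities."""
    p = 1.0
    for rate in stage_rates:
        p *= rate
    return p

stages = {
    "preclinical": 0.35,  # includes rat/dog tox attrition
    "phase_1": 0.60,
    "phase_2": 0.35,
    "phase_3": 0.60,
    "approval": 0.90,
}

clinic_to_market = overall_success(
    [stages[s] for s in ("phase_1", "phase_2", "phase_3", "approval")]
)
start_to_market = overall_success(stages.values())

print(f"Phase I entry -> market: {clinic_to_market:.1%}")     # ~11.3%
print(f"Preclinical start -> market: {start_to_market:.1%}")  # ~4.0%
```

The point of the multiplication is the one in the text: even if each individual hurdle looks survivable, chaining five of them together grinds most candidates away before they ever reach patients.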
So it's not like we take all these rat-killers on to humans, despite what the lunatic fringe of the pharma-bashers might think. Nope, these are the safe ones that go on to cause all the trouble. "Oh, but are they?" comes the question. "How do you know that your animal results aren't full of false green lights, too?" That's a worrisome question, but there are a lot of good reasons to think that the things we get rid of are mostly trouble. For all the metabolic and physiological differences between rodents, dogs, and humans, there are even more important similarities. The odds are that most things that will sicken one of those animals are going to land on a homologous pathway in humans. And the more basic and important the pathway is, the greater the chance (for the most part) that the similarities will still be strong enough to cause an overlap.
But there are exceptions in both directions. We know for a fact that there are compounds that are more toxic to various animal species than they are to humans, and vice versa. But we play the odds, because we have no choice. Whenever a compound passes animal tox, we hope that it won't be one of the rare ones that's worse in humans. But when a compound fails in the animals, there's simply no point in wondering if it might be OK if it were taken on. Because it won't be.
+ TrackBacks (0) | Category: Animal Testing | Clinical Trials | Toxicology
June 19, 2006
So, you're developing a drug candidate. You've settled on what looks like a good compound - it has the activity you want in your mouse model of the disease, it's not too hard to make, and it's not toxic. Everything looks fine. Except. . .one slight problem. Although the compound has good blood levels in the mouse and in the dog, in rats it's terrible. For some reason, it just doesn't get up there. Probably some foul metabolic pathway peculiar to rats (whose innards are adapted, after all, for dealing with every kind of garbage that comes along). So, is this a problem?
Well, yes, unfortunately it is. Rats are the most beloved animal of most toxicologists, you see. (Take a look at the tables in this survey, and note how highly the category "rodent toxicology" always places). More compounds have gone through rat tox than any other species, so there's a large body of experience out there. And the toxicologists just hate to go without it. Now, a lot of compounds have been in mice, for sure, but they just aren't enough of a replacement. The two rodent species don't line up as well as you'd think. And there's no other small animal with the relevance and track record of the noble rat. (People outside the field are sometimes surprised to learn that guinea pigs aren't even close - they get used in cardiovascular work, but that's about it).
So if your compound is a loser in the rat, you have a problem. You can pitch to go straight into larger animals, but that's going to be a harder sell without rat data. If your project is a hot one, with lots of expectations, you'll probably tiptoe into dog tox. But if it's a borderline one, having the rats drop out on you can kill the whole thing off. They use up a lot of compound compared to the mouse, they're more likely to bite your hand, and they're an order of magnitude less sightly. But respect the rat nonetheless.
+ TrackBacks (0) | Category: Animal Testing | Toxicology
May 9, 2006
Many readers will have heard of the years-long campaign in England against Huntingdon Life Sciences, a research animal breeding and testing company. (These tug-of-war articles from Wikipedia on HLS and the campaign against it are detailed overviews, as well as a good example of that site's simultaneous strengths and weaknesses).
Now shareholders of GlaxoSmithKline, one of Huntingdon's customers, are getting anonymous letters from activists, threatening them with release of (unspecified) personal information if they don't sell their shares. These are similar tactics to the ones these groups used when HLS was trying to list on the New York Stock Exchange last year. You'd think that these attacks would have slowed down after the recent convictions of several anti-Huntingdon activists for terrorist activities, but apparently not.
In that case, names and addresses of researchers and investors were listed on a web site as well, but the defendants claimed that they had nothing to do with the violence and harassment that often followed. This defense was undermined by the evidence of their own statements, some posted on the web and some caught on videotape, friendly things like "The police can't protect you!"
Now, if anyone has been writing passionate, outraged books and screenplays about the researchers who've been carrying on through all this, I've missed them. That's because no one likes the idea of animal experimentation - it's not going to sell popcorn at the multiplex, that's for sure. And, to be frank, it's not like those of us who design, order, and carry out the experiments are high-fiving each other about how many rats we've gone through, either.
It's true: I don't actually like the fact that every successful modern drug has risen to its place on top of a small mountain of dead animals. But not liking doesn't keep it from being true, and not liking it doesn't mean that I have an alternative, either. I don't. What the animal rights campaigners - the more rational ones, anyway - don't seem to realize is that tens of millions of dollars are waiting for the person who can come up with a way of not using so many mice, rats, and dogs. (The less rational ones wouldn't care even if they knew).
They're expensive, you know, animals are. We don't just have them running around in rooms with a bunch of straw on the floor. They live in facilities that are expensive to build and expensive to maintain, and you have to hire a lot of people whose only job is to take care of them. The anti-testing people seem to have visions of drug company employees cackling at the thought of getting to use more animals, when the truth is that we'd dump them in a minute if we could.
But here's the hard part: we can't. Not for now, and not for some time to come. We don't know enough biology to do it. As it stands, if you were able to model every relevant system in a rat, well enough to use your model for predictive screening, you'd have basically built a rat yourself. We get surprised all the time when our compounds go into animals, and every time it happens, it shows how little we really know.
No, the system we have isn't pretty, and it sure isn't cheap, but there's nothing yet that can replace it. In the meantime, the rats die or the people do. I don't have a hard time choosing.
+ TrackBacks (0) | Category: Animal Testing
January 18, 2005
I've mentioned before that one of our big problems in the drug industry seems to be finding compounds that work in man. I know, that sounds pretty obvious, but the statement improves when you consider the reasons why compounds fail. Recent studies have suggested that these days, fewer compounds are failing through some of the traditional pathways, like unexpectedly poor blood levels or severe toxicity.
In the current era, we seem to be getting more compounds that make it to man with reasonable pharmacokinetics (absorption from the gut, distribution and blood levels, etc.) and reasonably clean toxicity profiles. Not all of them, by any means - there are still surprises - but the stuff that makes it into the clinic these days is of a higher standard than it was twenty years ago. But that leaves the biggest single reason for clinical failure now as lack of efficacy against the disease.
That failure is the sum of several others. We're attacking some diseases that are harder to understand (Alzheimer's, for example), and we're doing so with some kind of mechanistic reason behind most of the compounds. Which is fine, as long as your understanding of the disease is good enough to be pretty sure that the mechanism is as important as you think it is. But the floor is deep with the sweepings of mechanistically compelling ideas that didn't work out at all in the clinic - dopamine D3 ligands for schizophrenia, leptin (and galanin, and neuropeptide Y) for obesity, renin inhibitors for hypertension. I'm tempted to add "highly targeted angiogenesis inhibitors for cancer" to the list. The old-fashioned way of finding a compound that works, no matter how, probably led to fewer efficacy breakdowns (for all that method's other problems).
Another basic problem is that our methods of evaluating efficacy, short of just giving the compound to a sick person and watching them for a while, aren't very reliable. If I had to pick the therapeutic area that's most in need of a revamp, I'd have to say cancer. The animal models there are numerous, rich in data, and will tell you things that you want to hear. It's just that they don't seem to do a very good job telling you about what's going to work in man. I will point out that Iressa, for one, works just fine in many of the putatively relevant models.
The journal Nature Reviews: Drug Discovery (which is probably the best single journal to read for someone trying to understand pharma research) published a provocative article a couple of years ago on this subject. The author, the late David Horrobin, compared some parts of modern drug discovery to Hesse's Glass Bead Game: complex, interesting, internally consistent, and of no relevance to the world outside. The journal got a lot of mail. Now the journal has promised a series of papers over the next few months on animal models and their relevance to human disease, and I'm looking forward to them. We need to hit the reset button on some of our favorites.
+ TrackBacks (0) | Category: Animal Testing | Drug Assays | Drug Development
August 1, 2004
The phrase "guinea pig" entered the language a long time ago as slang for "test animal", but I've yet to make a compound that's crossed a guinea pig's lips. Guinea pigs are still used for a few special applications, but since the beginning of my career, I've been surrounded (metaphorically!) by rats and mice.
Of the two, I prefer the mice. That's probably because they're smaller, and need correspondingly less effort from people like me to make enough drugs to dose them. The animal-handling folks prefer them for similar reasons: rats are more ornery, and they can fetch you a pretty useful bite if they're in the mood. When I was working in Alzheimer's disease, we had a small group of elderly rats that we were checking for memory problems. If that makes you think of rat-sized rocking chairs, think again. These were big ugly customers, feisty, wily critters that knew all the tricks and were no fun to deal with. Give me mice any day.
Of course, there are mice and there are mice. "Wild-type" mice are pretty hardy, but we don't use rodents captured out in the meadow. They're too variable, not to mention being loaded down with all sorts of interesting diseases. Every rodent we use in the drug industry comes from one of the big supply houses. Even our wild-types are a particular strain, identified with a catchy moniker like "C57 Black Swiss."
You're in good shape if you can use regular animals for your drug efficacy tests, but we often work on diseases which have no good rodent equivalents. People in diabetes projects, for example, often use mutant mice such as the db/db and ob/ob strains, which are genetically predisposed to put on weight. Eventually they can show some (but not all) of the signs of Type II diabetes. They can get pretty hefty - you'd better plan on making more compound if you're going to be testing things in those guys. Meanwhile, cancer researchers go through huge numbers of the so-called nude mice, a nearly hairless mutant variety with a compromised immune system. You've got to know what you're doing when you have a big group of those guys, because you can imagine how a contagious rodent disease could tear through them.
All the mutant animal lines are damaged in one form or another, since they're supposed to serve as a model of a disease. (Actually, most mutants in any animal population are damaged, since in a living system it's a lot easier to make a random change for the worse than it is to make one for the better.) They're just not as robust as the wild types. They need special handling, and they can't tolerate all the methods of compound dosing that a normal animal can. In some cases, you're restricted to the mildest, tamest vehicle solutions. (You know, the ones you can't get any of your compounds to go into.)
And there's always that nagging doubt about how valid your animal models might be. Some research areas have worked out a pretty good correlation between what works in people and what works in mice, but many of us are still stumbling around. The more innovative your work, the less of an idea you have about whether you're wasting your time. 'Twas ever thus.
+ TrackBacks (0) | Category: Animal Testing
July 25, 2002
I've had some e-mail asking if the diabetes drug I mentioned the other day is dead or not, and if not, why not. I don't have any direct contacts in the companies involved, not that they'd tell me all about it even if I did, but I can make some informed guesses. They'll illustrate what happens in these cases.
Readers in the industry will know that this situation (dramatically worse tox results in one species versus another) is a common one. You'd think that mice and rats, for example, would be pretty similar, but there are real differences at every level (from gross anatomy to molecular biology.)
To get off topic for a minute, that's one reason that I'm only partially impressed by figures showing how humans and (fill in the species) share (fill in some high percentage) of their DNA sequences. It's interesting, in one way, but the differences that do exist count for an awful lot.
Differences in toxicology between species, of course, are why the FDA (and drug companies themselves) want to see tox results from more than one species. The more, the better. Most of the time, it's rats and dogs, sometimes rats and monkeys, sometimes all three. Mice aren't considered quite as predictive a species - they're OK for rough-and-ready tox screening (and you need a lot less compound to do it that way), but not for real decision making.
That's why I'm sure that Novo and Dr. Reddy's weren't thrilled at seeing bladder cancer in the rats, with much less of it in the mice. If it had been the other way around, the path forward might have been a little bit easier, but it'd be hard no matter what. Their compound isn't dead yet, I assume. But what it'll need to go forward is an idea of what the mechanism of the carcinogenesis might be.
Is it the parent compound causing trouble, or some metabolite? Which one? How much of it is in the urine, and how long does it stay there? As mentioned the other day, do rats make more of any of the metabolites, or are they just more sensitive to them? And, the big question once those have been answered: what do we know about how humans might behave?
If the companies have a backup compound waiting in the wings, then we can assume that it's already in intense tox trials. If it's clean, then the original drug is dead, of course, and the backup goes on, more or less as if nothing had happened. But the prudent course would be to do the work outlined above anyway, so you can use it to show why you got the clean tox results you did on the new compound. That's the only way to feel really sure.
I've had animal rights people make the argument to me that such differences in toxicity prove that animal models are worthless. Untrue, untrue. Without testing on animals, no one would have known that this compound could cause bladder cancer in any species at all. The known differences between humans and various animals can then be used to estimate the risks if the compound proceeds.
If there were an in vitro way to determine the risk, we'd all be lining up to use it. It would, by definition, be much faster, much cheaper, and much easier to apply earlier in the project before all that time, money, and effort gets wasted. If PETA and their ilk would like to devote themselves to developing such tests, I'll cheer them on.
+ TrackBacks (0) | Category: Animal Testing | Drug Development | Toxicology
July 23, 2002
I've had some mail asking a good (and Frequently Asked) question: how good are the alternatives to animal testing? How close are we to not dosing animals to get toxicology information?
My short answer to the second question is, simultaneously, "A lot closer than we used to be" and "Not very close, for all that." The root of the problem is complexity. Toxicological properties are, to use the trendy word, emergent. You need the whole living system to be sure that you're seeing all there is to see.
You could try to mix and connect cell cultures, where the compound, after being exposed to one type of cell, then flowed off to another, and the original cells got a chance, if they'd been changed, to affect other different cell types. . .and so on. But by the time you got all the connections worked out, you'd have built an animal.
An example of an emergent tox problem is the recent withdrawal by Novo Nordisk of a clinical candidate that they were developing with the Indian company, Dr. Reddy's. Bladder cancer was the problem, seen in long-term dosing. But it's mostly a problem in rats - mice showed enough to notice, but it was the rat data that really set off the sirens.
There aren't a lot of good in vitro methods to predict carcinogenic potential. It's for sure that this compound had been through screens like the well-known Ames test for mutagenicity, for example. If it hadn't passed, it's unlikely that they would have carried the compound as far as they did. (I'll be writing more on the Ames test at a later date.)
Bladder cancer's a bit unusual. Playing the percentages, you'd have to guess that the problem isn't the compound itself, but some metabolite produced in the body which concentrates in the urine. And the rodent differences might suggest that rats produce more of this metabolite than mice do (or, alternatively, that they produce the same one, but that rat bladders are more sensitive to it.) Something like this would be the way to bet.
How much are you willing to bet, though? Are you willing to give people bladder cancer, or even put them at risk for it? (And are you willing to invite so many liability suits to land on you that you'll think it's snowing?) Your chances of getting through (and the chances of your customers!) depend on what the mechanism of the tox might be, and whether it operates in humans, as opposed to rats.
Novo and Dr. Reddy's are certainly going to take their time to thoroughly investigate what the problem might be, and whether it can be fixed. There was really no way to anticipate it without animal testing, though, since we don't have an in vitro system that mimics the bladder. Even if we did, they might have run their compound through it and gotten a green light, if the problem is in fact some later metabolic product. There's no substitute for the whole animal.
+ TrackBacks (0) | Category: Animal Testing | Toxicology
July 21, 2002
I mentioned the other day that I've usually had a good response when I tell people about what I do for a living. There are exceptions, though. A few years ago, my wife and I were walking through a shopping mall, when we were stopped by two scruffy teenage survey takers.
"Would you like to take a - "
" - survey about animal rights?"
That put a new light on things. "Actually, yes. . ."
So we split up, and started in on the questions. Was I familiar with the idea of animal testing? Yes indeed. Did I realize that the medicines I took had been tested on animals? I most certainly did. Was I in favor of this? Damn right I was.
That broke his stride a little bit, but he recovered. What would be my opinion of some medical product if I found out that it had been tested on animals? More favorable. Now my surveyor was bogging down, and he stopped to stare at me. "Well," I said, "I work in the pharmaceutical industry. I'm actually very happy when something I've made gets tested on animals, because that means it's something that might actually work."
I could see him briefly trying, and failing, to integrate that into his worldview. What, um, would my attitude be, er, about this list of products made by companies that had sworn to do no animal testing? My wife and her surveyor had reached this question, by a similar route, and I could hear her starting in on him: "I'm supposed to feel good because they're using stuff that's right out of the National Formulary? Because all the animal testing was done years ago by someone else, these people are more righteous?"
One of my wife's jobs, before we met, was in the lab at a cosmetics company, as fate would have it. Both of the teenagers stared at us, as if we'd pulled off latex masks and revealed ourselves as green-skinned aliens. "Any more questions?" A shaking of heads. We handed them back their lists of the elect and went on our way.
I'd like to think we left them, like Coleridge's wedding guest, sadder and wiser, but I'm sure we didn't. I think we left them wondering if they could just chuck our answers completely, since we were obviously pulling their legs. I mean, what other explanation was there?
+ TrackBacks (0) | Category: Animal Testing
February 14, 2002
That neuroscience business came up, I guess, because I have a minor background in it. I broke into the drug business doing work on schizophrenia, and followed that with several years on Alzheimer's.
If some of the people in the field read this - well, don't take it the wrong way - but I'd almost as soon have a job breaking concrete with my nose. The central nervous system is a very, very hard area to work in. That's partly because brain function is hideously complex: it's an interesting question whether a human brain even has enough ability to comprehend its own workings. But it's partly because a key part of the drug-testing cascade is often missing.
That's animal testing. (And it really is a key part - eventually I'll get into it with the anti-in vivo people, and I'll argue that position as long as it takes.) The problem with many central nervous system targets is that the animal models either don't exist, or (even worse) exist but are untrustworthy. That last situation is a killer: the models persist because there is a constituency that believes in their relevance. You'll be running into those folks over and over if you try to do without, and they're going to refuse to believe in your drug candidate unless it's been through the wildebeest swim maze, the platypus tail flick assay, whatever.
The models are so hard because you're often trying to affect behavior that is unique to humans - like remembering phone numbers. Whether a rat can remember not to run into the electrified part of the cage is of doubtful relevance. I think that there are many kinds of memory storage, and I don't believe that rats partake of the kinds that we're most worried about. It's true that there must be common molecular mechanisms for all types of memory (at some level) but messing with those processes indiscriminately (the only way we know how, in many cases) is a recipe for trouble. Let's not even get started on the topic of animal models for schizophrenia.
There's been a lot of progress in Alzheimer's the last two or three years. I enjoy reading about it, and I wish everyone working there all the luck in the world. I may need your compounds some day, guys, so keep banging away. But I'm glad that I'm not having to bang away with you.
+ TrackBacks (0) | Category: Alzheimer's Disease | Animal Testing