Corante

About this Author
College chemistry, 1983

Derek Lowe The 2002 Model

After 10 years of blogging. . .

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly (derekb.lowe@gmail.com) or find him on Twitter: Dereklowe

Chemistry and Drug Data:
Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigilatta
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

Category Archives

August 8, 2014

Mouse Models of Inflammation: Wrong Or Not?

Posted by Derek

I wrote here about a study that suggested that mice are a poor model for human inflammation. That paper created quite a splash - many research groups had experienced problems in this area, and this work seemed to offer a compelling reason for why that should happen.

Well, let the arguing commence, because now there's another paper out (also in PNAS) that analyzes the same data set and comes to the opposite conclusion. The authors of this new paper are specifically looking at the genes whose expression changed the most in both mice and humans, and they report a very high correlation. (The previous paper looked at the mouse homologs of human genes, among other things).

I'm not enough of a genomics person to say immediately who's correct here. Could they both be right: most gene and pathway changes are different in human and mouse inflammation, but the ones that change the most are mostly the same? But there's something a bit weird in this new paper: the authors report P values that are so vanishingly small that I have trouble believing them. How about ten to the minus thirty-fifth? Have you ever in your life heard of such a thing? In whole-animal biology, yet? That alone makes me wonder what's going on. Seeing a P-value the size of Planck's constant just seems wrong, somehow.
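For a sense of how a p-value that small can arise at all, here's a minimal sketch (purely illustrative; the gene count and correlation value are invented, not taken from either paper): with thousands of genes in the comparison, even a weak correlation drives the p-value to astronomical depths, which is why such a number says more about the sample size than about the strength of the effect.

```python
# Illustrative only: p-value for a Pearson correlation, using the
# large-n normal approximation to the t distribution.
import math

def pearson_pvalue(r: float, n: int) -> float:
    """Two-sided p-value for a Pearson correlation r over n points."""
    t = r * math.sqrt((n - 2) / (1 - r * r))
    # Normal-tail approximation: P(|Z| > t) = erfc(t / sqrt(2))
    return math.erfc(t / math.sqrt(2))

# A weak correlation (r = 0.2) across an assumed 4,000 genes:
print(pearson_pvalue(0.2, 4000))   # on the order of 1e-38
```

With enough genes in the comparison, p-values of this magnitude fall out of even modest correlations, which is one reason to read them as a property of the data set's size rather than as overwhelming evidence.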

Comments (28) + TrackBacks (0) | Category: Animal Testing

August 4, 2014

Animal Testing in the UK

Posted by Derek

A reader sends along news that a minister at the UK's Home Office has made it his goal to completely eliminate animal testing in the country. Norman Baker has been a longtime activist on the issue of animal rights, and is now in a position to do something about it.

Or is he? Reading the article, it seems to me to be one of these "Form a commission to study the proposals for the plan" things. The current proposal is to increase the publicly available details about what animals are being used for:

In a statement, Mr Baker said: "The coalition government is committed to enhancing openness and transparency about the use of animals in scientific research to improve public understanding of this work. It is also a personal priority of mine.

"The consultation on Section 24 of the Animals in Science Act has now concluded and we are currently analysing responses in preparation for pursuing potential legislative change."

So I don't see a ban on animal experimentation in the UK any time soon - which would demolish what's left of the pharma industry there, along with great swaths of the academic biological research world as well. I am not in favor of animal suffering, and would gladly punch anyone who is. But given the state of our knowledge, there really is no alternative in many cases. We shouldn't be doing frivolous experiments, and we should all be mindful of alternatives. But the anti-testing people should realize how few good alternatives there really are.

I've found, by the way, that many activists are convinced that such alternatives are a lot more useful than they really are. When I've had a chance to press them for details, things get hazy very quickly. Phrases like "cell cultures" and "computer models" get thrown around, but how these can substitute for whole-animal disease models and toxicology - that turns out to be not so clear.

Comments (49) + TrackBacks (0) | Category: Animal Testing

April 29, 2014

Mice Hate Men

Posted by Derek

The difficulty of doing good animal studies has come up here many times, such as the recent suggestion that many rodent facilities should adjust their thermostats.

Now comes word of yet another subtle effect that no one has ever controlled for: mice apparently react differently to the scent of human males as compared to human females. Specifically, we guys stress them out more, an effect that shows up in assays of pain and inflammation (and likely many others besides). Here's the paper in Nature Methods, and I think that anyone running rodent studies had better sit down and read it at the first opportunity. There could well be a lot of messed-up data out there, and straightening things out will not be a short job.

Comments (20) + TrackBacks (0) | Category: Animal Testing

April 8, 2014

A Call For Better Mouse Studies

Posted by Derek

Here's an article by Steve Perrin, at the ALS Therapy Development Institute, and you can tell that he's a pretty frustrated guy. With good reason.
[Chart: ALS TDI replication data]
That chart shows why. Those are attempted replicates of putative ALS drugs, and you can see that there's a bit of a discrepancy here and there. One problem is poorly run mouse studies, and the TDI has been trying to do something about that:

After nearly a decade of validation work, the ALS TDI introduced guidelines that should reduce the number of false positives in preclinical studies and so prevent unwarranted clinical trials. The recommendations, which pertain to other diseases too, include: rigorously assessing animals' physical and biochemical traits in terms of human disease; characterizing when disease symptoms and death occur and being alert to unexpected variation; and creating a mathematical model to aid experimental design, including how many mice must be included in a study. It is astonishing how often such straightforward steps are overlooked. It is hard to find a publication, for example, in which a preclinical animal study is backed by statistical models to minimize experimental noise.
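The sample-size recommendation in that list is the most mechanical one to follow. Here's a minimal sketch of the standard calculation, assuming a simple two-group comparison with a normal approximation; the effect size and variability figures below are invented for illustration, not taken from the ALS TDI guidelines:

```python
# How many mice per group to detect a given effect at a given power?
# Standard two-sample normal-approximation formula; all numbers illustrative.
from math import ceil
from statistics import NormalDist

def mice_per_group(delta: float, sigma: float,
                   alpha: float = 0.05, power: float = 0.8) -> int:
    """Animals per group for a two-sided, two-sample comparison."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # 0.84 for 80% power
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return ceil(n)

# Detecting a 10-day survival difference when the standard deviation is 15 days:
print(mice_per_group(delta=10, sigma=15))   # 36 per group
```

Note how quickly the requirement grows as the expected effect shrinks relative to the noise; underpowered studies of a dozen animals per arm are exactly how false positives get published.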

All true, and we'd be a lot better off if such recommendations were followed more often. Crappy animal data is far worse than no animal data at all. But the other part of the problem is that the mouse models of ALS aren't very good:

. . .Mouse models expressing a mutant form of the RNA binding protein TDP43 show hallmark features of ALS: loss of motor neurons, protein aggregation and progressive muscle atrophy.

But further study of these mice revealed key differences. In patients (and in established mouse models), paralysis progresses over time. However, we did not observe this progression in TDP43-mutant mice. Measurements of gait and grip strength showed that their muscle deficits were in fact mild, and post-mortem examination found that the animals died not of progressive muscle atrophy, but of acute bowel obstruction caused by deterioration of smooth muscles in the gut. Although the existing TDP43-mutant mice may be useful for studying drugs' effects on certain disease mechanisms, a drug's ability to extend survival would most probably be irrelevant to people.

A big problem is that the recent emphasis on translational research in academia is going to land many labs right into these problems. As the rest of that Nature article shows, the ways for a mouse study to go wrong are many, various, and subtle. If you don't pay very close attention, and have people who know what to pay attention to, you could be wasting time, money, and animals to generate data that will go on to waste still more of all three. I'd strongly urge anyone doing rodent studies, and especially labs that haven't done or commissioned very many of them before, to read up on these issues in detail. It slows things down, true, and it costs money. But there are worse things.

Comments (19) + TrackBacks (0) | Category: Animal Testing | The Central Nervous System

December 18, 2013

Lab Mice Are Being Kept Too Cold, Apparently

Posted by Derek

Earlier, the unsuitability of mice in inflammation models was shown in a paper that should have been noted by anyone in the field. Just last month, a paper in Science detailed the problems with many animal studies (mouse and otherwise), particularly the smaller ones, which can suffer from bad statistics and poor protocols.

Now we have this, from PNAS. The authors, from the Roswell Park Cancer Institute and the EPA, say that standard rodent facility conditions are actually causing unintended chronic physiological stress:

We show here that fundamental aspects of antitumor immunity in mice are significantly influenced by ambient housing temperature. Standard housing temperature for laboratory mice in research facilities is mandated to be between 20–26 °C; however, these subthermoneutral temperatures cause mild chronic cold stress, activating thermogenesis to maintain normal body temperature. When stress is alleviated by housing at thermoneutral ambient temperature (30–31 °C), we observe a striking reduction in tumor formation, growth rate and metastasis. . .Overall, our data raise the hypothesis that suppression of antitumor immunity is an outcome of cold stress-induced thermogenesis. Therefore, the common approach of studying immunity against tumors in mice housed only at standard room temperature may be limiting our understanding of the full potential of the antitumor immune response.

As mentioned in that last line, the problem seems to be with the adaptive immune system - this effect is driven by CD8+ T cells in almost every case, and sometimes by changes in CD4+ cells as well. Overall, housing mice at the recommended temperatures, which are on the cool side, seems to promote a general immunosuppression, which I think it's safe to say is not a factor that many people are taking into account. The animals have similar core body temperatures, but the extra burden of maintaining that in the cooler rooms is tipping some sort of balance - keeping all those immune systems running is apparently energetically costly, and they get downregulated.

This study looked at several sorts of tumorigenesis, but only for solid tumors, so the effects on leukemia, etc., are still unknown. You'd have to think, though, that several other disease areas could be affected by this situation as well - for example, how much of the uselessness of mice in inflammation models is caused by these changes? I'm glad to see these things being uncovered, but worried about how long it's taken to uncover them: what else are we missing?

Comments (32) + TrackBacks (0) | Category: Animal Testing | Cancer

November 26, 2013

Of Mice (Studies) and Men

Posted by Derek

Here's an article from Science on the problems with mouse models of disease.

For years, researchers, pharmaceutical companies, drug regulators, and even the general public have lamented how rarely therapies that cure animals do much of anything for humans. Much attention has focused on whether mice with different diseases accurately reflect what happens in sick people. But Dirnagl and some others suggest there's another equally acute problem. Many animal studies are poorly done, they say, and if conducted with greater rigor they'd be a much more reliable predictor of human biology.

The problem is that the rigor of animal studies varies widely. There are, of course, plenty of well-thought-out, well-controlled ones. But there are also a lot of studies with sample sizes that are far too small, that are poorly randomized, unblinded, etc. As the article mentions (just to give one example), sticking your gloved hand into the cage and pulling out the first mouse you can grab is not an appropriate randomization technique. They aren't lottery balls - although some of the badly run studies might as well have used those instead.
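For contrast, here's what a defensible allocation looks like, as a minimal sketch (the cohort size and arm names are invented): shuffle the whole cohort once with a recorded seed, assign animals to arms from the shuffled order, and keep the allocation list away from whoever scores the outcomes.

```python
# A sketch of proper randomized allocation - the opposite of grabbing
# whichever mouse comes to hand. Cohort size and arm names are made up.
import random

def randomize(mouse_ids: list[str], arms: list[str], seed: int = 0) -> dict[str, str]:
    """Randomly assign each mouse to a study arm, balanced across arms."""
    rng = random.Random(seed)     # fixed, recorded seed so the allocation is auditable
    shuffled = mouse_ids[:]
    rng.shuffle(shuffled)
    return {m: arms[i % len(arms)] for i, m in enumerate(shuffled)}

cohort = [f"m{i:03d}" for i in range(20)]
allocation = randomize(cohort, ["vehicle", "treatment"])
print(sum(arm == "treatment" for arm in allocation.values()))   # 10 per arm
```

The point isn't the code, which is trivial; it's that the assignment is made before anyone handles an animal, is reproducible from the seed, and can be concealed from the people doing the scoring.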

After lots of agitating and conversation within the National Institutes of Health (NIH), in the summer of 2012 [Shai] Silberberg and some allies went outside it, convening a workshop in downtown Washington, D.C. Among the attendees were journal editors, whom he considers critical to raising standards of animal research. "Initially there was a lot of finger-pointing," he says. "The editors are responsible, the reviewers are responsible, funding agencies are responsible. At the end of the day we said, 'Look, it's everyone's responsibility, can we agree on some core set of issues that need to be reported' " in animal research?

In the months since then, there's been measurable progress. The scrutiny of animal studies is one piece of an NIH effort to improve openness and reproducibility in all the science it funds. Several institutes are beginning to pilot new approaches to grant review. For an application based on animal results, this might mean requiring that the previous work describe whether blinding, randomization, and calculations about sample size were considered to minimize the risk of bias. . .

Not everyone thinks that these new rules are going to work, though, or are even the right way to approach the problem:

Some in the field consider such requirements uncalled for. "I am not pessimistic enough to believe that the entire scientific community is obfuscating results, or that there's a systematic bias," says Joseph Bass, who studies mouse models of obesity and diabetes at Northwestern University in Chicago, Illinois. Although Bass agrees that mouse studies often aren't reproducible—a problem he takes seriously—he believes that's not primarily because of statistics. Rather, he suggests the reasons vary by field, even by experiment. For example, results in Bass's area, metabolism, can be affected by temperature, to which animals are acutely sensitive. They can also be skewed if a genetic manipulation causes a side effect late in life, and researchers try to use older mice to replicate an effect observed in young animals. Applying blanket requirements across all of animal research, he argues, isn't realistic.

I think, though, that there must be some minimum requirements that could be usefully set, even with every field having its own peculiarities. After all, the same variables that Bass mentions above - which are most certainly real ones - could affect studies in completely different fields. This, of course, is one of the biggest reasons that drug companies restrict access to their animal facilities. There's always a separate system to open those doors, and if you don't have the card to do it, you're not supposed to be in there. Pace the animal rights activists, that's not because it's so terrible in there that the rest of us wouldn't be able to take it. It's because they don't want anyone coming in there and turning on lights, slamming doors, sneezing, or doing any of four dozen less obvious things that could screw up the data. This stuff is expensive, and it can be ruined quite easily. It's like waiting for a four-week-long soufflé to rise.

That brings up another question - how do the animal studies done in industry compare to those done in academia? The Science article mentions some work done recently by Lisa Bero of UCSF. She was looking at animal studies on the effects of statins, and found, actually, that industry-sponsored research was less likely to find that the drug under investigation was beneficial. The explanation she advanced is a perfectly good one: if your animal study is going to lead you to spend the big money in the clinic, you want to be quite sure that you can believe the data. That's not to say that there aren't animal studies in the drug industry that could be (or could have been) run better. It's just that there are, perhaps, more incentives to make sure that the answer is right, rather than just being interesting and publishable.

Doesn't the same reasoning apply to human studies? It certainly should. The main complicating factor I can think of is that once a company, particularly a smaller one, has made the big leap into human clinical trials, it also has an incentive to find something that's good enough to keep going with, and/or good enough to attract more investment. So perverse incentives are, I'd guess, more of a problem once you get to human trials, because it's such a make-or-break situation. People are probably more willing to get the bad news from an animal study and just groan and say "Oh well, let's try something else". Saying that after an unsuccessful Phase II trial is something else again, and takes a bit more sang-froid than most of us have available. (And, in fact, Bero's previous work on human trials of statins seems to show various forms of bias at work, although publication bias is surely not the least of them).

Comments (37) + TrackBacks (0) | Category: Animal Testing

February 13, 2013

Mouse Models of Inflammation Are Basically Worthless. Now We Know.

Posted by Derek

We go through a lot of mice in this business. They're generally the first animal that a potential drug runs up against: in almost every case, you dose mice to check pharmacokinetics (blood levels and duration), and many areas have key disease models that run in mice as well. That's because we know a lot about mouse genetics (compared to other animals), and we have a wide range of natural mutants, engineered gene-knockout animals (difficult or impossible to do with most other species), and chimeric strains with all sorts of human proteins substituted back in. I would not wish to hazard a guess as to how many types of mice have been developed in biomedical labs over the years; it is a large number representing a huge amount of effort.

But are mice always telling us the right thing? I've written about this problem before, and it certainly hasn't gone away. The key things to remember about any animal model are that (1) it's a model, and (2) it's in an animal. Not a human. But it can be surprisingly hard to keep these in mind, because there's no way for a compound to become a drug other than going through the mice, rats, etc. No regulatory agency on Earth (OK, with the possible exception of North Korea) will let a compound through unless it's been through numerous well-controlled animal studies, for short- and long-term toxicity at the very least.

These thoughts are prompted by an interesting and alarming paper that's come out in PNAS: "Genomic responses in mouse models poorly mimic human inflammatory diseases". And that's the take-away right there, which is demonstrated comprehensively and with attention to detail.

Murine models have been extensively used in recent decades to identify and test drug candidates for subsequent human trials. However, few of these human trials have shown success. The success rate is even worse for those trials in the field of inflammation, a condition present in many human diseases. To date, there have been nearly 150 clinical trials testing candidate agents intended to block the inflammatory response in critically ill patients, and every one of these trials failed. Despite commentaries that question the merit of an overreliance of animal systems to model human immunology, in the absence of systematic evidence, investigators and public regulators assume that results from animal research reflect human disease. To date, there have been no studies to systematically evaluate, on a molecular basis, how well the murine clinical models mimic human inflammatory diseases in patients.

What this large multicenter team has found is that while various inflammation stresses (trauma, burns, endotoxins) in humans tend to go through pretty much the same pathways, the same is not true for mice. Not only do they show very different responses from humans (as measured by gene up- and down-regulation, among other things), they show different responses to each sort of stress. Humans and mice differ in what genes are called on, in their timing and duration of expression, and in what general pathways these gene products are found. Mice are completely inappropriate models for any study of human inflammation.

And there are a lot of potential reasons why this turns out to be so:

There are multiple considerations to our finding that transcriptional response in mouse models reflects human diseases so poorly, including the evolutional distance between mice and humans, the complexity of the human disease, the inbred nature of the mouse model, and often, the use of single mechanistic models. In addition, differences in cellular composition between mouse and human tissues can contribute to the differences seen in the molecular response. Additionally, the different temporal spans of recovery from disease between patients and mouse models are an inherent problem in the use of mouse models. Late events related to the clinical care of the patients (such as fluids, drugs, surgery, and life support) likely alter genomic responses that are not captured in murine models.

But even with all the variables inherent in the human data, our inflammation response seems to be remarkably coherent. It's just not what you see in mice. Mice have had different evolutionary pressures over the years than we have; their heterogeneous response to various sorts of stress is what's served them well, for whatever reasons.

There are several very large and ugly questions raised by this work. All of us who do biomedical research know that mice are not humans (nor are rats, nor are dogs, etc.) But, as mentioned above, it's easy to take this as a truism - sure, sure, knew that - because all our paths to human go through mice and the like. The New York Times article on this paper illustrates the sort of habits that you get into (emphasis below added):

The new study, which took 10 years and involved 39 researchers from across the country, began by studying white blood cells from hundreds of patients with severe burns, trauma or sepsis to see what genes are being used by white blood cells when responding to these danger signals.

The researchers found some interesting patterns and accumulated a large, rigorously collected data set that should help move the field forward, said Ronald W. Davis, a genomics expert at Stanford University and a lead author of the new paper. Some patterns seemed to predict who would survive and who would end up in intensive care, clinging to life and, often, dying.

The group had tried to publish its findings in several papers. One objection, Dr. Davis said, was that the researchers had not shown the same gene response had happened in mice.

“They were so used to doing mouse studies that they thought that was how you validate things,” he said. “They are so ingrained in trying to cure mice that they forget we are trying to cure humans.”

“That started us thinking,” he continued. “Is it the same in the mouse or not?”

What's more, the article says that this paper was rejected from Science and Nature, among other venues. And one of the lead authors says that the reviewers mostly seemed to be saying that the paper had to be wrong. They weren't sure where things had gone wrong, but a paper saying that murine models were just totally inappropriate had to be wrong somehow.

We need to stop being afraid of the obvious, if we can. "Mice aren't humans" is about as obvious a statement as you can get, but the limitations of animal models are taken so much for granted that we actually dislike being told that they're even worse than we thought. We aren't trying to cure mice. We aren't trying to make perfect disease models and beautiful screening cascades. We aren't trying to perfectly match molecular targets with diseases, and targets with compounds. Not all the time, we aren't. We're trying to find therapies that work, and that goal doesn't always line up with those others. As painful as it is to admit.

Comments (50) + TrackBacks (0) | Category: Animal Testing | Biological News | Drug Assays | Infectious Diseases

November 1, 2012

Lab Animals Wiped Out in Hurricane Sandy

Posted by Derek

When I mentioned the people working in the research animal facilities before Hurricane Sandy, I had no idea that this was going to happen: thousands of genetically engineered and/or specially bred rodents were lost from an NYU facility due to flooding. The Fishell lab appears to have lost its entire stock of 2,500 mice, representing 10 years of work. Very bad news indeed for the people whose careers were depending on these.

Comments (34) + TrackBacks (0) | Category: Animal Testing | Current Events

November 22, 2011

The Mouse Trap

Posted by Derek

If you haven't seen it, this series by Daniel Engber at Slate, on the use of the mouse as a laboratory workhorse, is excellent. (And I'm not just saying that because he references some of my disparaging comments about xenograft models, although that did give me a chance to teach my kids what the word "acerbic" means).

He has a lot of good points, which will resonate with people who do research (and inform those who don't). For example, writing on the ubiquity of C57 black mice, he asks:

So one dark-brown lab mouse came to stand in for every other lab mouse, just as the inbred lab mouse came to stand in for every other rodent, and the rodent came to stand in for dogs and cats and rabbits and rhesus monkeys, the standard models that themselves stood in for all Animalia. But where is Black-6 taking us? How much can we learn from a single mouse?

A lot - but enough? That's always the background question with animal models. My take has long been that they're tricky, not always reliable, and still, infuriatingly, essential. The problem is that even things like xenograft models are terrible only on the absolute scale. On the relative scale - compared to all the other animal models for new oncology drugs - they're pretty good. And compared to not putting your drugs into an animal at all before going to humans, well. . .

Comments (18) + TrackBacks (0) | Category: Animal Testing | Cancer

January 17, 2011

Reboxetine Doesn't Work. But That's Not the Real Problem.

Posted by Derek

Some time ago, I took nominations for Least Useful Animal Models. There were a number of good candidates, many of them from the CNS field. A recent report makes me think that these are even stronger contenders than I thought.

The antidepressant reboxetine (not approved in the US, but sold in a number of other countries by Pfizer) was recently characterized by a German meta-analysis of the clinical data as "ineffective and potentially harmful". Its benefits versus placebo (and SSRI drugs) have been overestimated, and its potential for harm underestimated. It was approved in Europe in 1997, and provisionally by the FDA in 1999, although that was later rolled back when more studies came in that showed lack of efficacy.

Much has been made of the fact that Pfizer had not published many of the studies they conducted on the drug. These do seem, however, to have been available to regulatory authorities, and were the basis for the FDA's decision not to grant full approval. As that BMJ link discusses, though, there's often not a clear pathway, especially in the EU, for a regulatory agency to go back and re-examine a previous decision based on efficacy (as opposed to safety).

So the European regulatory agencies can be faulted for not revisiting their decision on this drug in a better (and quicker) fashion, and Pfizer can certainly be faulted for letting things stand (in the face of evidence that the drug was not effective). All this is worrisome, but these are problems that are being dealt with. Since 2007, for example, trials for the FDA have been required to be posted at clinicaltrials.gov, although the nontransparency of older data can make it hard to compare newer and older treatments in the same area.

What's not being dealt with as well is an underlying scientific problem. As this piece over at Scientific American makes plain, reboxetine, although clinically ineffective, works just fine in all the animal models:

And this is a rough moment for scientists studying depression. Why? Because reboxetine works beautifully in our animal models. It’s practically a poster-child antidepressant. It produces acute effects in tests such as forced-swim tests and tail-suspension tests (which use changes in struggle as a measure of antidepressant efficacy). It produces neurogenesis in the hippocampus, which is thought to be correlated with antidepressant effects. When behavioral pharmacologists are doing comparisons between older antidepressants and newer ones, reboxetine is often used as a positive control, a drug known to have an effect in the behavioral test of choice.

But it doesn’t work in patients. And patients are what matters. Now, scientists are stuck with a difficult question: What went wrong?

A very good question, and one without any very good answers. And this certainly isn't the first CNS drug to show animal model efficacy but do little good in people. So, how much is the state of the art advancing? Are we getting anywhere, or just doing the same old thing?

Comments (49) + TrackBacks (0) | Category: Animal Testing | Clinical Trials | Regulatory Affairs | The Central Nervous System | The Dark Side

August 16, 2010

Cancer Cells: Too Unstable For Fine Targeting?

Posted by Derek

The topic of new drugs for cancer has come up repeatedly around here - and naturally enough, considering how big a focus it is for the industry. Most forms of cancer are the very definition of "unmet medical need", and the field has plenty of possible drug targets to address.

But we've been addressing many of them in recent years, with incremental (but only rarely dramatic) progress. It's quite possible that this is what we're going to see - small improvements that gradually add up, with no big leaps. If the alternative is no improvement at all, I'll gladly take that. But some other therapeutic areas have perhaps made us expect more. Infectious disease, for example: the early antibiotics looked like magic, as patients that everyone fully expected to die started asking when dinner was and when they could go home. That's what everyone wants to see, in every disease, and having seen it (even fleetingly), we all want to have it happen again.

And it has happened for a few tumor types, most notably childhood leukemia. But we definitely need to add more to the list, and it's been a frustrating business. Believe me, it's not as if we in the business are aiming for incremental improvements, a few weeks or months here and there. Every time we go after a new target in oncology, we hope that this one is going to be - for some sort of cancer - the thing that completely knocks it down.

We may be thinking about this the wrong way, though. For many years now, there have been people looking at genetic instability in tumor cells. (See this post from 2002 - yes, this blog has been around that long!) If this is a major component of the cancerous phenotype, it means that we could well have trouble with a target-by-target approach. (See this post by Robert Langreth at Forbes for a more recent take). And here's a PubMed search - as you can see, there's a lot of literature in this field, and a fair amount of controversy, too.

That would, in fact, mean that cancer shares something with infectious disease, and not, unfortunately, the era of the 1940s when the bacteria hadn't figured out what we could do to them yet. No, what it might mean is that many tumors might be made of such heterogeneous, constantly mutating cells that no one targeted approach will have a good chance of knocking them down sufficiently. Since that's exactly what we see, this is a hypothesis worth taking seriously.

There are other implications for drug discovery. Anyone who's worked in oncology knows that the animal tumor models we tend to use - xenografts of human cell lines - are not particularly predictive of success. "Necessary but nowhere near sufficient" is about as far as I'd be willing to go. Could that be because these cells, however vigorously they grow, have lost (or never had) that rogue instability that makes the wild-type tumors so hard to fight? I haven't seen a study of genetic instability in these tumor lines, but it would be worth checking.

What we might need, then, are better animal models to start with - here's a review on some efforts to find them. From a drug discovery perspective, we might want to spend more time on oncology targets that work outside the cancer cells themselves. And clinically, we might want to spend more time studying combinations of agents right from the start, and less on single-drug-versus-standard-of-care studies. The disadvantage there is that it can be hard to know where to start - but we need to weigh that against the chances of a single agent actually working.

Comments (49) + TrackBacks (0) | Category: Animal Testing | Cancer | Clinical Trials | Drug Development

April 26, 2010

Charles River Buys WuXi

Email This Entry

Posted by Derek

I don't think we saw this one coming: Charles River Labs has announced that they're buying WuXi PharmaTech. They're paying about a 28% premium over Friday's closing stock price - Charles River's CEO will stay on, and WuXi's founder (Li Ge) will serve as executive VP under him.

Charles River, which is strong in the animal-testing end of the business, has apparently decided that WuXi is one of their biggest competitors (I'd agree) and has decided to try to stake out a leading position in the whole contract-research space. It's interesting to me that the folks at WuXi bought into this reasoning as well, although (since they're a publicly traded company here in the US) a lucrative stock offer can be its own argument. One now wonders, though, about the company's statements on re-staffing some of their US labs when economic conditions improve. . .

Comments (15) + TrackBacks (0) | Category: Animal Testing | Business and Markets | Drug Assays | Drug Development

March 30, 2010

Animal Studies: Are Too Many Never Published At All?

Email This Entry

Posted by Derek

A new paper in PLoS Biology looks at animal model studies reported for the treatment of stroke. The authors use statistical techniques to try to estimate how many have gone unreported. From a database with 525 sources, covering 16 different attempted therapies (which together come to 1,359 experiments and 19,956 animals), they find that only a very small fraction of the publications (about 2%) report no significant effects, which strongly suggests that there is a publication bias at work here. The authors estimate that there may well be around 200 experiments that showed no significant effect and were never reported, whose absence would account for around one-third of the efficacy reported across the field. In case you're wondering, the therapy least affected by publication bias was melatonin, and the one most affected seems to be administering estrogens.

I hadn't seen this sort of study before, and the methods they used to arrive at these results are interesting. If you plot the precision of the studies (Y axis) versus the effect size (X axis), you should (in theory) get a triangular cloud of data. As the precision goes down, the spread of measurements across the X-axis increases, and as the precision goes up, the studies should start to converge on the real effect of the treatment, whatever that might be. (In this study, the authors looked only at reported changes in infarct size as a measure of stroke efficacy). But in many of the reported cases, the inverted-funnel shape isn't symmetrical - and every single time that happens, it turns out that the gaps are in the left-hand side of the triangle, the not-as-precise and negative-effect regions of the plots. This doesn't appear to be just due to less-precise studies tending to show positive effects for some reason - it strongly suggests that there are negative studies that just haven't been reported.
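That funnel logic can be made concrete with a minimal simulation - all the numbers and names below are my own illustrative assumptions, not the paper's data. If only statistically significant results get written up, the published literature overstates the effect even when the true effect is zero, and the studies that go missing are precisely the imprecise, negative ones from the left-hand side of the funnel:

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.0   # assume the therapy actually does nothing
N_STUDIES = 2000

def simulate_study():
    """One hypothetical animal study: small n, noisy effect estimate."""
    n = random.randint(5, 50)              # animals per study
    se = 1.0 / (n ** 0.5)                  # precision improves with n
    effect = random.gauss(TRUE_EFFECT, se) # observed effect size
    z = effect / se                        # test statistic
    return effect, se, z

studies = [simulate_study() for _ in range(N_STUDIES)]

# All studies: effect estimates center on the true (null) effect.
all_effects = [effect for effect, se, z in studies]

# Publication filter: only nominally "positive" results (z > 1.96) appear.
published = [effect for effect, se, z in studies if z > 1.96]

print(f"mean effect, all studies:    {statistics.mean(all_effects):+.3f}")
print(f"mean effect, published only: {statistics.mean(published):+.3f}")
print(f"published: {len(published)} of {N_STUDIES}")
```

The published-only mean comes out well above zero even though nothing works, which is the same asymmetry the authors detect in the real stroke literature.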

The authors point out that applying their statistical techniques to reported human clinical studies is more problematic, since smaller (and thus less precise) trials may well involve unrepresentative groups of patients. But animal studies are much less prone to this problem.

The loss of experiments that showed no effect shouldn't surprise anyone - after all, it's long been known that publishing such papers is just plain harder than publishing ones that show something happening. There's an obvious industry bias toward only showing positive data, but there's an academic one, too, which affects basic research results. As the authors put it:

These quantitative data raise substantial concerns that publication bias may have a wider impact in attempts to synthesise and summarise data from animal studies and more broadly. It seems highly unlikely that the animal stroke literature is uniquely susceptible to the factors that drive publication bias. First, there is likely to be more enthusiasm amongst scientists, journal editors, and the funders of research for positive than for neutral studies. Second, the vast majority of animal studies do not report sample size calculations and are substantially underpowered. Neutral studies therefore seldom have the statistical power confidently to exclude an effect that would be considered of biological significance, so they are less likely to be published than are similarly underpowered “positive” studies. However, in this context, the positive predictive value of apparently significant results is likely to be substantially lower than the 95% suggested by conventional statistical testing. A further consideration relating to the internal validity of studies is that of study quality. It is now clear that certain aspects of experimental design (particularly randomisation, allocation concealment, and the blinded assessment of outcome) can have a substantial impact on the reported outcome of experiments. While the importance of these issues has been recognised for some years, they are rarely reported in contemporary reports of animal experiments.

And there's an animal-testing component to these results, too, of course. But lest activists seize on the part of this paper that suggests that some animal testing results are being wasted, they should consider the consequences (emphasis below mine):

The ethical principles that guide animal studies hold that the number of animals used should be the minimum required to demonstrate the outcome of interest with sufficient precision. For some experiments, this number may be larger than those currently employed. For all experiments involving animals, nonpublication of data means those animals cannot contribute to accumulating knowledge and that research syntheses are likely to overstate biological effects, which may in turn lead to further unnecessary animal experiments testing poorly founded hypotheses.

This paper is absolutely right about the obligation to have animal studies mean something to the rest of the scientific community, and it's clear that this can't happen if the results are just sitting on someone's hard drive. But it's also quite possible that for even some of the reported studies to have meant anything, they would have had to use more animals in the first place. Nothing's for free.

Comments (19) + TrackBacks (0) | Category: Animal Testing | Cardiovascular Disease | Clinical Trials | Drug Assays | The Scientific Literature

March 3, 2010

Fat Rats Make Poor Test Subjects?

Email This Entry

Posted by Derek

Well, here's a brow-furrowing paper, courtesy of PNAS. The authors, from the National Institute on Aging, contend that most laboratory rodents are overfed, under-stimulated, and (to use their phrase) "metabolically morbid". This affects their suitability as control and experimental animals for a wide variety of assays.

There seem to be effects across the board - the immune system, glucose and lipid handling, cardiovascular numbers, susceptibility to tumors, cognitive performance. The list is a long one, and the root causes seem to be ad libitum feeding and lack of exercise. The beneficial effects of some drugs in rodent models, the authors propose, could be due (at least in part) to their ability to reverse the artificial conditions that the animals are maintained under, and the application of these results to the real world could be doubtful. (The same concerns don't apply nearly as much to larger animals such as dogs and primates. They're handled differently, and their physiologies don't seem to be altered, or at least nowhere near as much).

Of course, some people live similar lifestyles, as far as the lack of activity and ad libitum feeding goes, so I have to wonder whether these rodents might actually model such patients better than one might wish. But overall, this seems like a useful wake-up call to the animal testing community, especially in some therapeutic areas. On a domestic level, I'm thinking through the implications of this for the two guinea pigs my children have - they seem to sit around and eat all the time. The guinea pigs, I mean, not the kids.

Comments (9) + TrackBacks (0) | Category: Animal Testing

August 13, 2009

Animal Testing: A View From the Labs

Email This Entry

Posted by Derek

Why do we test drugs on animals, anyway? This question showed up in the comments section from a lay reader. It's definitely a fair thing to ask, and you'd expect that we in the business would have a good answer. So here it is: because for all we know about biochemistry, about physiology and about biology in general, living systems are still far too complex for us to model. We're more ignorant than we seem to be. The only way we can find out what will happen if we give a new compound to a living creature is to give it to some of them and watch carefully.

That sounds primitive, and I suppose it is. We don't do it in a primitive manner, though. We watch with all the tools of our trade - remote-control physiological radio transmitters, motion-sensing software hooked up to video cameras, sensitive mass spectrometry analysis of blood, of urine, and whatever else, painstaking microscopic inspection of tissue samples, whatever we can bring to bear. But in the end, it all comes down to dosing animals and waiting to see what happens. That principle hasn't changed in decades, just the technology we use to do it.

No isolated enzymes can yet serve as a model for what can happen in a single real cell. And no culture of cells can recapitulate what goes on in a real organism. The signaling, the feedback loops, the interconnectedness of these systems are (so far) too much for us to handle. We keep discovering new pathways all the time, things that no model would have included because we didn't even know that they were there. The end is not yet in sight, occasional newspaper headlines to the contrary.

We do use all those things as filters before a compound even sees its first rodent. In a target-driven approach, which is the great majority of the industry, if a compound doesn't work on an isolated protein, it doesn't go on to the cell assay. If it doesn't work on the cells, it doesn't go on to animals. (And if it kills cells, it most certainly doesn't go on to the animals, unless it's some blunderbuss oncology agent of the old school). The great majority of compounds made in this business have never been given to so much as one mouse, and never will.

So what are we looking for when we finally do dose animals? We're waiting to see if the compound has the effect we're hoping for, first off. Does it lower blood pressure, slow or stop the growth of tumors, or cure viral infections? Doing these things requires having sick animals, of course. But we also give the drug candidates to healthy ones, at higher doses and for longer periods of time, in order to see what else the compounds might do that we don't expect. Most of those effects are bad - I'd casually estimate 99% of the time, anyway - and many of them will stop a drug candidate from ever being developed. The more severe the toxic effect, the greater the chance that it's based on some fundamental mechanism that will be common to all animals. In some cases we can identify what's causing the trouble, once we've seen it, and once in a great while we can use that information to argue that we can keep going, that humans wouldn't be at the same risk. But this is very rare - we generally don't know enough to make a persuasive case. If your compound kills mice or kills rats, your compound is dead, too.

I've lost count of the number of compounds I've worked on that have been pulled due to toxicity concerns; suffice it to say that it's a very common thing. Every time it's been something different, and it's often not for any of the reasons I feared beforehand. I've often said here that if you don't hold your breath when your drug candidate goes into its first two-week tox testing, then you haven't been doing this stuff long enough.

Here's the problem: giving new chemicals to animals to see if they get sick (and making animals sick so that we can see if they get better) are not things that are directly compatible with trying to keep animals from suffering. Ideally, we would want to do neither of those things. Fortunately, several factors all line up in the same direction to keep things moving toward that.

For one thing, animal testing is quite expensive. Only human testing is costlier. In this case, ethical concerns and capitalist principles manage to line up very well indeed. Doing assays in vitro is almost invariably faster and cheaper, so whenever we can confidently replace a direct animal observation with an assay on a dish, plate, or chip, we do. All that equipment I mentioned above has also cut down on the number of animals needed, and that trend is expected to continue as our measurements become more sensitive.

So things are lined up in the right direction. Any company that found a reliable way to eliminate any significant part of its animal testing would immediately find itself in a better competitive position.

And for the existing tests, it's also fortunate that unhappy animals give poor data. We want to observe them under the most normal conditions possible, not with stress hormones running through their systems, and a great deal of time and trouble (and money) goes toward that end. (In this case, it's scientific principles that line up with ethical ones). Diseased animals are clearly going to be in worse shape than normal ones, but in these situations, too, we try to minimize all the other factors so we're getting as clear a read as possible on changes in the disease itself.

So that's my answer: we use animals because we have (as yet) no alternative. And our animal assays prove that to us over and over by surprising us with things we didn't know, and that we would have had no other opportunity to learn. We'd very much like to be able to do things differently, since "differently" would surely mean "faster and more cheaply". None of us enjoy it when our compounds sicken healthy animals, or have no effect on sick ones. The wasted time and effort alone are enough to make any drug discoverer feel that way. There are billions of dollars waiting to be picked up by anyone who finds a better way.

Comments (82) + TrackBacks (0) | Category: Animal Testing | Pharma 101

August 11, 2009

Animal Rights, You Say?

Email This Entry

Posted by Derek

Novartis has had trouble for years with animal rights activists, and now things are getting nastier than ever:

Novartis CEO Daniel Vasella says the people who burned down his holiday home and defiled his family's graves are not criminals but "terrorists" beyond dialogue.

In an interview with the SonntagsBlick newspaper, the 55-year-old chief executive said the attacks have changed his life and that more needs to be done to rein in the animal-rights extremists believed responsible for the "wicked" acts.

Last week Vasella's home in Austria was set on fire. In July his mother's urn was stolen and his dead 19-year-old sister's grave was desecrated. Crosses bearing his name and that of his wife were placed in a Chur cemetery. Workers' cars have been torched and angry graffiti sprayed on walls. . .

"How far do things have to go before you can speak of terrorism?" Vasella told the newspaper.

I'd say that's far enough, definitely. If that's not being done with intent to terrorize, then what? One idiotic part of the whole business is that the protesters seem to be trying to get Novartis to stop working with Huntingdon Life Sciences, the British animal testing company. (Similar tactics have been used elsewhere). But Novartis says that they currently have no relationship at all with HLS, and haven't for several years.

Mere statements of dull fact, though, won't make a dent in the self-righteousness of the sorts of people who think that spray-painting gravestones is a blow for justice.

Comments (27) + TrackBacks (0) | Category: Animal Testing | Why Everyone Loves Us