About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly or find him on Twitter: Dereklowe


In the Pipeline


December 11, 2007

A Bad Assay: Better Than None?


Posted by Derek

Man, do we ever have a lot of assays in this business. Almost every drug development project has a long list of them, arranged in what we call a screening cascade. You check to make sure that your new molecule hits your protein target, then you try it on one or more living cell lines. There are assays to check its potency against related targets (some of which you may want, most of which you don't), and assays to measure the properties of the compound itself, like how well it dissolves. Then it's on to blood levels in animals, and finally to a disease model in one species or another.

Not all these assays are of equal importance, naturally. And not all of them do what they’re supposed to do for you. Some processes are so poorly understood that we’re willing to try all sorts of stuff to get a read on them. I would put the Caco-2 assay firmly in that category.

Caco ("cake-o")-2 cells are a human colon cancer cell line. When you grow them in a monolayer, they still remember to form an “inside” and an “outside” – the two sides of the layer act differently, and they pump compounds across from one side to the other. This sort of active transport is very widespread in living systems, and it’s very important in drug absorption and distribution, and from a practical standpoint we don’t know much about it at all. Membranes like the gut wall or the lining of the brain’s blood vessels do this sort of thing all the time, and pump out things they don’t like. Cancer cells and bacteria do it to compounds they judge to be noxious, which covers a lot of the things we try to use to kill them. Knowing how to avoid this kind of thing would be worth billions of dollars, and would give us a lot more effective drugs.

The Caco-2 cell assay is an attempt to model some of this process in a dish, so you don’t have to find out about it in a mouse (or a human). You put a test amount of your compound on one side of the layer of cells, and see how much of it gets through to the other side – then you try it in reverse, to see how much of that flow was active transport and how much was just passive leak-through diffusion. The ratio between those two amounts is supposed to give you a read on how much of a substrate your compound is for these efflux pumps, particularly a widespread one called P-glycoprotein.
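As a rough sketch, the arithmetic behind that ratio looks like this (a hypothetical illustration: the permeability numbers and helper names are invented, and the cutoff of ~2 is only a common rule of thumb, not anything from the post):

```python
# Hypothetical sketch of the Caco-2 efflux-ratio arithmetic.
# All numerical values below are invented for illustration.

def apparent_permeability(dq_dt, area_cm2, c0):
    """Papp = (dQ/dt) / (A * C0): rate of compound appearing in the
    receiver chamber, divided by membrane area and donor concentration."""
    return dq_dt / (area_cm2 * c0)

def efflux_ratio(papp_b_to_a, papp_a_to_b):
    """Basolateral-to-apical flux over apical-to-basolateral flux.
    Ratios well above ~2 are commonly read as active efflux (e.g. P-gp)."""
    return papp_b_to_a / papp_a_to_b

# Made-up compound: crosses slowly apical->basolateral, quickly in reverse.
a_to_b = apparent_permeability(dq_dt=0.5, area_cm2=1.12, c0=10.0)
b_to_a = apparent_permeability(dq_dt=4.0, area_cm2=1.12, c0=10.0)
print(round(efflux_ratio(b_to_a, a_to_b), 1))  # 8.0 -> likely efflux substrate
```

The same membrane-area and concentration terms cancel out of the ratio, which is why the assay reports it rather than either raw permeability alone.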

I have seen examples in the literature where this assay appears to have given useful data. Unfortunately, I cannot recall ever having participated in such a project. Every time I've worked with Caco-2 data, it's been a spread of numbers that didn't correlate well with gut absorption, didn't correlate well with brain levels, and didn't help to prioritize anything. That may be unfair – after all, I've had people tell me that it's worked out for them – but I think that even in those cases people had to run quite a few compounds through before they believed that the assay was really telling them something. The published data on these things can turn out to be a small, shiny heap on the summit of a vast pile of compost - the unimpressive or uninterpretable attempts that never show up in any journal, anywhere.

You can think of several reasons for these difficulties, and there are surely more that none of us have thought of yet. These are colon cells, not cells from the small intestine (where the great majority of absorption takes place) or from the blood-brain barrier. They're from a carcinoma line, not a normal population (which is why they're still happily living in dishes), which means they're far removed from their origins, to boot. (It's well known that many cell lines lose some of their characteristics and abilities as you culture them. They're not getting the stimuli they were in their native environment, and they shed functions and pathways as these are no longer being called for.) There's also the problem that they're human cells, but they're often used to correlate with data from rodent models. Our major features overlap pretty well (most mouse poisons are human poisons, for example), but the fine details can be difficult to line up.

But people still run the Caco-2 assay. I think that now it’s mostly done in the hope, mostly forlorn, that this time it’ll turn out to model something crucial to this particular drug series. A representative list of compounds that have already been through the pharmacokinetic studies is tried, and the results are graphed against the blood levels. And, for the most part, the plots look like soup thrown against a wall – again. The quest to explain these things continues. . .

Comments (21) + TrackBacks (0) | Category: Drug Assays | Drug Development


1. Kay on December 11, 2007 11:09 AM writes...

We are fine. We work hard, we are smart, and we never close down assays because of bad data. The Wall Street Journal just does not understand our business. The WSJ should pay attention to industries that have real problems instead of picking on us.

Permalink to Comment

2. AJA on December 11, 2007 11:15 AM writes...

I have been working in the pharma industry with the Caco-2 assay for over eight years now. I started the assay at "the wonder factory", where the chemists preferred it over the much quicker MDR1-MDCK cell-line, which is (1.) a dog-derived cell-line, (2.) a kidney cell-line.

For all the problems/reasons you noted, we were about to trash it, when the FDA came out with a new drug interaction guidance document in September 2006 (due to be finalized any day now).

It turns out that the FDA thinks it is the best/most relevant way to determine gut permeability and PGP involvement. So like it or not, the FDA likes Caco and MDCK, and they will be around a while longer. While the document is not yet finalized, they want PGP drug interaction studies to be run in a validated cell line -- either MDCK or Caco-2.

In its defense, the Caco-2 cell line does develop nice tight junctions and expresses high levels of the PGP efflux transporter (and unfortunately other transporters). Both characteristics make it a somewhat useful tool for characterizing compounds.

Permalink to Comment

4. Analytical Scientist on December 11, 2007 12:21 PM writes...

Later in development (where I work), caco-2 is a very valuable benchmark of intestinal permeability in the context of the FDA's Biopharmaceutical Classification System. It's a powerful conceptual guide to the design of good formulations.

Permalink to Comment

5. Rich Apodaca on December 11, 2007 12:32 PM writes...

Keeping a bad assay running is the low-stress, comfortable path in the short run, but a disaster in the long run.

The case against Caco-2 could also be made against liver microsomal stability, Ames, hERG, CYP, and a host of other assays, depending on how they're run and the question being asked.

Getting the number back is just the beginning.

Permalink to Comment

6. Dalida on December 11, 2007 1:46 PM writes...

I've seen some papers on a PAMPA-based assay for passive membrane permeability. Heard that these are not reliable. Wondering if the most successful drugs are derived from those with high passive permeability? Obviously even if they're 'pumped' out of the cell, they can still diffuse back in. Still I hear more about Caco as being the bench standard.

Permalink to Comment

7. MikeEast on December 11, 2007 3:30 PM writes...

Rich - amen, brother.

Many of these assays only give us the illusion of knowledge.

Permalink to Comment

8. Russ on December 11, 2007 4:44 PM writes...

Yet another example of looking for something under a streetlight - not because it's there, but because that's where you can see.

Permalink to Comment

9. Polymer Bound on December 11, 2007 9:22 PM writes...

I like these assays. I don't think there's a good correlation with bioavailability because there are so many other issues, besides passive permeability and transport, which can affect it. Some major factors that are sometimes difficult to consider are physical properties of the compounds and formulation. If you plot permeability vs. bioavailability or brain/plasma ratio, you aren't going to get a line, but I imagine if you remove compounds with poor physical properties (as we med chemists are skilled at making), you probably would.

In practice, we use the assay to weed out compounds to push forward into a terminal experiment in rodents, but not to rank order them. Exceptions occur, but in general we find it useful. And much higher throughput than in vivo experiments.

The business of drug discovery is a multivariable problem, and that equation is very difficult to solve... so much so that it's a high profile paper if one spots a pretty good trend.

The bioavailability problem is a really interesting one on its own. I wonder how many perfectly good compounds have been thrown out based on in vivo data that could have been salvaged with a different salt form/formulation. I've seen the formulation folks work miracles on a turd of a clinical candidate, although I imagine it would be a huge resource sink to get them involved in early development PK experiments.

Kay: I don't think the problem is the assay... it's the interpreter of the data. We humans have a tendency to oversimplify.

Dalida: I think it's a matter of in/out -- compounds with a high "in" rate can overwhelm the "out" pump.

Permalink to Comment

10. Kay on December 12, 2007 7:43 AM writes...

Derek: thank you for bringing this to the surface in such an elegant fashion. I don't see anyone above defending this and related models (Polymer Bound only appears to like them because they are easy, not because they predict). It's important for the investment community to realize that we continue to use things that we know are not predictive.

If our co-workers could also admit to the fog of ignorance we play in, then our industry may gain the perspective and rigor needed to pull out of our shared tailspin.

Permalink to Comment

11. MTK on December 12, 2007 8:32 AM writes...


I'll disagree with your assessment that no one defended the assay. Polymer Bound thinks it's useful, more than just easy, Analytical deemed it valuable in the proper setting, and AJA even pointed out some good features. Finally, and literally the final word in many respects, the FDA thinks it has some value.

Not speaking specifically to Caco-2, but to assays in general: how predictive do you want them to be? The answer is always 100%, but that's not realistic. The actual answer is that it depends on where in the drug discovery/development cycle the assay is used. An assay or test of something already in the clinic, I would want to be pretty darn predictive. Something in a high throughput, hit-to-lead setting? Eh. At that point the assay should be viewed more as a filter than necessarily a predictive tool.

So, if you use the assay to keep progressing certain compounds while dropping others, it has value. Even something with relatively low predictive value, if it's used properly can speed the overall process. For example, let's say 50% of the compounds that look good in assay X later flame out due to the very thing assay X is supposed to test for. Arrrgh. That's terrible. Why are we doing it?! However, if 90% of those that looked bad in assay X would have flamed out also, well, that doesn't look too bad now does it, if it kept you from chasing down something with a much lower chance of success.
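That argument can be put into back-of-the-envelope numbers (all counts below are invented, simply extending the hypothetical percentages in the comment):

```python
# Sketch of the "imperfect filter still has value" argument, using the
# comment's hypothetical rates: 50% of assay-X passers flame out anyway,
# and 90% of failers would have flamed out too. All counts are invented.

n = 1000
passers = 400                              # compounds that "look good" in assay X
success_if_passed = passers // 2           # 50% of passers survive -> 200
success_if_failed = (n - passers) // 10    # only 10% of failers would survive -> 60

with_filter = success_if_passed                        # progress only the 400 passers
without_filter = success_if_passed + success_if_failed # progress all 1000 compounds

print(with_filter, without_filter)  # 200 260
```

Filtering sacrifices 60 of 260 eventual successes, but cuts the compounds carried into expensive downstream work from 1000 to 400 - a much better success rate per compound progressed, which is the point being made above.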

Permalink to Comment

12. Kay on December 13, 2007 8:01 AM writes...

MTK, your low-science approach is contributing to the demise of our industry. Non-limiting examples: 1) the FDA guidance is not related at all to discovery, it is only focused on drug-drug interaction potential, and FDA admits that clinical relevance of the data is not understood; 2) you are using assays that have not been validated with either a blinded or prospective study design. You apparently have never been subjected to the scorn associated with presenting non-blinded and non-prospective data. Stay where you are, it must be comfortable.

Permalink to Comment

13. MTK on December 13, 2007 2:13 PM writes...

Thanks for the advice Kay. I'll keep it in mind.

As for my "low-science" approach I thought I had made it very clear that what I was advocating was that things have their place. In fact I said the FDA sees "some" value. That PB and AJA pointed out some good things about Caco-2. This is a far cry from your statement that no one was defending it.

Let me give you another "low-science" example. Actually make it "no science". I have a car. It's not the best car, it can't haul a lot, it doesn't go fast. But that's not what I use it for. I use it to get me to work and back. That's all. As long as I understand that, it has value. I'm not trying to win the Daytona 500 here.

Most things in life are a compromise. So if what's important at a certain time in the discovery/development cycle is throughput then you may have to sacrifice some other things. That means you accept the shortcomings in exchange for some other things. It also means that you use other methods at the appropriate time to fill in those gaps. I would argue that it is not "low science", it's smart science. When you're looking at a new synthesis, you don't set up a reaction at a time and then try to optimize it. You set up as many different reactions as you can, screen them, then optimize the most promising one. That's all I'm saying about early assays.

I hope that makes things clearer.

Permalink to Comment

14. anon on December 13, 2007 4:23 PM writes...

And maybe somebody will be so kind as to make this article available? At best, I can't read it for another 11 months!

Permalink to Comment

15. Polymer Bound on December 13, 2007 11:23 PM writes...

Clearly we have a difference of perspective, but I find it amusing that you'd insult the way we discover drugs.

It's a tool. Like logP, solubility, plasma protein binding, hERG assays, hepatocyte incubations, and a million other things that may or may not be useful in drug discovery. If your assay numbers don't give you what you want, it's the responsibility of the project team to trump that data with something in vivo.

I'm growing tired of people saying that pharma doesn't know what it's doing when they don't have any answers of what we -should- do. I think substituting validated in vitro assays for in vivo ones is the right thing to do, and if you have constructive criticism to suggest otherwise, I'd be happy to listen.

Permalink to Comment

16. Kay on December 14, 2007 7:16 AM writes...

The divergence is simple: Derek is more likely than average to admit that the emperor has no clothes. If more reports contained caveats such as "this assay has not been validated versus clinical data" or "it is well established that animal PK does not model human PK a priori" or "this rule is known to generate large numbers of false negatives, so its use may cause financial damage" then it would be clear/transparent/fair to management.

Use of assays that do not predict in-human characteristics causes us to run in circles while employed and to increase our chances of becoming unemployed.

Permalink to Comment

17. MTK on December 14, 2007 9:06 AM writes...

Polymer Bound,

I assume you are addressing Kay in comment #15, yes?

Permalink to Comment

18. Bucky on December 14, 2007 5:16 PM writes...

With regard to Caco-2 and PAMPA, I've had mixed results at best with these assays. As mentioned by someone else earlier, the point at which data from these assays is used in the discovery continuum is a critical consideration. I still think that in lead compound identification and refinement they (i.e. CYP, hERG, logP) can be very helpful in prioritizing synthetic efforts.

As for some of the other comments about assays that don't predict in-human characteristics... the only true way to know if something works in humans, is to test in humans. And even then, you're probably going to experience variability in response.

Right now, the industry is literally synthesizing 1000s of molecules just to get to Phase I. If you can use assays like the aforementioned that offer at least moderate probabilities of reducing that number, you stand to see more compounds entering the clinic in a more productive manner. I think our industry does have real productivity problems, but am optimistic that things will improve. Clearly, expanding the use of translatable, disease-relevant endpoints in preclinical discovery and development is important - in fact I think this is why the FDA is trying to promote the Critical Path Initiative, isn't it? This isn't something that's going to improve overnight, but as more emphasis is placed on such endpoints, used in conjunction with existing prioritization strategies, discovery efforts should become more productive.

Permalink to Comment

19. TFox on December 14, 2007 5:18 PM writes...

Y'know, last time I looked at Caco-2 prediction of fraction absorbed, the correlations were rough, and the in vitro data looked awful... until you plotted the error bars on the in vivo data. You have no hope of predicting something accurately if you can't measure it accurately, no matter how perfect your predictor is.

Permalink to Comment

20. Polymer Bound on December 14, 2007 8:14 PM writes...

MTK: yep.

Kay: Of the many reasons people get laid off, I don't think any of them have to do with any given assay.

Did I or anyone give the impression that Caco is completely predictive of the PK profile of a compound? Maybe I'm just lucky, but where I work we discuss the caveats and shortcomings of assays all the time.

--"this assay has not been validated versus clinical data" or "it is well established that animal PK does not model human PK a priori" or "this rule is known to generate large numbers of false negatives, so its use may cause financial damage"--

If your management doesn't know any one of these things, you're working for the wrong company. Leave now.

Permalink to Comment

21. Kay on December 16, 2007 6:26 AM writes...

Our industry is imploding because we are not generating enough new and useful products. All of the myriad problems (generics, worst relationship with the public that money can buy, etc) could be ignored if sufficient cash could be viewed on the horizon.

We cannot feed ourselves in part because too many workers generate/believe/discuss/derive income from data sources that damage the organization. We have the same percentage of goldbrickers in our ranks as is found in the general population. Perhaps it is worse in the large organizations because hiding out is easier.

Permalink to Comment

