Corante

About this Author
[Author photos: college chemistry, 1983; the 2002 model; after 10 years of blogging. . .]

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly at derekb.lowe@gmail.com or find him on Twitter: Dereklowe.


In the Pipeline


March 12, 2012

The Brute Force Bias


Posted by Derek

I wanted to return to that Nature Reviews Drug Discovery article I blogged about the other day. There's one reason the authors advance for our problems that I thought was particularly well stated: what they call the "basic research/brute force" bias.

The ‘basic research–brute force’ bias is the tendency to overestimate the ability of advances in basic research (particularly in molecular biology) and brute force screening methods (embodied in the first few steps of the standard discovery and preclinical research process) to increase the probability that a molecule will be safe and effective in clinical trials. We suspect that this has been the intellectual basis for a move away from older and perhaps more productive methods for identifying drug candidates. . .

I think that this is definitely a problem, and it's a habit of thinking that almost everyone in the drug research business has, to some extent. The evidence that there's something lacking has been piling up. As the authors say, given all the advances over the past thirty years or so, we really should have seen more of an effect in the signal/noise of clinical trials: we should have had higher success rates in Phase II and Phase III as we understood more about what was going on. But that hasn't happened.

So how can some parts of a process improve dramatically, yet important measures of overall performance remain flat or decline? There are several possible explanations, but it seems reasonable to wonder whether companies industrialized the wrong set of activities. At first sight, R&D was more efficient several decades ago, when many research activities that are today regarded as critical (for example, the derivation of genomics-based drug targets and HTS) had not been invented, and when other activities (for example, clinical science, animal-based screens and iterative medicinal chemistry) dominated.

This gets us back to a topic that's come up around here several times: whether the entire target-based, molecular-biology-driven style of drug discovery (which has been the norm since roughly the early 1980s) has been a dead end. Personally, I tend to think of it in terms of hubris and nemesis. We convinced ourselves that we were smarter than we really were.

The NRDD piece offers several reasons for this development, which also ring true. Even in the 1980s, there were fears that the pace of drug discovery was slowing, and a new approach was welcome. A second reason is a really huge one: biology itself has been on a reductionist binge for a long time now. And why not? The entire idea of molecular biology has been incredibly fruitful. But we may be asking more of it than it can deliver.

. . .the ‘basic research–brute force’ bias matched the scientific zeitgeist, particularly as the older approaches for early-stage drug R&D seemed to be yielding less. What might be called 'molecular reductionism' has become the dominant stream in biology in general, and not just in the drug industry. "Since the 1970s, nearly all avenues of biomedical research have led to the gene". Genetics and molecular biology are seen as providing the 'best' and most fundamental ways of understanding biological systems, and subsequently intervening in them. The intellectual challenges of reductionism and its necessary synthesis (the '-omics') appear to be more attractive to many biomedical scientists than the messy empiricism of the older approaches.

And a final reason for this mode of research taking over - and it's another big one - is that it matched the worldview of many managers and investors. This all looked like putting R&D on a more scientific, more industrial, and more manageable footing. Why wouldn't managers be attracted to something that looked like it valued their skills? And why wouldn't investors be attracted to something that looked as if it could deliver more predictable success and more consistent earnings? R&D will give you gray hairs; anything that looks like taming it will find an audience.

And that's how we find ourselves here:

. . .much of the pharmaceutical industry's R&D is now based on the idea that high-affinity binding to a single biological target linked to a disease will lead to medical benefit in humans. However, if the causal link between single targets and disease states is weaker than commonly thought, or if drugs rarely act on a single target, one can understand why the molecules that have been delivered by this research strategy into clinical development may not necessarily be more likely to succeed than those in earlier periods.

That first sentence is a bit terrifying. You read it, and part of you thinks "Well, yeah, of course", because that is such a fundamental assumption of almost all our work. But what if it's wrong? Or just not right enough?

Comments (64) + TrackBacks (0) | Category: Drug Development | Drug Industry History


COMMENTS

1. Rick Wobbe on March 12, 2012 8:27 AM writes...

Herein lies the advantage of the facile, beautifully non-falsifiable claim that we have "picked all the low-hanging fruit". It allows us to deny that the technology failed; it's just that the problems up and got tougher. Stupid misbehaving nature, doesn't it know we've mastered it?!

There is a case to be made that the easiest challenges have been addressed, though I haven't seen any objective measure of "easiness" other than the circular argument that the first problems solved are necessarily the easiest. But be that as it may, the creative new technologies introduced over the past 30 years were supposed to have addressed that. Where is the evidence that these technologies have risen to that challenge, evidence that can withstand the sobering indictment this paper suggests?


2. Anonymous on March 12, 2012 8:47 AM writes...

Don't most if not all drug candidates coming from target-based programs need to pass phenotypic filters such as cellular assays and in vivo animal models too before they enter clinical trials?


3. Curious Wavefunction on March 12, 2012 8:49 AM writes...

Cogent points. As I say in my post about the article, "As we constrain ourselves to accurate, narrowly defined features of biological systems, it deflects our attention from the less accurate but broader and more relevant features. The lesson here is simple; we are turning into the guy who looks for his keys under the street light only because it's easier to see there." Ditto for the whole deal about genomics-based drug discovery. We are increasingly falling into the trap of what we can do the most rationally and systematically, using the most cutting-edge techniques. And that's deflecting our attention from cruder, old-fashioned, cheaper but potentially more effective strategies (like classical pharmacology).


4. startup on March 12, 2012 8:51 AM writes...

That "fundamental assumption" turned out to be wrong for genomics, didn't it? Why should it be right elsewhere?


5. imarx on March 12, 2012 8:59 AM writes...

Just curious - what is "iterative medicinal chemistry"? I haven't heard that term before.


6. PPedroso on March 12, 2012 9:01 AM writes...

But my question is:

Did we forget the empirical methods of 30 years ago?
We are still using them, just at a later stage of R&D, so I am afraid that part of the answer resides in the fact that it was easier to discover drugs 30 years ago because the easy ones had not yet been discovered.


7. johnnyboy on March 12, 2012 9:21 AM writes...

What PPedroso said. Comparing today's R&D productivity with that of 30-40 years ago does not make sense. The tools of 30-40 years ago are still in use today; we have just added new ones.

Amid all those (undeniably interesting) arguments over the correct diagnosis (or post-mortem?) for today's decreasing R&D returns, I'd like to see some actual proposals for a treatment. If we really went wrong in some significant way, how exactly are we supposed to correct this? Doing less HTS? Ignoring genomics?


8. bbooooooya on March 12, 2012 9:31 AM writes...

'what is "iterative medicinal chemistry"? I haven't heard that term before'

Sure you have, though I think the 'iterative' is usually pronounced as the 'k' in 'knife'.


9. MTK on March 12, 2012 9:33 AM writes...

I won't really disagree with anything stated in the post or the comments, but I did find some things interesting.

a) "even in the 1980's, there were fears that the pace of drug discovery was slowing". If that is true, then who's to say that the move toward target-based strategies hasn't slowed the decline of that pace? It is possible, then, that we'd be in an even worse spot now if the empirical methods had been continued, right?

b) Wavefunction's comment about searching for the keys under the light, because it's the only place we can see, is probably right. But isn't the solution then not to look in the dark, but rather to illuminate a greater area? That is presumably the whole idea of target-based discovery. We may be doing a pretty bad job at the illumination, but that doesn't mean the idea is wrong; perhaps it's the execution.

I only bring these up because I don't think it's just management that finds empirical methods unsatisfying, but also scientists. I like to have hypotheses and test those hypotheses. Crawling blindly on all fours in the dark really isn't fun.


10. PPedroso on March 12, 2012 9:39 AM writes...

I have just read the article, and they sorta seem to address this question of mine, so I may have to rephrase it.

Perhaps what was hanging low was not the drugs but the diseases. The easy ones were done in the past, and now we have some difficult ones like degenerative CNS diseases and cancer.
It is easy to have a hypercholesterolemia animal model and treat it with statins (which correlate well with the clinic), but try to do that with Alzheimer's.


11. milkshake on March 12, 2012 9:47 AM writes...

@5: just a fancier name for trial-and-error. An approximate but self-refining computational routine that takes its own output as the input for the next round of refinement is called an iterative method.
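A classic example of an iterative method in that numerical sense is Newton's method, where each output is fed back in as the next input. A minimal sketch (the function, starting point, and tolerance are arbitrary, purely for illustration):

```python
def newton_sqrt(a, x0=1.0, tol=1e-12, max_iter=100):
    """Approximate sqrt(a) by Newton's method: each round refines
    the previous estimate by feeding it back in as the new input."""
    x = x0
    for _ in range(max_iter):
        x_next = 0.5 * (x + a / x)  # output becomes next round's input
        if abs(x_next - x) < tol:   # stop once refinement stalls
            return x_next
        x = x_next
    return x

print(newton_sqrt(2.0))  # converges to ~1.4142135623730951
```

Lead optimization is "iterative" in the same loose sense: each round of compounds is designed from the assay results of the previous round.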

Management-based delusion: HTS and target-driven drug discovery was supposed to work, and it does when used correctly, but not at the expense of discarding animal-based phenotypic models. The compound screening funnels have more often than not been akin to a garbage compactor: crude and arbitrary methods to whittle down the number of compounds. While some criteria made sense, many others (e.g., Caco-2 permeability, microsomal stability) are nearly worthless. But management likes to play the numbers game because it looks good in a PowerPoint presentation; it seems to promise investors that if you stuff such-and-such a number of compounds into the funnel, you are guaranteed to generate 4 INDs a year. And of course we will not do unprofitable and narrow-indication drugs; we will focus on blockbusters instead and have a new Lipitor every year.


12. CMCguy on March 12, 2012 9:59 AM writes...

Lest we forget, it was not only managers and investors who wanted to better control R&D; I think most scientists wanted to avoid the graying of their own hair as well. Many of the paradigm shifts that ultimately proved distractions were thrust on R&D by external motivations; others came from inside, or were readily accepted because it was recognized that progress was slowing. The old empirical/iterative approach to drugs was (and is) hard and often frustrating to execute; progress came mostly from lessons learned in failure or from occasional unexpected observations. Although perhaps tempered with instances of real intellectual contribution, the old ways were still largely driven by brute force. I think you are dead on when you talk about hubris: we believed we could rationally solve every problem with the new fundamental information and modern technology in hand, and we ended up being humbled.


13. PUI orgo prof on March 12, 2012 10:19 AM writes...

Would the sulfa drugs pass the screens used today and still be discovered?

I always tell my orgo class that it is interesting that the dye being tested was inactive in vitro, but active in mice.

Why did Domagk go ahead with the mouse model when there was no activity in cells?

Thanks in advance for your answers!


14. PPedroso on March 12, 2012 10:28 AM writes...

@13,

Yes, sulfa would have been discovered, because the HTS would have included both the prodrug (inactive) and the metabolite (active), and the latter, as opposed to the prodrug, would have been a hit and afterwards a lead! :)


16. HelicalZz on March 12, 2012 10:34 AM writes...

. . .much of the pharmaceutical industry's R&D is now based on the idea that high-affinity binding to a single biological target linked to a disease will lead to medical benefit in humans.

And so it often does. But it is not the only path. Again, improving a system by interfering with some basic function of it can have only occasional and limited success. It doesn't ever 'add features', which is often what's called for.


17. Rick Wobbe on March 12, 2012 10:36 AM writes...

johnnyboy, 7,
I agree with your point that we seem to say a lot about diagnosing the problem but little about treatment, beyond either: a) do more of the same, hoping for a different outcome, or b) do the opposite.

However, I wonder how correct it is to say "The tools of 30-40 years are still in use today, we just have new ones that have been added." Perhaps it's a matter of semantics, but it seems to me that the older, less deterministic tools are more often used today only to confirm the findings or proposals of newer technologies, not as first-use tools to discover drug candidates. In cases of disagreement, there's a danger of assuming that the older tool is defective (why else would we spend all that money on the new one?!).

In that light, isn't the often-mentioned idea of reintroducing more phenotypic screening at the front end of the process, with mechanism-based assays and genomics used as a follow-up to characterize (not eliminate) hits, an example of a proposal to correct this? Chemical biologists often take this approach, frequently finding interesting new pharmacologic or toxicologic processes that might not have been found any other way.


18. pete on March 12, 2012 10:45 AM writes...

"At first sight, R&D was more efficient... when other activities (for example, clinical science, animal-based screens and iterative medicinal chemistry) dominated."

Yeah, go for it.

Seriously, the questions raised are vital, but I'm not sure there's just so much *magic* in the well-trod paths of yesteryear. Most have come to accept that animal-based screens are treacherously unpredictable. As for iterative medicinal chemistry, I'm not a good judge (...Biologist), but it would seem that Pharma has gotten hugely impatient with this one, too.

So if pharmacologic reductionism and 'omics-of-all-flavors have sucked up too many drug discovery dollars for too little success, does that mean we're deluded? I don't think so. Look at the conceptual paths that have led to various cholesterol-lowering drugs, or various AIDS drugs, or Gleevec. I'm pretty certain that most of these have depended (to varying extents) upon "good old fashioned" molecular genetics, target modeling and binding-affinity studies.


19. Sam Weller on March 12, 2012 11:04 AM writes...

As a younger researcher in the field, I think it would be very helpful to know what drug discovery/development was like 30 years ago. Was it really so fundamentally different from what's done today, or is today's work much the same, only with the addition of the "rational" and "structure-based" flavor?

Also, as much as we would like to be rational and methodical (that's comforting not only to MBAs, but also to any scientist), isn't much of what we're doing today still in the same category as the old empirical approach: a sort of Brownian motion in a large chemical space, with the rationality only added in retrospect?


20. Anonymous on March 12, 2012 11:17 AM writes...

@17

In order to test that hypothesis (a bad/incorrect use of the good old tools), we would need to evaluate the number of molecules entering the clinical stage and the attrition rate of those molecules.
I think that a) the attrition rates are higher now, but b) the number of molecules entering the clinic is also higher, which means that we are discovering more than with the older empirical ways, but we are also better (or more severe) at rejecting those new discoveries.
Although I have data to support a), I am not sure about b). Nevertheless, I am pretty sure that we are rejecting some good candidates prior to the clinical in vivo phase. The question is how to detect these candidates without spending too much money or resources...


21. Former MedChem on March 12, 2012 11:20 AM writes...

Sam, read "Chronicles of Drug Discovery Vol. 1", especially the chapter on Cimetidine by Robin Ganellin.

This was the program that gave us the first blockbuster drug. SK&F management tried to kill the program, reasoning that there was insufficient need for an indication treated surgically.


22. Anonymous on March 12, 2012 12:03 PM writes...

@19

I think a big difference is the current lack of animal screens. For better or worse, ADME issues were addressed in the initial screen, although I am not sure we totally appreciated it at the time.

There was also the possibility of an unexpected activity being picked up by an observant pharmacologist. It happened to me once with a morphine-like analgesic. You could explain it after the fact, but that was not what we were shooting for.


23. Rick Wobbe on March 12, 2012 12:04 PM writes...

Anonymous, 20,
I think that's a good way to look at it, and I also believe the data are available for a very rigorous analysis. The trick is being open to challenging the current conventional wisdom without being biased.

On the other hand, I don't understand how one concludes from the data you cite that we're better or more severe in rejecting candidates. The last time I looked, the most significant spike in attrition over the past decade or two was at Phase II, suggesting that preclinical mechanism and potency did not translate into clinical efficacy. Wouldn't it be an equally valid conclusion that we're submitting more, but lousier, candidates? I realize that the former conclusion fits better with the narrative that the FDA is the problem, but I'm willing to challenge that in this case.
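The arithmetic behind that Phase II concern is worth making explicit: overall clinical success is the product of the per-phase success rates, so a slump at Phase II drags everything down multiplicatively. A toy calculation (these rates are invented for illustration, not taken from the paper):

```python
# Hypothetical per-phase success rates (illustrative only).
phase_success = {
    "Phase I": 0.60,
    "Phase II": 0.30,   # the spike in attrition lands here
    "Phase III": 0.60,
    "Approval": 0.85,
}

# Overall success is the product of the per-phase rates.
overall = 1.0
for phase, p in phase_success.items():
    overall *= p

print(f"Overall success per Phase I entrant: {overall:.1%}")
# With these made-up rates: 0.60 * 0.30 * 0.60 * 0.85, about 9%
```

On numbers like these, pushing more (but lousier) candidates into the clinic mostly raises the count of expensive late-stage failures rather than the count of approvals.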


24. emjeff on March 12, 2012 12:11 PM writes...

Nothing at all new about this; the late David Horrobin lamented the state of biomedical research in a 2003 Nature Reviews Drug Discovery piece. He advocated, among other things, a return to whole-animal research and away from the reductionist, single-pathway viewpoint.

It strikes me that, if the majority of what we are doing now in biological research is not leading to new medicines, could it be that most of the research is not worth the paper it's printed on? We may really be barking up the wrong tree...


25. barry on March 12, 2012 12:14 PM writes...

@19
Thirty years ago, no one had HTS. A med-chem program would start with an "experiment of nature", usually a natural product or a chance finding. Companies had archives of compounds that had been made in earlier programs (the minimum submission at Pfizer thirty years ago was two grams, if memory serves!), but one would request that specific compounds be screened for a new project, rather than apply a new assay to the whole archive or to a "representative screening subset" of it.


26. DLIB on March 12, 2012 12:18 PM writes...

How many out there have used a polygraph??? That's old school.


27. Greener grass? on March 12, 2012 12:18 PM writes...

What emerges from this discussion is that more information is bad!

Don't use information about targets, just do phenotypic screening and all will be well.

Baloney.

The problem is how to integrate all the data and then truly capitalize on it. If you can demonstrate that target X is central to disease Y, then either you conclude that you must hit target X, or you don't believe the work done and want to look for a way to affect disease Y in a completely blind manner, i.e. the good old days.

By the way, this means not using many of the animal models currently in play, because they leverage new pathway science. It will also mean having less data about compounds and about SAR, and therefore fewer clues about what the next molecule to make is, unless it is a riff on known compounds (the good old days again).

The argument against HTS is not that you don't find chemical matter (well, sometimes that is also true). The argument is that it does not end up going anywhere. That means you put too much work into the wrong start.

But it does not suddenly mean that you found too many hits.

And it does not mean that you have too much information!

The real question is how best to use that information. Conventional approaches put a small team of chemists on hits and a larger team on leads. Perhaps we should put more on hits and get a better read on progressability? That's hard, though; there are lots of reasons you can't really get a good read much of the time.

Too often what happens is that the most potent hit is latched onto and the rest of the information is ignored.

The real challenge, then, is figuring out how to couple what we do know with the key gaps in what we don't.


28. Hap on March 12, 2012 12:46 PM writes...

Data ≠ information: bad or irrelevant data don't tell you anything useful, and using them may be worse than not having the data at all, because you think you know something and you don't. Since much of the problem seems to be spending money on worthless drugs (the 67% of R&D spent on development), the problem is at least in part that some of what we know is not correct. Animal models were imperfect (benzidines and bladder cancer), but they were more reliable for knowing what something did in vivo.

Knowing what enzymes are targets and how they work is useful, but it may not be useful in finding drugs; you may not have enough of the puzzle to find what you need to pay back your investors. Perhaps validating the biology would be a better use of research money, rather than making drugs (or trying to)?


30. David Borhani on March 12, 2012 1:02 PM writes...

@13, why Domagk tested Prontosil in mice despite its inactivity in vitro. He apparently had a guiding hypothesis:

In Domagk’s view a drug’s role was to interact with the immune system, either to strengthen it or so weaken the agent of infection that the immune system could easily conquer the invader. He therefore placed great stock in testing drugs in living systems and was prepared to continue working with a compound even after it failed testing on bacteria cultured in laboratory glassware (in vitro). Among the hundreds of chemical compounds prepared by Mietzsch and Klarer for Domagk to test were some related to the azo dyes. They had the characteristic -N=N- coupling of azo dyes, but one of the hydrogens attached to nitrogen had been replaced by a sulfonamide group. In 1931 the two chemists presented a compound (KL 695) that, although it proved inactive in vitro, was weakly active in laboratory mice infected with streptococcus. The chemists made substitutions in the structure of this molecule and, several months and 35 compounds later, produced KL 730, which showed incredible antibacterial effects on diseased laboratory mice. It was named prontosil rubrum and patented as Prontosil (Figure).

www.chemheritage.org/discover/online-resources/chemistry-in-history/themes/pharmaceuticals/preventing-and-treating-infectious-diseases/domagk.aspx

Shows the value of even a (somewhat) incorrect hypothesis!


31. PPedroso on March 12, 2012 1:03 PM writes...

@28

How do you explain to investors that they need to inject tons of money into basic science (which will also be available to competitors) when what they want is an immediate return on their investment?

Even the venture capital firms that are supposedly more prone to risky investments want their money back within a six-year deadline...

The only possible investors are countries, but as you know, nowadays (at least in Europe, where I am right now), with the whole sovereign credit crisis, no one is willing to spend money on something like that...


32. Hap on March 12, 2012 1:15 PM writes...

Sorry - I was unclear. I thought that NIH/NSF/etc. should be doing/funding biological validation, rather than drug development, and not private investors.


33. Curious Wavefunction on March 12, 2012 1:22 PM writes...

27: I don't think anyone is saying that phenotypic screening is the only game in town worth playing. The argument is that the pendulum has swung too far toward target-based discovery, and it's now time to throw in a dash of old-fashioned phenotypic and whole-animal studies. Target-based analysis will always be valuable, but the trick is to find the right case where it can work. There are of course cases like HIV protease and carbonic anhydrase where it worked really well, and then there are cases like CNS drug discovery where it hasn't proven as useful. As the article indicated, the real question is how, and at what stage of a project, you decide to emphasize one approach or the other.


34. Clinicalpharmacogist on March 12, 2012 1:47 PM writes...

What we can say about the old days is that we produced drugs that worked (at least some of the time, for some people). We told ourselves stories about why, but we never really knew. The activity screens gave us a range of activities against a range of receptors, but it was very rare that anyone actually tested in man which of those activities mattered. We just assumed the highest-affinity one was the important one.

I suspect we got active drugs because the phenotypic screens helped us, and we never really knew what was going on. Which may explain why our newer hyper-reductionist approaches are not as helpful as we had hoped.


35. Count Karnstein on March 12, 2012 2:17 PM writes...

I'd argue that the nub of the problem is not that one specific approach has failed, but that no other approaches have been tolerated at the same time.
From the late 1980s, drug discovery management moved steadily from "this is what we need; you work out the best way to deliver it" to "this is what we need, and this is how you will deliver it". With this came an increasing expectation of adherence to the new paradigm, on pain of finding yourself labelled recalcitrant, unwilling to embrace new technologies, or not a team player. The poster child for this was combinatorial chemistry/HTS.
Subsequent R&D under-performance was met with a purging of such "heretics", and thus began another, even more blinkered, spiral of folly.

Sad that diversity of thought has become so challenging to most managers.

36. newnickname on March 12, 2012 2:40 PM writes...

Cell pathologist Gerald B Dermer has long argued that cell-based cancer research is a losing proposition. See his book The Immortal Cell (not to be confused with the life extension book with the same title by Michael D West of Advanced Cell Technologies in Marlboro).

37. r.al on March 12, 2012 2:59 PM writes...

Comment 24: It strikes me that, if the majority of what we are doing now in biological research is not leading to new medicines, could it be that most of the research is not worth the paper it's printed on? We may be really barking up the wrong tree...

I believe this to be accurate. Our genome is 90% viral and bacterial, 7% junk DNA, and 3% human; together they comprise the metagenome.

In drug discovery we are ignoring the 90% of bacteria and viruses that will elicit a reaction from a synthetic drug. The viruses and bacteria that inhabit us also vary vastly from person to person, differ with environment, and vary with the seasons.

We need a better way than one disease, one receptor, and therefore one drug. Each person who takes the same drug will have a different response.
This, in my opinion, is why R&D has not been successful: since we are targeting 5% of the metagenome, the chances of a drug working in a given person are small. Most drugs have NNTs of over 20 (1 in 20), and of course they also have side effects.
We need a different path.
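[Ed. note: the NNT arithmetic above is worth spelling out. NNT is the reciprocal of the absolute risk reduction, so an NNT of 20 means roughly 1 patient in 20 benefits. A minimal sketch, with purely hypothetical event rates chosen for illustration:]

```python
def nnt(control_event_rate: float, treatment_event_rate: float) -> float:
    """Number Needed to Treat = 1 / absolute risk reduction."""
    arr = control_event_rate - treatment_event_rate
    if arr <= 0:
        raise ValueError("treatment shows no absolute risk reduction")
    return 1.0 / arr

# Hypothetical: 15% of untreated patients have the bad outcome vs. 10%
# of treated patients -> absolute risk reduction 0.05 -> NNT = 20,
# i.e. about 1 in 20 treated patients actually benefits.
print(round(nnt(0.15, 0.10)))  # -> 20
```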

38. Maks on March 12, 2012 3:09 PM writes...

Two questions:
1) Were all those numbers for approved drugs checked against drugs that were eventually withdrawn?
2) Perhaps there is nothing wrong with the techniques used now; they are just used to prioritize and eliminate the wrong compounds.

39. anonie on March 12, 2012 3:20 PM writes...

Most scientific disciplines, particularly chemistry, strive for reductionism and simplicity. What many don't want to acknowledge is that our understanding of whole-organism biology can often appear messy, not conforming to simplified dogma, hypotheses, or proposals. During my career, it's been both amusing and sad to see chemists become bewildered when the practice (e.g. data, results) did not match up with the latest, greatest hypothesis, which was often driven by some type of structure-based approach.

40. Biotechtranslated on March 12, 2012 3:29 PM writes...

Sometimes it just comes down to dumb luck, right?

Wasn't Lyrica developed as a GABA analog? It had the right binding affinity and worked as expected when introduced into animals and then humans.

It was only after it was already on the market that researchers realized its MOA was actually potentiation of the glutamic acid decarboxylase enzyme and its binding to the alpha2delta Ca channel subunit. But ask yourself: did it really matter?

Mike

41. Pamplemousse on March 12, 2012 3:37 PM writes...

"For every complex problem, there is a solution which is simple, neat, and wrong." H.L. Mencken

That was just sort of echoing in my head as I read the last paragraph.

42. JK on March 12, 2012 3:44 PM writes...

I don't pretend to have answers, but this doesn't seem completely plausible.

I think there has been a lot of "and" rather than "instead". Very crude, I know, but look at the hits for 'natural product' in PubMed: 1980-81: 24; 1990-91: 97; 2000-01: 432; 2010-11: 1587. Even accounting for the depressing rise in garbage and a suboptimal search term, that doesn't paint a picture of a field in decline.
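[Ed. note: taking the counts quoted above at face value, the growth is roughly fourfold per decade. A quick sanity check, using only the numbers in the comment:]

```python
# PubMed hits for 'natural product', as quoted in the comment above
hits = {1980: 24, 1990: 97, 2000: 432, 2010: 1587}

# Fold-increase from each decade to the next: roughly 4x each time
decades = sorted(hits)
for a, b in zip(decades, decades[1:]):
    fold = hits[b] / hits[a]
    print(f"{a}s -> {b}s: {fold:.1f}x")
```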

I don't have a good feel for how chemistry has changed in thirty years, so perhaps the chemists could chip in. But isn't it the case that new techniques must have made whatever iterative medicinal chemistry is done much more efficient? I'm thinking of basics like HPLC, NMR, spectroscopy, new reactions, and commercially available starting materials. Haven't all those things made a medicinal chemist more productive? The counter-force must be pretty strong.

Not all the advances have moved away from in vivo, either. If moving toward in vivo human testing were a master solution, wouldn't PET have made a bigger impact in drug development? Are genetically engineered animals really worse models rather than better ones? Are recombinant P450 assays, which should be better than animals, really junk?

43. TJMC on March 12, 2012 3:54 PM writes...

I have seen many "Brute Force" solutions come along, inflame expectations, and yet ultimately underwhelm with results. Having been on both the receiving and proposing ends of many, I see one thing many "failures" have in common: they were designed and implemented without considering the overall process they were part of. Nor was the end result, better medicines that society would pay for, a primary consideration (let alone a driver).

For instance: the authors cite a failure to improve the "S/N of clinical trials". Was that the direct goal, or indirect for something else that was not widely shared or accepted? An increasing number of "failures" in Phase III occur because it becomes apparent the drugs have scant or negative profit prospects, not for clear safety or efficacy reasons. How could a point solution like combinatorial chemistry or improved target-based tools anticipate those (paying-society) dimensions? The authors acknowledge that the underlying reasons for clinical failure are not clearly documented for outside observers and may vary widely company to company and decade to decade.

Another thought: in the 1980s the shift from phenotypic to target-based discovery was partly an attempt to address the root causes of disease, and not just the symptoms. Was the resulting brute-force bias wrong? (Comments above give great arguments on both sides.) Or has BF rationalism just over-promised on time and cost to deliver? Is there truly another, more promising way? Could "phenotypic reductionism" deliver understanding of disease root causes? Or is it a case of "you can't get there from here", and both BF and PR are needed (as others so eloquently state above)?

44. Pamplemousse on March 12, 2012 4:22 PM writes...

Just a question though, how much does (inefficient) basic research actually matter?

Clinical trials are the main driver of cost, and the number of patients doubled between 1980 and 1992 (link, "Trends" halfway down). I don't see how the price per drug could fail to jump the way Rick Wobbe showed last post (fig 1), and that's over half the battle right there.

-Sorry, forgot to fill in the name box.

45. Rick Wobbe on March 13, 2012 7:38 AM writes...

Maks, 37,
Q1 in your post prompted me to look into the numbers of drug withdrawals since the 60s. It's a lot harder than I expected, because one needs to spend a lot of time deep in the bowels of various govt records, which makes it hard to be sure you've covered everything. Having said that, my preliminary review of all the safety-related withdrawals I could find suggests that the number of withdrawals has generally risen modestly over time. But when you take into consideration the exponentially decreasing number of new approvals, the ratio of withdrawals per approved drug has risen dramatically since the 90s. If that's true, then there's a case to be made that not only has the number of drugs per R&D dollar dropped, so has the safety of those drugs. Of course, one could argue that that's the result of stricter FDA oversight, but that would require more in-depth analysis of the data.

This seems like a very researchable question and the data are definitely out there for anyone with the patience to do a more thorough job than I. If there isn't already a published report on this, there ought to be...
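[Ed. note: the ratio argument here can be illustrated numerically. Even if absolute withdrawal counts rise only modestly, a falling approval count makes withdrawals per approval climb sharply. All figures below are invented for illustration, not actual FDA data:]

```python
# Invented decade-level figures, purely to illustrate the ratio argument:
# withdrawals creep up, approvals fall, and the ratio jumps.
data = {
    "1980s": {"approvals": 220, "withdrawals": 5},
    "1990s": {"approvals": 310, "withdrawals": 9},
    "2000s": {"approvals": 200, "withdrawals": 11},
}

for decade, d in data.items():
    ratio = d["withdrawals"] / d["approvals"]
    print(f"{decade}: {ratio:.1%} withdrawals per approval")
```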

46. passionlessDrone on March 13, 2012 7:53 AM writes...

Hello friends -

Is there any chance that some of the recent decline in accepted drugs is the result of technical improvements on the other end; i.e., we are getting much better at spotting problems in phase III, and drugs that would have gotten through thirty years ago now fail in large scale human trials?

- pD

47. Anonymous on March 13, 2012 8:08 AM writes...

@pD # 44, if the increased rate of market withdrawals for safety reasons is any indication, then the reality may actually be the opposite: less safe drugs are emerging from trials.

48. TJMC on March 13, 2012 8:47 AM writes...

My observations on the "are we launching safer drugs?" situation:

I believe that newer techniques are successful in identifying AE issues earlier, and at a higher degree of sensitivity. This results in "safer" but fewer drugs than in the "good old days".

On the other hand, process, regs, and tech have improved record-keeping post-approval, with improvements in sensitivity and what one could call "connect the dots" causation insights, as well as "longer memories". In short, compared to 10 years ago, we have some real metrics post-approval that drive recalls.

49. Rick Wobbe on March 13, 2012 9:28 AM writes...

TJMC 46,
Unless the FDA is selectively overlooking older drugs in its ongoing post-approval surveillance or I've missed something big time, the withdrawal rates over time that I'm seeing suggest that more recently marketed drugs are, at best, no safer than older drugs. Someone better qualified and resourced than I am should really look at this more closely.

50. TJMC on March 13, 2012 10:22 AM writes...

Rick - That selectivity is not the intent of the FDA, but rather the result of what comes naturally to the "radar" I described, which has improved causality insights and memory.

Approved drugs have long post-approval requirements that constitute part of that "bias". Also, folks are not noting their reactions to aspirin or other OTC oldies that have been supplanted by newer drugs approved for better efficacy. Until we have consolidated, comprehensive eHealth records of ALL the drugs folks take, that apparent bias will continue.

51. newnickname on March 13, 2012 10:41 AM writes...

@43 Wobbe, 46 TJMC and 47: I suggest you look at direct to consumer advertising (post-1997) and faux-advertising (such as the 1981 Oraflex press kit) as possible cut-offs for looking for trends in drug withdrawals.

I use two examples (only), but Oraflex was way overprescribed in 1981, there were several deaths and it was withdrawn. Similar drugs (Feldene) are still on the market. In an effort to expand the market, Vioxx testing exposed toxicity that led to its withdrawal. Celebrex and other similars are still on the market. If Rxs had remained targeted to the correct patients, I think they would both still be on the market.

Once drugs are off patent, they are promoted less, and ill-advised off-label uses probably decline (not being promoted by the companies), so the likelihood of problems showing up, except in the most needful populations, probably goes down.

52. Rick Wobbe on March 13, 2012 11:22 AM writes...

TJMC and new nickname, 48 & 49,
I take your point that there could be potential sources of apparent, unintentional bias. Unfortunately, I have a hard time seeing evidence of ACTUAL bias. It reminds me of Carl Sagan's comment that absence of evidence is not evidence of absence. However, the fact that many very old drugs are included even among recent withdrawals indicates that such potential biases are not absolute, so I suspect we're talking about a matter of degree of bias, rather than its presence or absence. How would one get evidence on the degree of this bias for older vs. newer drugs? This is way beyond my abilities.

53. Student on March 13, 2012 12:17 PM writes...

While there are many experts among us, we are all looking from the outside in. It would be great if someone took a few drug candidates as examples (failed and approved) and compiled interviews with the researchers who worked on them. Because of pharma culture, people can rarely speak up; if those thoughts were organized, the shortfall might be a bit more apparent. Maybe Bill Smith wanted animal models but there wasn't money, or maybe he was pushed to get something through the next stage despite bad data. Some of us are arguing as though each researcher was given access to the same lab resources, animal facilities, financing, etc. I hate to beat a dead horse, but it could very well be management's fault for giving people a lemon squeezer to crack walnuts.

54. Pamplemousse on March 13, 2012 1:11 PM writes...

Dammit. All that and I messed up the html code.

Link on clinical trial size:

http://web.archive.org/web/20010707070025/http://nii.nist.gov/pubs/coc_rd/apdx_phar.html

55. Auntie pathy on March 13, 2012 1:55 PM writes...

@39: "During my career, it's been both amusing and sad to see chemists become bewildered when the practice (eg data, results) did not match up with the latest, greatest hypothesis which often was driven by some type of structure based approach"

Sadly, this can be true. The problem isn't the structure-based approach but the minority of researchers in all fields who lack logical reasoning, plus a more widespread tendency to believe anything in print.

For example, it's natural for structural biologists to try to rationalize everything by structure, so if compound 1 which does X binds in mode A, and compound 2 which does Y binds in mode B, then to them A causes X. But it's not just the structure based approach: the same lazy logic is applied to enthalpy vs entropy by thermodynamicists, and fast and slow binding kinetics by kineticists.

Another example: several posts on here and some of my colleagues who work on phenotypic assays seem to think it would be better to screen their assay first and then put the hits through a binding assay. I see no reason why that's any better or worse than doing the same thing in reverse order, you still end up at the same place, except one way will be cheaper.

56. MikeC on March 13, 2012 6:28 PM writes...

@55: "several posts on here and some of my colleagues who work on phenotypic assays seem to think it would be better to screen their assay first and then put the hits through a binding assay. I see no reason why that's any better or worse than doing the same thing in reverse order, you still end up at the same place, except one way will be cheaper."

Well, the first way can give you hits that may be viable leads even if they don't bind to the target of your choice.
The second way is cheaper.

57. dvizard on March 13, 2012 8:45 PM writes...

"biology itself has been on a reductionist binge for a long time now. The entire idea of molecular biology has been incredibly fruitful. But we may be asking more of it than it can deliver."

At least in the more recent past, with the rise of systems biology, I'm not sure whether I agree with you. Biologists have at least started to recognize the limits of the hardcore reductionist approach.

58. Cellbio on March 13, 2012 8:58 PM writes...

....and phenotypic screening can give you leads that don't bind to purified recombinant proteins, leads that would not make it to a phenotypic screen if the biochemical screen is the entry point. This is why it is a good complement to target-based screens.

The converse approach, if done right, has great value too, as you can learn whether or not the "target selective" compounds are biologically selective, perhaps finding that compounds with identical biochemical profiles exhibit non-identical phenotypic profiles. Done poorly, that is, without off-target biology being screened across a series, one does not see the diversity of biology present in a series of highly similar compounds, so selection of leads for tox and subsequent clinical exploration remains ignorant of this diversity until failure. That's a pretty bad way to make selections for an investment of millions of dollars, but it is the target-centric approach in most programs: screen on an enzyme, pass biochemical selectivity, confirm retention of desired biology in cells and animals, then probe breadth of pharmacology in tox. In the universe I will run some day, I would never approve scale-up until phenotypic screens say the biological impact is reasonable (note: not totally explainable) and the team has a sense of the pharmacological variation of a series. Would a next-in-line compound be similar except for better ADME, or is it also likely to vary in biological impact?

59. exGlaxoid on March 14, 2012 9:51 AM writes...

I see a few trends in pharmaceuticals.

1) There is a MUCH higher safety standard for newer drugs than older ones. Aspirin, acetaminophen, and penicillin drugs would never be allowed now. Because they are already generic, widely used, and very effective in some cases, they are still allowed. But new drugs with their profiles would never be approved. Thus we cannot easily get a first-generation drug on the market now, which would then allow us to improve it into a better one once we see what issues exist. Aspirin and Tylenol kill FAR more people each year than Vioxx and Avandia combined did in their entire lifespans.

Most of the best drugs on the market now are 3rd- or 4th-generation drugs that benefited from the first-generation drugs being tested in millions of people and then improved. Benadryl led to Seldane, which led to Claritin and Allegra. Tagamet led to Zantac and Pepcid. Ibuprofen led to naproxen and then Vioxx and Celebrex. Penicillin led to the cephalosporins and many others. The list is huge, but if the first generation never sees the light of day, then there is no good way to improve the drug based on real, human data.

So if we don't allow new drugs to make it to market for not being "perfect", then we will never be able to improve on them, as few companies will spend money working on a compound class that the FDA has already turned down. I have seen many very promising drugs that never got even close to trials because of the fear of working in an area that had no history of success.

Younger people may not know it, but the med-chem iterative cycle used to routinely include human testing very early. Look at the history of Benadryl and chlorpheniramine for examples: the first 10-20 compounds were tested in prisoners, and then the least drowsy, most effective compounds were modified, and within weeks new compounds were being retested, until the best compounds were found with the chemical and biological tools available at that time. No NMR, HPLC-MS, radiolabelling, or DNA tests were involved. So if it were decided to allow simpler, faster clinical trials, we could find new drugs quite easily and optimize their effects quite easily. That is what will likely happen in the future, probably in less developed countries, where drug discovery is moving.

60. Vader on March 14, 2012 10:51 AM writes...

Serendipity is a huge factor in almost all really important scientific discoveries. Unfortunately, it's very hard to work into the business plan or the grant proposal. So you pretend brute force is a way to achieve serendipity by design.

Stupid.

61. Cellbio on March 14, 2012 10:58 AM writes...

exG,

Agree with your post. This lack of meaningful human testing limits the refinement of biomarkers and screening methods as well. In place of clinical endpoints in humans, we rely on target validation, screening checklists, and, if we are lucky enough to get to Phase 1, a biomarker or surrogate PD marker for a peek at efficacy. Only when these are anchored with true efficacy measures can one tweak the program and truly refine approaches and molecule choices. I only saw this happen once. The other programs cycled around: screening, lead selection, safety assessment, candidate selection, safety assessment, occasional Ph1, then back to selection of the next clinical lead. Careers at pharma/big biotech are made in this cycle; teams are productive as measured by internal metrics, cash is consumed, and the game goes on until financial pressure crashes the system.

62. MikeC on March 14, 2012 3:52 PM writes...

@59: "I see a few trends in pharmaceuticals.

1) There is a MUCH higher safety standard for newer drugs than older ones. Aspirin, acetaminophen, and penicillin drugs would never be allowed now."

I've read of another trend in the works: conditioning a new drug's approval on the use of an accompanying diagnostic test that addresses whatever is the FDA's biggest concern (safe or not safe for this subpopulation? effective or not for that one?). If you can rule out most of the people who shouldn't be getting your drug, you can generate much better trial data to bring to the FDA ... and reap a correspondingly smaller market share after approval. You also face approvals for new conditions and formulations that are burdened by the diagnostic, unless you can prove it isn't necessary.

I remember Derek bringing up this particular double edged sword back during the heady days of Sequencing The Human Genome Will Change Everything.

63. Auntie Pathy on March 15, 2012 3:19 AM writes...

@58: "phenotypic screening can give you leads that don't bind to purified recombinant proteins, leads that would not make it to a phenotypic screen if the biochemical screen is the entry point"

This is the point I was trying to make: it's only worth going down this route if you do the hard time-consuming chemoproteomics to work out what you're binding to, or if you scrap your recombinant protein assay altogether and work with cellular SAR to follow up such hits.

But this brings in other problems: interpreting the SAR of a series with >1 mode of action, possibly not even binding to proteins at all, where cell permeability complicates the interpretation even more, is a real challenge. So is designing experiments to test the safety of your molecules if you don't have a clue what to look for. I'm not saying it hasn't been done before, just that it sounds really hard to me.

I'm afraid the bigger problem, which neither approach really helps with, is the knowledge gap between cell assays and the disease. I've worked on more targets than I care to remember. Only on a handful did we fail to go quickly from a recombinant assay to something that did what we thought we wanted in cells, yet not many of those cell-active programs went on to work in humans.

64. Rick Wobbe on March 15, 2012 9:48 AM writes...

Auntie, 63,
Perhaps I'm over-interpreting your use of the term "time-consuming" with reference to chemoproteomics, but I think you've overestimated the size of the problem. The quality of the tools we have today, from structural biology to site-directed mutagenesis to siRNA to SAGE to..., makes narrowing down potential mechanisms faster and much less arduous than it would have been even 5 years ago, especially when you have some preliminary SAR from a cell-based assay, which should track alongside the chemoproteomic results if you're on the right track. You still have the important problem of figuring out how to use the putative mechanistic information once you have it, but I think history is telling us that that was a problem with the mechanism-based-assay-first approach too. If all other problems (cell permeability, the cell-to-whole-body knowledge gap, the safety issues) are roughly equal, it seems wisest to screen the widest possible amount of mechanism space against the widest possible chemical diversity space early on, using the most mechanistically agnostic screen you can trust, then probe chemical hit space with more reductionist methods later.
