In the Pipeline

March 12, 2012

The Brute Force Bias

Posted by Derek

I wanted to return to that Nature Reviews Drug Discovery article I blogged about the other day. There's one reason the authors advance for our problems that I thought was particularly well stated: what they call the "basic research/brute force" bias.

The ‘basic research–brute force’ bias is the tendency to overestimate the ability of advances in basic research (particularly in molecular biology) and brute force screening methods (embodied in the first few steps of the standard discovery and preclinical research process) to increase the probability that a molecule will be safe and effective in clinical trials. We suspect that this has been the intellectual basis for a move away from older and perhaps more productive methods for identifying drug candidates. . .

I think that this is definitely a problem, and it's a habit of thinking that almost everyone in the drug research business has, to some extent. The evidence that there's something lacking has been piling up. As the authors say, given all the advances over the past thirty years or so, we really should have seen more of an effect in the signal/noise of clinical trials: we should have had higher success rates in Phase II and Phase III as we understood more about what was going on. But that hasn't happened.

So how can some parts of a process improve dramatically, yet important measures of overall performance remain flat or decline? There are several possible explanations, but it seems reasonable to wonder whether companies industrialized the wrong set of activities. At first sight, R&D was more efficient several decades ago, when many research activities that are today regarded as critical (for example, the derivation of genomics-based drug targets and HTS) had not been invented, and when other activities (for example, clinical science, animal-based screens and iterative medicinal chemistry) dominated.

This gets us back to a topic that's come up around here several times: whether the entire target-based molecular-biology-driven style of drug discovery (which has been the norm since roughly the early 1980s) has been a dead end. Personally, I tend to think of it in terms of hubris and nemesis. We convinced ourselves that we were smarter than we really were.

The NRDD piece offers several reasons for this development, which also ring true. Even in the 1980s, there were fears that the pace of drug discovery was slowing, and a new approach was welcome. A second reason is a really huge one: biology itself has been on a reductionist binge for a long time now. And why not? The entire idea of molecular biology has been incredibly fruitful. But we may be asking more of it than it can deliver.

. . .the ‘basic research–brute force’ bias matched the scientific zeitgeist, particularly as the older approaches for early-stage drug R&D seemed to be yielding less. What might be called 'molecular reductionism' has become the dominant stream in biology in general, and not just in the drug industry. "Since the 1970s, nearly all avenues of biomedical research have led to the gene". Genetics and molecular biology are seen as providing the 'best' and most fundamental ways of understanding biological systems, and subsequently intervening in them. The intellectual challenges of reductionism and its necessary synthesis (the '-omics') appear to be more attractive to many biomedical scientists than the messy empiricism of the older approaches.

And a final reason for this mode of research taking over - and it's another big one - is that it matched the worldview of many managers and investors. This all looked like putting R&D on a more scientific, more industrial, and more manageable footing. Why wouldn't managers be attracted to something that looked like it valued their skills? And why wouldn't investors be attracted to something that looked as if it could deliver more predictable success and more consistent earnings? R&D will give you gray hairs; anything that looks like taming it will find an audience.

And that's how we find ourselves here:

. . .much of the pharmaceutical industry's R&D is now based on the idea that high-affinity binding to a single biological target linked to a disease will lead to medical benefit in humans. However, if the causal link between single targets and disease states is weaker than commonly thought, or if drugs rarely act on a single target, one can understand why the molecules that have been delivered by this research strategy into clinical development may not necessarily be more likely to succeed than those in earlier periods.

That first sentence is a bit terrifying. You read it, and part of you thinks "Well, yeah, of course", because that is such a fundamental assumption of almost all our work. But what if it's wrong? Or just not right enough?

Comments (64) + TrackBacks (0) | Category: Drug Development | Drug Industry History


COMMENTS

1. Rick Wobbe on March 12, 2012 8:27 AM writes...

Herein lies the advantage of the facile, beautifully non-falsifiable claim that we have "picked all the low hanging fruit". It allows us to deny that technology failed, it's just that the problems up and got tougher. Stupid misbehaving nature, doesn't it know we've mastered it?!

There is a case to be made that the easiest challenges have been addressed, though I haven't seen any objective measure of "easiness" other than the circular argument that the first problems solved are necessarily the easiest. But be that as it may, the creative new technologies introduced over the past 30 years were supposed to have addressed that. Where is the evidence that these technologies have risen to that challenge, evidence that can withstand the sobering indictment this paper suggests?

2. Anonymous on March 12, 2012 8:47 AM writes...

Don't most, if not all, drug candidates coming from target-based programs also need to pass phenotypic filters, such as cellular assays and in vivo animal models, before they enter clinical trials?

3. Curious Wavefunction on March 12, 2012 8:49 AM writes...

Cogent points. As I say in my post about the article, "As we constrain ourselves to accurate, narrowly defined features of biological systems, it deflects our attention from the less accurate but broader and more relevant features. The lesson here is simple; we are turning into the guy who looks for his keys under the street light only because it's easier to see there." Ditto for the whole deal about genomics-based drug discovery. We are increasingly falling into the trap of doing what we can do most rationally and systematically, using the most cutting-edge techniques. And that deflects our attention from cruder, old-fashioned, cheaper but potentially more effective strategies (like classical pharmacology).

4. startup on March 12, 2012 8:51 AM writes...

That "fundamental assumption" turned out to be wrong for genomics, didn it? Why should it be right elsewhere?

5. imarx on March 12, 2012 8:59 AM writes...

Just curious - what is "iterative medicinal chemistry"? I haven't heard that term before.

6. PPedroso on March 12, 2012 9:01 AM writes...

But my question is:

Did we forget about the empirical methods of 30 years ago?
We are still using them, but at a later stage of R&D, so I am afraid that part of the answer resides in the fact that it was easier to discover drugs 30 years ago, because the easy ones had not yet been discovered.

7. johnnyboy on March 12, 2012 9:21 AM writes...

What PPedroso said. Comparing today's R&D productivity with that of 30-40 years ago does not make sense. The tools of 30-40 years ago are still in use today; we have just added new ones.

Amid all those (undeniably interesting) arguments over the correct diagnosis (or post-mortem?) for today's decreasing R&D returns, I'd like to see some actual proposals for a treatment. If we really went wrong in some significant way, how exactly are we supposed to correct this? Doing less HTS? Ignoring genomics?

8. bbooooooya on March 12, 2012 9:31 AM writes...

'what is "iterative medicinal chemistry"? I haven't heard that term before'

Sure you have, though I think the 'iterative' is usually pronounced as the 'k' in 'knife'.

9. MTK on March 12, 2012 9:33 AM writes...

I won't really disagree with anything stated in the post or the comments, but I did find some things interesting.

a) "even in the 1980's, there were fears that the pace of drug discovery was slowing". If that is true than who's to say that the move toward target-based strategies hasn't slowed the decline of that pace? It is possible then that we'd be in an even worse spot now if empirical methods had been continued, right?

b) Wavefunction's comment about searching for the keys under the light, because it's the only place we can see, is probably right. But isn't the solution then not looking in the dark, but rather illuminating a greater area, which is presumably the whole idea of target-based discovery? We may be doing a pretty bad job at illumination, but that doesn't mean the idea is wrong; perhaps it's the execution.

I only bring these up because I don't think it's just management that finds empirical methods unsatisfying, but also scientists. I like to have hypotheses and test those hypotheses. Crawling blindly on all fours in the dark really isn't fun.

10. PPedroso on March 12, 2012 9:39 AM writes...

I have just read the article, and they sorta seem to address this question of mine, so I may have to rephrase it.

Perhaps what was hanging low was not the drugs but the diseases. The easy ones were done in the past, and now we have some difficult ones, like degenerative CNS diseases and cancer.
It is easy to have a hypercholesterolemia animal model and treat it with statins (which correlate well with the clinic), but try to do that with Alzheimer's?

11. milkshake on March 12, 2012 9:47 AM writes...

@5: just a fancier name for trial-and-error. An approximate but repeatedly self-refining computational routine that takes its own output as the new input for the next round of refinement is called an iterative method.
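
For instance, here is a minimal sketch in Python (the function name, starting guess and tolerance are just illustrative): Newton's iteration for a square root, where each round's output is fed back in as the next round's input:

    # A classic iterative method: Newton's iteration for sqrt(a).
    # Each round's output becomes the input to the next round.
    def newton_sqrt(a, x0=1.0, tol=1e-12, max_iter=100):
        x = x0
        for _ in range(max_iter):
            x_next = 0.5 * (x + a / x)  # refine the current guess
            if abs(x_next - x) < tol:   # stop once the refinement stalls
                return x_next
            x = x_next                  # feed the output back in as input
        return x

    print(newton_sqrt(2.0))  # ~1.4142135623730951

Iterative medicinal chemistry is the same idea in the lab: make a round of compounds, test them, and use the results as the input for the next round of design.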

Management-based delusion: HTS and target-driven drug discovery were supposed to work, and they do when used correctly, but not at the expense of discarding animal-based phenotypic models. The compound screening funnels have more often than not been akin to a garbage compactor: crude and arbitrary methods to whittle down the numbers of compounds. While some criteria made sense, many others (e.g., Caco-2 permeability, microsomal stability) are nearly worthless. But management likes to play the numbers game because it looks good in a PowerPoint presentation; it seems to guarantee to the investors that if you stuff such-and-such a number of compounds into the funnel, you are guaranteed to generate four INDs a year. And of course we will not do unprofitable and narrow-indication drugs, and will focus on blockbusters instead, and have a new Lipitor every year.

12. CMCguy on March 12, 2012 9:59 AM writes...

Lest we forget, it was not only managers and investors that wanted better control of R&D; I think most scientists wanted to avoid the graying of their own hair. Many paradigm shifts that ultimately were distractions/tools were thrust on R&D from external motivations; others, however, came from inside, or were readily accepted because recognized progress was slowing. The old empirical/iterative approach to drugs was (is) hard and often frustrating to execute: it was mostly progress by lessons from failure or occasional unexpected observations. Although perhaps tempered with instances of definite mental contribution, the old ways were still largely driven by brute force. I think you are dead on when you talk about hubris: we believed we could rationally solve all problems with the new fundamental information or modern technology in hand, and we ended up being humbled.

13. PUI orgo prof on March 12, 2012 10:19 AM writes...

Would the sulfa drugs pass the screens used today to be discovered?

I always tell my orgo class that it is interesting that the dye being tested was inactive in vitro, but active in mice.

Why did Domagk go ahead with the mouse model when there was no activity in cells?

Thanks in advance for your answers!

14. PPedroso on March 12, 2012 10:28 AM writes...

@13,

Yes, sulfa would have been discovered, because the HTS would have included both the prodrug (inactive) and the metabolite (active); the latter, as opposed to the prodrug, would have been a hit and afterwards a lead! :)

16. HelicalZz on March 12, 2012 10:34 AM writes...

. . .much of the pharmaceutical industry's R&D is now based on the idea that high-affinity binding to a single biological target linked to a disease will lead to medical benefit in humans.

And so it often does. But it is not the only path. Again, improving a system by interfering with some basic function of it can have only occasional and limited success. It doesn't ever 'add features', which is often what is called for.

17. Rick Wobbe on March 12, 2012 10:36 AM writes...

johnnyboy, 7,
I agree with your point that we say a lot about diagnosing the problem but little about treatment, beyond either (a) do more of the same, hoping for a different outcome, or (b) do the opposite.

However, I wonder how correct it is to say "The tools of 30-40 years ago are still in use today; we have just added new ones." Perhaps it's a matter of semantics, but it seems to me that the older, less deterministic tools are more often used today only to confirm the findings or proposals of the new technologies, not as first-use tools to discover drug candidates. In cases of disagreement, there's a danger of assuming that the older tool is defective (why else would we spend all that money on the new tool?!).

In that light, isn't the often-mentioned idea of reintroducing more phenotypic screening at the front end of the process, and using mechanism-based assays and genomics as a follow-up to characterize, not eliminate, hits an example of a proposal to correct this? Chemical biologists often seem to take this approach, often finding interesting new pharmacologic or toxicologic processes that might not have been found any other way.

18. pete on March 12, 2012 10:45 AM writes...

"At first sight, R&D was more efficient... when other activities (for example, clinical science, animal-based screens and iterative medicinal chemistry) dominated."

Yeah, go for it.

Seriously, the questions raised are vital, but I'm not sure there's all that much *magic* in the well-trod paths of yesteryear. Most have come to accept that animal-based screens are treacherously unpredictable. As for iterative medicinal chemistry, I'm not a good judge (...Biologist), but it would seem that Pharma has gotten hugely impatient with this one, too.

So if pharmacologic reductionism & 'omics-of-all-flavors have sucked up too many drug discovery dollars for too little success, does that mean we're deluded? I don't think so. Look at the conceptual paths that have led to various cholesterol-lowering drugs, or various AIDS drugs, or Gleevec. I'm pretty certain that most of these have depended (to varying extents) upon "good old fashioned" molecular genetics, target modeling and binding affinity studies.

19. Sam Weller on March 12, 2012 11:04 AM writes...

As a younger researcher in the field, I think it would be very helpful to know what drug discovery/development was like 30 years ago. Was it really so fundamentally different from what's done today, or is today's work much the same, only with the addition of the "rational" and "structure-based" flavor?

Also, as much as we would like to be rational and methodical (it's comforting not only to MBAs, but also to any scientist), isn't much of what we're doing today still in the same category as the old empirical approach, a sort of Brownian motion in a large chemical space, with the rationality only added in retrospect?

20. Anonymous on March 12, 2012 11:17 AM writes...

@17

In order to test that hypothesis (a bad/incorrect use of the good old tools), we would need to evaluate the number of molecules entering the clinical stage and the attrition rate of those molecules.
I think that a) the attrition rates are higher now, but b) the number of molecules entering the clinic is also higher, which would mean that we are discovering more than with the older empirical ways, but also that we are better (or more severe) at rejecting those new discoveries.
Although I have data to support a), I am not sure about b). Nevertheless, I am pretty sure that we are rejecting some good candidates prior to the clinical in vivo phase. The question is how to detect these candidates without spending too much money or resources...
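
To put some toy numbers on a) and b), here is a minimal sketch in Python, with the function name and every figure entirely made up for illustration:

    # Hypothetical figures only: more molecules entering the clinic can
    # still yield fewer approvals if attrition rises faster than the input.
    def approvals(entering_clinic, overall_success_rate):
        return entering_clinic * overall_success_rate

    print(approvals(50, 0.20))  # invented "then": 10.0 approvals
    print(approvals(80, 0.10))  # invented "now": 8.0 approvals

That is why both numbers are needed: higher attrition alone doesn't tell you whether the output of good drugs went up or down.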

21. Former MedChem on March 12, 2012 11:20 AM writes...

Sam, read "Chronicles of Drug Discovery Vol. 1", especially the chapter on Cimetidine by Robin Ganellin.

This was the program that gave us the first blockbuster drug. SK&F management tried to kill the program, reasoning that there was insufficient need for a drug against an indication that was already treated surgically.

22. Anonymous on March 12, 2012 12:03 PM writes...

@19

I think a big difference is the current lack of animal screens. For better or worse, ADME issues were addressed in the initial screen, although I am not sure we totally appreciated that at the time.

There was also the possibility of an unexpected activity being picked up by an observant pharmacologist. It happened to me once with a morphine-like analgesic. You could explain it after the fact, but it was not what we were shooting for.

23. Rick Wobbe on March 12, 2012 12:04 PM writes...

Anonymous, 20,
I think that's a good way to look at it, and I also believe the data are available for a very rigorous analysis. The trick is being open to challenging the current conventional wisdom without being biased.

On the other hand, I don't understand how one concludes that we're better or more severe in rejecting candidates based on the data you cite. The last time I looked at the data, the most significant spike in attrition over the past decade or two was at Phase II, suggesting that the preclinical mechanism and potency did not translate into clinical efficacy. Wouldn't it be an equally valid conclusion that we're submitting more, but lousier candidates? I realize that the former conclusion fits better with the narrative that the FDA is the problem, but I'm willing to challenge that in this case.

24. emjeff on March 12, 2012 12:11 PM writes...

Nothing at all new about this; the late David Horrobin lamented the state of biomedical research in a 2003 Nature Reviews Drug Discovery piece. He advocated, among other things, a return to whole-animal research and away from the reductionist, single-pathway viewpoint.

It strikes me that, if the majority of what we are doing now in biological research is not leading to new medicines, could it be that most of the research is not worth the paper it's printed on? We may be really barking up the wrong tree...

25. barry on March 12, 2012 12:14 PM writes...

@19
Thirty years ago, no one had HTS. A med-chem program would start with an "experiment of nature", usually a natural product or a chance finding. Companies had archives of compounds that had been made in earlier programs (the minimum submission at Pfizer thirty years ago was two grams, if memory serves!), but one would request that specific compounds be screened in a new project, rather than applying a new assay to the whole set of compounds or to a "representative screening subset" of the archive.

26. DLIB on March 12, 2012 12:18 PM writes...

How many out there have used a polygraph??? That's old school.

27. Greener grass? on March 12, 2012 12:18 PM writes...

What emerges from this discussion is that more information is bad!

Don't use information about targets, just do phenotypic screening and all will be well.

Baloney.

The problem is how to integrate all the data and then truly capitalize on it. If you can demonstrate that target X is central to disease Y, then either you conclude that you must hit target X, or you don't believe the work done and want to look for a way to affect disease Y in a completely blind manner, i.e. the good old days.

By the way, this means not using many of the animal models currently in play, because they leverage new pathway science. It will also mean having less data about compounds and about SAR, and therefore fewer clues as to what the next molecule to make is, unless it is a riff on known compounds (the good old days again).

The argument against HTS is not that you don't find chemical matter (well sometimes that is also true). The argument is that it does not end up going anywhere. That means you put too much work into the wrong start.

But, it does not suddenly mean that you found too many hits.

And, it does not mean that you have too much information!

The real question is how best to use that information. Conventional approaches put a small team of chemists on hits and a larger team on leads. Perhaps we should put more on hits and get a better read on progress-ability? That's hard, though; there are lots of reasons you can't really get a good read much of the time.

Too often what happens is that the most potent hit is latched onto and the rest of the information is ignored.

The real challenge, then, is figuring out how to couple what we do know with the key gaps in what we don't know.

28. Hap on March 12, 2012 12:46 PM writes...

Data ≠ information. Bad or irrelevant data don't tell you anything useful, and using them may be worse than not having the data at all, because you think you know something and you don't. Since much of the problem seems to be spending money on worthless drugs (the 67% of R&D spent on development), the problem is at least in part that some of what we know is not correct. Animal models were imperfect (benzidines and bladder cancer), but they were more reliable for knowing what something did in vivo.

Knowing what enzymes are targets and how they work is useful, but it may not be useful in finding drugs - you may not have enough of the puzzle to find what you need to pay back your investors. Perhaps validating the biology would be a better use of research money, rather than making drugs (or trying)?

30. David Borhani on March 12, 2012 1:02 PM writes...

@13, why Domagk tested Prontosil in mice despite its inactivity in vitro. He apparently had a guiding hypothesis:

In Domagk’s view a drug’s role was to interact with the immune system, either to strengthen it or so weaken the agent of infection that the immune system could easily conquer the invader. He therefore placed great stock in testing drugs in living systems and was prepared to continue working with a compound even after it failed testing on bacteria cultured in laboratory glassware (in vitro). Among the hundreds of chemical compounds prepared by Mietzsch and Klarer for Domagk to test were some related to the azo dyes. They had the characteristic -N=N- coupling of azo dyes, but one of the hydrogens attached to nitrogen had been replaced by a sulfonamide group. In 1931 the two chemists presented a compound (KL 695) that, although it proved inactive in vitro, was weakly active in laboratory mice infected with streptococcus. The chemists made substitutions in the structure of this molecule and, several months and 35 compounds later, produced KL 730, which showed incredible antibacterial effects on diseased laboratory mice. It was named prontosil rubrum and patented as Prontosil (Figure).

www.chemheritage.org/discover/online-resources/chemistry-in-history/themes/pharmaceuticals/preventing-and-treating-infectious-diseases/domagk.aspx

Shows the value of even a (somewhat) incorrect hypothesis!

31. PPedroso on March 12, 2012 1:03 PM writes...

@28

How do you explain to investors that they need to inject tons of money into basic science (which will be available to competitors) when what they want is an immediate return on their investment?

Even the venture capital firms that are supposedly more prone to risky investments want to get their investments back within a six-year deadline...

The only possible investors are countries, but as you know, nowadays (at least in Europe, where I am right now), with the whole sovereign credit crisis, no one is willing to spend money on something like that...

32. Hap on March 12, 2012 1:15 PM writes...

Sorry - I was unclear. I thought that NIH/NSF/etc. should be doing/funding biological validation, rather than drug development, and not private investors.

33. Curious Wavefunction on March 12, 2012 1:22 PM writes...

27: I don't think anyone is saying that phenotypic screening is the only game in town worth playing. The argument is that the pendulum has swung too far toward target-based discovery, and it's now time to throw in a dash of old-fashioned phenotypic and whole-animal studies. Target-based analysis will always be valuable, but the trick is to find the right case where it can work. There are of course cases like HIV protease and carbonic anhydrase where it worked really well, and then there are cases like CNS drug discovery where it hasn't proven as useful. As the article indicated, the real question is how, and at what stage of a project, you decide to emphasize one approach or the other.

34. Clinicalpharmacogist on March 12, 2012 1:47 PM writes...

What we can say about the old days is that we produced drugs that worked (at least some of the time, for some people). We gave ourselves stories about why this was, but we never really knew. The activity screens gave us a range of activities against a range of receptors, but which of those activities was the important one was very rarely actually tested in man. We just assumed the highest-affinity one was the important one.

I suspect we got active drugs because the phenotypic screens helped us and we never really knew what was going on. Which may explain why our newer hyper-reductionist approaches are not as helpful as we had hoped.

35. Count Karnstein on March 12, 2012 2:17 PM writes...

I'd argue that the nub of the problem is not that one specific approach has failed but that no other approaches have been tolerated at the same time.
From the late 1980s, Drug Discovery management moved steadily from "this is what we need; you work out the best way to deliver it" to "this is what we need, and this is how you will deliver it". With this came an increasing expectation of adherence to the new paradigm, at the risk of finding yourself labelled recalcitrant, unwilling to embrace new technologies, or not a team player. The poster child for this was Combinatorial Chemistry/HTS.
Subsequent R&D under-performance was met with a purging of such "heretics", and thus began another, even more blinkered, spiral of folly.

Sad that diversity of thought has become so challenging to most managers.
