
In the Pipeline

January 6, 2012

Do We Believe These Things, Or Not?

Posted by Derek

Some of the discussions that come up here around clinical attrition rates and compound properties prompt me to see how much we can actually agree on. So, are these propositions controversial, or not?

1. Too many drugs fail in clinical trials. We are having a great deal of trouble carrying on at these failure rates, given the expense involved.

2. A significant number of these failures are due to lack of efficacy - either none at all, or not enough.

2a. Fixing efficacy failures is hard, since it seems to require deeper knowledge, case-by-case, of disease mechanisms. As it stands, we get a significant amount of this knowledge from our drug failures themselves.

2b. Better target selection without such detailed knowledge is hard to come by. Good phenotypic assays are perhaps the only shortcut, but good phenotypic assays are not easy to develop and validate.

3. Outside of efficacy, a significant number of clinical failures are also due to side effects/toxicity. These two factors (efficacy and tox) account for the great majority of compounds that drop out of the clinic.

3a. Fixing tox/side effect failures through detailed knowledge is perhaps hardest of all, since there are a huge number of possible mechanisms. There are far more ways for things to go wrong than there are for them to work correctly.

3b. But there are broad correlations between molecular structures and properties and the likelihood of toxicity. While not infallible, these correlations are strong enough to be useful, and we should be grateful for anything we can get that might diminish the possibility of later failure.

Examples of such structural features are redox-active groups like nitros and quinones, which really are associated with trouble - not invariably, but often enough to make you very cautious. More broadly, high logP values are also associated with trouble in development - not as strongly, but strongly enough to be worth considering.
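
For anyone who wants to run this kind of triage on their own compound lists, here is a minimal sketch using the open-source RDKit toolkit. The two SMARTS patterns and the logP cutoff are illustrative placeholders chosen to match the examples in this post (aryl nitro, quinone, logP above ~3), not validated alert definitions from anyone's actual filters.

```python
# Minimal sketch: flag redox-active substructures and high calculated logP with RDKit.
# The SMARTS patterns and the logP threshold are illustrative, not validated alerts.
from rdkit import Chem
from rdkit.Chem import Crippen

ALERTS = {
    "aromatic nitro": Chem.MolFromSmarts("c[N+](=O)[O-]"),
    "para-quinone":   Chem.MolFromSmarts("O=C1C=CC(=O)C=C1"),
}
LOGP_CUTOFF = 3.0  # illustrative; motivated by the post's "1 to 3" vs "4 to 6" comparison

def flag_compound(smiles: str) -> dict:
    """Return structural-alert and calculated-logP flags for one SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return {"smiles": smiles, "error": "could not parse"}
    hits = [name for name, patt in ALERTS.items() if mol.HasSubstructMatch(patt)]
    clogp = Crippen.MolLogP(mol)  # Crippen estimate of logP
    return {"smiles": smiles, "alerts": hits, "clogp": round(clogp, 2),
            "high_logp": clogp > LOGP_CUTOFF}

if __name__ == "__main__":
    # nitrobenzene and aspirin as toy inputs
    for smi in ["O=[N+]([O-])c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]:
        print(flag_compound(smi))
```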

So, is everyone pretty much in agreement with these things? What I'm saying is that if you take a hundred aryl nitro compounds into development, versus a hundred that don't have such a group, the latter cohort of compounds will surely have a higher success rate. And if you take a hundred compounds with logP values of 1 to 3 into development, these will have a higher success rate than a hundred compounds, against the same targets, with logP of 4 to 6. Do we believe this, or not?
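
To put that thought experiment in statistical terms, here is a small simulation sketch. The per-compound success probabilities (12% for the "clean" cohort, 8% for the flagged one) are purely assumed numbers for illustration, not measured attrition rates; the point is only to show what a real-but-modest difference looks like with a hundred compounds per arm.

```python
# Sketch: how often does a "cleaner" cohort of 100 compounds beat a "flagged" one,
# given assumed (illustrative, not measured) per-compound success probabilities?
import random

def simulate(p_clean=0.12, p_flagged=0.08, n=100, trials=10_000, seed=42):
    rng = random.Random(seed)
    clean_wins = ties = 0
    for _ in range(trials):
        clean = sum(rng.random() < p_clean for _ in range(n))
        flagged = sum(rng.random() < p_flagged for _ in range(n))
        if clean > flagged:
            clean_wins += 1
        elif clean == flagged:
            ties += 1
    return clean_wins / trials, ties / trials

if __name__ == "__main__":
    wins, ties = simulate()
    print(f"clean cohort ahead in {wins:.0%} of trials, tied in {ties:.0%}")
    # Even with a genuine underlying difference, sampling noise at n = 100 per arm
    # means the clean cohort will not come out ahead every single time.
```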

Comments (34) + TrackBacks (0) | Category: Drug Assays | Drug Development | Toxicology


COMMENTS

1. dearieme on January 6, 2012 10:36 AM writes...

A second edition has come out of Le Fanu's The Rise and Fall of Modern Medicine. That'll really cheer up those of your readers unfamiliar with the first.

2. anchor on January 6, 2012 10:41 AM writes...

Derek: At this point in time nothing is believable. Just keep at it, and for a given therapeutic area, if you are able to show marginal efficacy and a dose advantage with no adverse effects, then you are in. I mean, there are so many factors riding on clinical trials that I have stopped speculating on them! I would rather play chutes and ladders. There are all kinds of drugs on the market that fall outside the "perimeter" you just described. Science can only be explained to a point, and afterward it is a waiting game until the trials are done!

3. Ed on January 6, 2012 10:44 AM writes...

I think you have to discriminate between on-target and off-target tox. On-target tox seems to be something we can do very little to control, other than hoping that a) there is an acceptable dosing window that we can access and b) we actually manage to find suggestions of what that dose might be in Phase 2 trials.

A lot of these issues are skirting around the edges of what we currently understand - e.g. tofacitinib has issues with elevated HDL and LDL, and JAK2 inhibition has been shown to cause thrombocytopenia. Despite all the work that pre-clinical people put in, perhaps it comes down to blind luck that Pfizer chose a compound that was both safe and effective. It could easily be the case that a compound with better-looking data would have shown a worse profile in the clinic.

4. John Wayne on January 6, 2012 11:02 AM writes...

"What I'm saying is that if you take a hundred aryl nitro compounds into development, versus a hundred that don't have such a group, the latter cohort of compounds will surely have a higher success rate. And if you take a hundred compounds with logP values of 1 to 3 into development, these will have a higher success rate than a hundred compounds, against the same targets, with logP of 4 to 6. Do we believe this, or not?"

I believe these things, but I don't think that that belief is practical. Nobody ever has enough leads or potential clinical candidates to filter in this manner. Yes, we can prioritize these properties (LogD, problematic functional groups, etc.) low in lead selection, SAR, and clinical candidates; and most of us do (I hope) - but we never seem to have enough choices.

5. marcello on January 6, 2012 11:06 AM writes...

One way of reducing attrition might really be genetic profiling of patients (or whatever they call personalized medicine) and then having tailored drugs throughout the trial and then the treatment. However, I am not sure how economically viable such a model would be with the current approval times and hurdles...?

6. johnnyboy on January 6, 2012 11:16 AM writes...

I can't comment on the chemistry point (3b), but apart from that I can't see how one could disagree with your other points, which don't seem that controversial to me.

1. "Too many" is always subjective, but I think we can agree that the failure rate at Phase 3 is higher than it should be, considering the cost of getting to that point.

2. Yes, failures are due either to insufficient efficacy or to an unfavorable safety profile. Considering that a lot of drugs that get into Phase 3 have been developed based on scientific data that dates back at least 10 years before the trial, it is perhaps not that surprising. In the years between discovery and Phase 3, a lot of new data can come to light, but once the development machine has been put into the race, it can be hard to stop it or steer it differently. Another aspect of the efficacy problem that seems important is the large impact of the placebo effect, which appears to be growing larger and larger, for reasons that are poorly understood.

3. Apart from safety or efficacy, I don't see what else could lead to failure (maybe marketing decisions, but we don't hear much about those). On the safety front, failures during development could be due to disregarding or misinterpreting animal tox results, human-specific effects not predicted by animal models, signals in animal carcinogenicity studies which may or may not generalize to humans but which kill the compound anyway, rare effects that only become apparent in large-scale phase 3 studies... you name it. Unfortunately, I think that the amount of knowledge we acquire from each of these failures is limited, since a lot of the data remains confidential; pharma companies used to keep databases and build work groups looking back retrospectively at the data to try to grow internal knowledge and improve the predictability of safety data, but with the current cost-cutting and resulting loss and turnover of researchers, I doubt that this is going on much anymore - and the knowledge was mostly internal to individual companies anyway. In short, I don't foresee huge strides being made in safety in the future - we'll keep plodding along by trial and error. And the regulatory agencies will keep raising the safety bar (following the will of the population), which will only make progress harder.

7. CMCguy on January 6, 2012 11:21 AM writes...

From the R&D drug development side of things, I think this does summarize several of the major issues currently being faced. So yes, at some level there may be a certain ability to make better choices based on experience and general guidance principles (as long as one is careful to check them against the actual data being obtained).

However, all this activity is only part of the overall drug development picture, as the above propositions have to be worked through in the context of many other concerns. In particular, I would suggest that Regulatory and Business (Marketing) both have major influences that need to be addressed, and while one might hope to deal with the above questions in isolation, that cannot really be done - so an already complicated path gets even more complex.

8. Rock on January 6, 2012 11:34 AM writes...

Yes, I do agree. Why not improve your odds by working in higher-probability space? Although John Wayne's comments are accurate, that situation is often the result of the fact that most screening decks are filled with lipophilic junk put together by Suzuki couplings and amide bonds. This is something we have control over. It has also been my experience that, despite the learnings over the past decade or so, chemists still love to chase potency over properties.
I do not believe in the outlier argument per se. Yes, there are outliers, but the real question is how many other compounds in similar space had to fail for that one success? We never see those numbers across the industry, although we do get a feel for this when looking at property attrition as compounds go through the various stages of development. Furthermore, if you look deeply into the drugs outside the "preferred drug space", they often have many issues including very high dose, short half-life, and significant off-target activities. Sometimes you can't avoid working in high risk space if that is what your target demands. But we can still improve our chances by understanding this early, using in vitro tests of toxicity and promiscuity, and compensating by driving down the projected clinical dose (yes, I know this is not trivial). These suggestions of course fly in the face of project goals and investors' impatience. But I would like to end with a question: Do you think the industry would be better off today if every project was given an extra year or two to deliver?

9. RM on January 6, 2012 11:39 AM writes...

1 - The only problem I have here is that "too many" implies you have a number in mind for the "correct" number of failures, and unless you're being overly idealistic and saying zero, I don't know what basis you're using to predict what the number of failures "should" be.

2a - I seriously doubt how much we learn about fixing clinical trial failures/disease mechanisms from clinical trial failures themselves (versus preliminary screening assays) - except perhaps learning that improvement in the biomarker doesn't actually correlate with effective treatment.

I'm also with John Wayne@4 - if you have the luxury of having hundreds of potential compounds you can take forward into clinical trials, then by all means do that discrimination. Unfortunately, most companies are happy when they get 1-2 that are clinical-trial ready.

Other than that, there's not much else to be snarky about in your post.

10. barry on January 6, 2012 12:54 PM writes...

I'm a fan of lead compounds with logP between one and three. But a potential drug with logP=1 is likely to be restricted to the plasma compartment (Vss=150mL/kg) and will be rapidly cleared through the kidneys. That's fine if your target tissue is the blood (i.e. you want to re-invent aspirin) but doesn't get you into the game for most diseases.

11. Morten G on January 6, 2012 1:11 PM writes...

How many compounds go back to chemistry after having gone to clinical? ~0? Isn't that the real difference between what people are doing today and what they were doing back when drug discovery was successful?

12. SwedenCalling on January 6, 2012 2:19 PM writes...

Yes, we believe. But please do not trust the actual values too much, be they calculated or experimental. I am certain that a significant number of compounds tested to be in the 1-3 interval at BigPharmaX would fall into the 4-6 category at BigPharmaY. Lipophilicity is for sure a bad guy, but hard cutoffs can be draconian.

13. John Wayne on January 6, 2012 2:25 PM writes...

Reading the comments, here is a summary of potential methods to get more success:

1. Fill your sample collection with molecules you want to see as hits.

2. After hit validation, preliminary SAR, and profiling are completed, hard questions should be asked before heavy optimization is started. Examples: Is this lead a piece of crap? What would it take to fix it? Should we walk away right now? (It has been my experience that dropping projects early is often a good idea, but it goes against corporate 'must succeed' culture.)

3. Give the chemists more time to come up with a clinical candidate (really drill down on a series, pursue multiple series and race them in your PD and tox models at the end, etc.)

4. Use clinical results to inform future chemistry efforts on the same project.

I really like 1 and 2. Idea 3 is good, but may be better phrased as 'give the chemists as long as it takes to make something that isn't crap disguised to squeak by a yearly goal.' I don't have any personal experience with 4.

14. smurf on January 6, 2012 2:37 PM writes...

To answer your questions: yes - to all of them.

15. Boghog on January 6, 2012 3:03 PM writes...

@ SwedenCalling:
Beware of focusing on the trees and losing sight of the forest. Yes, ClogP estimates can be unreliable, but I don't think they are quite as unreliable as you imply. Nevertheless I would agree that it is important to experimentally determine partition coefficients and if necessary, use this data to reparameterize the ClogP estimates.

16. milkshake on January 6, 2012 4:07 PM writes...

1) Blockbuster mentality of big pharma management that pushes projects based on wishful thinking and projections made to please Wall Street.
2) Over-reliance on target-driven rational drug design done by the flowchart - with too many artificial screening funnels and too-late use of realistic animal models.

17. SwedenCalling on January 6, 2012 5:04 PM writes...

@boghog. I think you got my point, but just to be sure: ClogP values are often considered to be 'the truth' by med chemists, but the predictions can, on occasion, be way off. Still, the calculated results should never differ between companies. Experimental logP/D, on the other hand, can differ more than one would like to believe when run at different places (and with slightly different protocols).

18. Pete on January 6, 2012 9:10 PM writes...

1: Agreed! However, getting all the attrition to occur in Phase 1 would represent a big step forward.

2: Agreed!

2a: If the efficacy failure is due to a bad (i.e. no linkage to human disease or target-related tox) target then I’d argue that the problem was un-fixable and the project should be put out of its misery as swiftly and humanely as possible. When the target is remote from the circulation (e.g. intracellular or on far side of blood-brain barrier) then efficacy failure may be due to inadequate free levels of drug in the target compartment. Measurement of free levels of drug at intracellular locations is not something that we can currently do for an arbitrary compound.

2b: I believe that the Pharma/Biotech industry should be taking more notice of phenotypic assays. However, Lead Optimisation is likely to require knowledge of SAR against specific target(s), and the regulatory authorities will also want to know what the drug is hitting.

3: Agreed!

3a: Idiosyncratic toxicity is particularly bad because it is, almost by definition, unpredictable. When the adverse drug reaction (so much more palatable a term than toxicity) is observed for 5% of the patient population then a prediction that the drug will not be toxic will be correct 95% of the time. It’s always a good idea to challenge (ask nicely) your Safety colleagues to show the link between human toxicity and the assays that they use to assess Drug Safety.

3b: I accept that there are links between toxicity and the presence of substructural elements such as nitro groups and I’d be worried about any series in which a nitro group was an absolute requirement for potency. The links between high lipophilicity and bad outcomes are not as clear as some data analysis might suggest. In particular, one should be especially wary when those presenting the data analysis are reluctant to share or plot the untransformed data. The trends present in large data sets can be highly significant without being especially strong.

19. Rock on January 7, 2012 12:50 AM writes...

@Barry #10
I see nothing wrong with a drug with a logP of 1. Unless it is an acid, the Vdss would more likely be in the vicinity of 1 L/kg, which is hardly restricted to the plasma compartment. And as for renal clearance, assuming passive transport, the GFR is only about 10% of hepatic blood flow. I would take that profile any day.
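
A rough way to put numbers on the exchange between #10 and #19, using the standard one-compartment relationship t1/2 = ln(2) x Vss / CL. The Vss values below are simply the ones quoted in the two comments, and clearance is taken as roughly GFR (about 1.8 mL/min/kg) on the assumption of purely passive renal clearance; none of this refers to any real compound.

```python
# Sketch of the half-life implications of the Vss values discussed in comments 10 and 19.
# t1/2 = ln(2) * Vss / CL (one-compartment approximation); all numbers are illustrative.
import math

GFR_ML_MIN_KG = 1.8  # approximate human glomerular filtration rate, normalized per kg

def half_life_hours(vss_l_per_kg: float, cl_ml_min_kg: float) -> float:
    """Half-life in hours from Vss (L/kg) and clearance (mL/min/kg)."""
    cl_l_hr_kg = cl_ml_min_kg * 60 / 1000.0  # convert mL/min/kg to L/h/kg
    return math.log(2) * vss_l_per_kg / cl_l_hr_kg

if __name__ == "__main__":
    for label, vss in [("comment 10: Vss = 0.15 L/kg", 0.15),
                       ("comment 19: Vss ~ 1 L/kg", 1.0)]:
        t = half_life_hours(vss, GFR_ML_MIN_KG)  # assume clearance ~ GFR (passive renal)
        print(f"{label}: t1/2 ~ {t:.1f} h")
```

Under those assumptions the low-Vss case clears in about an hour, while the ~1 L/kg case lasts several hours, which is roughly the disagreement being argued here.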

20. Bernard Munos on January 7, 2012 3:26 AM writes...

I don't cringe too much at failure in phase I and II, as some of that is to be expected. It's part of the research process. But failure in phase III is far more problematic, especially when due to lack of efficacy and safety, since no compound should move into phase III without solid evidence of both. Failure rates are well documented in the literature. (See for instance "Phase III and submission failures: 2007-2010", NRDD, vol 10, Feb 2011; Ledford H, 4 ways to fix the clinical trial, Nature Vol 477, 9/29/11; Phase II failures: 2008-2010, NRDD vol 10, May 2011.) There is also data from the EMA showing that half of phase III trials fail for lack of efficacy or safety, and 40% of what eventually reaches regulators is never approved for the same reasons. This translates into an attrition rate of 70% for phase III through regulatory decision!
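
(For reference, the 70% figure follows from compounding the two rates cited above, assuming the Phase III and regulatory-stage failures apply sequentially:

\[
1 - (1 - 0.5)\,(1 - 0.4) \;=\; 1 - 0.5 \times 0.6 \;=\; 0.70
\]

i.e. half of the candidates fail Phase III, and 40% of the survivors fail at submission.)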

Moreover, failure rates have dramatically increased in the last 20 years. In 1990, for instance, phase III failure rates were only 20% (see Ledford above). No matter how one dices the data, the message is very disturbing: there is a very large number of clinical trials, possibly the majority, that should never have been started. They involve compounds that deliver clinical benefits far too tenuous to justify putting patients at risk. Even if they complete phase III and get approved, they face payers' and physicians' skepticism, and costly marketing wars to carve out a space from look-alike competitors. The fact that many billions of dollars are spent each year on such trials speaks volumes about the sad state of current pharmaceutical research. Under pressure from eroding revenues, and unable to harness their in-house creative talent, many (but not all) pharmaceutical companies have resorted to moving 'safe', low-quality, or poorly-researched candidates into advanced trials - candidates that are doomed to fail, and are indeed failing. CEOs may parade the breadth of their pipelines on Wall Street. This may create illusions (and delusions), but it does not create new drugs. CEOs would be better off returning their companies to a true innovation culture, but for many this would mean undoing their own legacy, or that of their predecessors to whom they owe their jobs.

21. petros on January 7, 2012 4:30 AM writes...

This point was repeatedly made over the past 2 days at this meeting
http://www.thesgc.org/events/symposia/DrugDiscov2012

The speakers showed several variations of the data but ca 50% of phase II failures were attributed to lack of efficacy.

Interestingly, one speaker suggested kinase inhibitors have a better track record of making it through the clinic than most types of drugs (although the context was probably oncology, given p38 inhibitors' track record).

22. Hopeless on January 7, 2012 11:36 AM writes...

It's impossible for those who still live in the past to understand the past failures...

23. Boghog on January 7, 2012 11:41 AM writes...

@ SwedenCalling:
Thanks for the clarification. I agree that there are significant errors associated with both experimental measurements of partition coefficients and ClogP estimates. One should be skeptical of both, particularly when they disagree. We use contract labs for the experimental measurements, and when there are large discrepancies between the experimental and calculated values, we repeat the experimental measurements several times. Often we find that the replicate measurements are more in line with the ClogP estimates. Where the experimental measurements consistently differ from the ClogP estimates, we use the experimental measurements to reparameterize ClogP.
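
A minimal sketch of the kind of consistency check described here - flag compounds whose measured and calculated logP disagree by more than some tolerance, so they can be queued for repeat measurement. The 0.5 log-unit tolerance and the compound records are placeholder values for illustration, not anyone's actual protocol or data.

```python
# Sketch: flag compounds whose measured logP and ClogP estimate disagree by more than
# a tolerance, so the measurement can be repeated (or ClogP eventually reparameterized).
# The tolerance and example records are placeholders, not a real protocol or real data.
TOLERANCE = 0.5  # log units

measurements = [
    # (compound_id, measured_logP, calculated_ClogP) -- illustrative values only
    ("CPD-001", 2.1, 2.3),
    ("CPD-002", 4.8, 3.6),
    ("CPD-003", 1.2, 1.1),
]

def needs_remeasurement(measured: float, calculated: float, tol: float = TOLERANCE) -> bool:
    """True when the experimental and calculated values disagree beyond the tolerance."""
    return abs(measured - calculated) > tol

for cpd, exp_logp, clogp in measurements:
    if needs_remeasurement(exp_logp, clogp):
        print(f"{cpd}: |{exp_logp} - {clogp}| > {TOLERANCE}, queue for repeat measurement")
```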

We have had reasonably good luck with the pH-metric method. What is your experience?

24. Realist on January 7, 2012 4:15 PM writes...

I agree with Derek's post, but I'm sick to death of hearing argument 3 bandied back and forth between two groups of evangelists, one insisting "thou shalt not make a compound with ClogP 4.21" as if they think any doubter hasn't heard them a hundred times already, and the other insisting "but project X delivered a drug with ClogP 4.6" as if that constitutes statistical proof. Yes, there are real trends in the data that should be considered. But all too often it isn't possible to start from the perfect lead, and black-and-white arguments like that put by @8 Rock miss that point. It's the easy answer to think of filling the screening collection with "high probability" hits, but by definition this reduces its diversity, and could result in many projects finding no leads at all. So the real choice is whether to abandon targets where all that's found are "outlier" leads. Is that wise? It depends on so many other factors - the value and novelty of the target, the unmet need in the disease, the quality of target validation, the aim of the program (i.e. does it want a candidate, or is it really looking for a tool to validate the target in cells or animal models) and doubtless 100 other factors. Just like lead opt, this is a multi-factorial question, and trying to make rules or treat it as anything other than case-by-case, as many want to do, seems pretty naïve to me.

25. Realist on January 7, 2012 4:17 PM writes...

Yuck, what happened to my ' and " characters!

26. pete on January 7, 2012 7:01 PM writes...

@25 -- yeah, for a minute I thought it was laced with olde English and cryptography

good points, though

27. Twelve on January 8, 2012 4:39 PM writes...

You miss two important categories: 1) Efficacy and safety individually seem OK, but their ratio is not acceptable; and more importantly, 2) Safety, efficacy, and their ratio are deemed acceptable, but the agencies require more long-term data for registration, especially outcomes or survival data, depending on the indication.

28. cynical1 on January 8, 2012 5:48 PM writes...

I'm not sure I agree with your assessment, Derek. Reasonable points, but it's oversimplified. Personally, though I haven't done any statistical analysis (nor am I qualified to do so), I feel that there has been a disproportionate amount of effort in the past 15 years in therapeutic areas which already had blockbuster drugs on the market, and that this has contributed to our high failure rate. Why have kinase inhibitors yielded novel agents in oncology? Because they don't have to work very well or be very safe. And generally, they don't and they aren't.

Marketing groups dictated our scientific/therapeutic directions based on their hopelessly flawed assessment of potential sales. And, of course, that was all retrospective analysis based on sales of already marketed drugs. Well, that's just stupid. How do you know what the market potential of a drug for ALS is if there aren't any drugs to treat the condition? Granted, it's not a huge patient base but that's also because most of them die within four years. Find a drug that allows them to live 10-20 years from disease onset and your patient group just tripled or more. Of course, that simple concept is beyond the reasoning power of the marketing groups I've encountered. They count pharmacy sales and only pharmacy sales of existing agents. It's really easy so you don't have to be very smart to do it and you can buy the data and sit on your ass the rest of the year. And BTW, ALS patients couldn't give a rat's ass about the cLog P or side effects that another disease would never tolerate.

Our industry would not be in this situation if it had focused on unmet medical need and not market potential. If you make patients with inadequate treatment options get better, the money will flow. Look at the success stories in the industry. They're usually from companies going where no company had gone before therapeutically. Ask yourself how many companies targeted cystic fibrosis. Kudos to Vertex for doing so, but they're in the minority.

For what it's worth, I have seen this changing now. But the long development cycles in our industry have made it all too late, I'm afraid.

29. cliffintokyo on January 9, 2012 3:19 AM writes...

To a first approx. I would not argue with your post, but the caveat is that most generalizations are dangerous, and especially in drug discovery.
To pick up on the example of a nitro group: in a given therapeutic category, with a specialized target, it might turn out that the 'nitro' compound is a disguised prodrug that gets to the target, where it is metabolized, whereas the active entity has the wrong logP or some other adverse physicochemical property that e.g. prevents it from penetrating the cell membrane, or somesuch.
Med Chemists need to be able to imagine all kinds of scenarios for their compounds in order to be successful. Probably only a prepared mind can recognize serendipity when it occurs, and take full advantage of it.

30. Tuomas Pylkkö on January 9, 2012 4:20 AM writes...

"Granted, it's not a huge patient base but that's also because most of them die within four years. Find a drug that allows them to live 10-20 years from disease onset and your patient group just tripled or more. "

Except that ALS prevalence is at 1-2 / 100 000. Three times zero is still zero. Otherwise you have a nice point.

32. Robert on January 9, 2012 5:49 AM writes...

I think one of the problems associated with failures due to lack of efficacy is an unwillingness to invest in mechanistic studies in humans (early on).

People have a theory, the compound looks good in the animal model (or perhaps they can come up with another theory to explain an unexpected result), and then the rush (and more importantly pressure) is on to generate the absolute minimal safety and efficacy signal to get the compound into patients.

All too often it looks like it works, or it looks like it does not work, but people are not sure how or why (especially if the MoA or target is truly novel); companies talk about disease understanding and experimental medicine, but when it comes to spending money on anything that does not directly move compound X forward that is another story.

33. cynical1 on January 9, 2012 9:34 AM writes...

@ Tuomas: The prevalence (which is not the incidence that you quote) of ALS in the US is 5 per 100,000. With ~300 million people, that's about 15,000 patients. Increase the prevalence threefold by increasing lifespan and you get 45,000. Now charge those 15,000 patients $50,000 per year for their drug (and you'd get amazing uptake) and you get $750 million in US sales alone. Not Lipitor, but I'd take the money. So three times zero evidently equals $3/4 billion.
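
On the incidence-versus-prevalence point in this sub-thread: for a roughly stable disease, prevalence is approximately incidence times mean survival, which is exactly why a drug that extends survival multiplies the patient pool. Here is a quick sketch of the arithmetic in this comment; all inputs are the round numbers quoted in the thread, not epidemiological figures I have verified independently.

```python
# Sketch of the back-of-the-envelope market arithmetic in comment 33, using the
# steady-state relationship: prevalence ~= incidence * mean survival.
# All inputs are the round numbers quoted in the thread, not verified epidemiology.
US_POPULATION = 300_000_000
PREVALENCE_PER_100K = 5          # the ALS prevalence figure quoted above
PRICE_PER_PATIENT_YEAR = 50_000  # the assumed annual drug price quoted above

patients_now = US_POPULATION * PREVALENCE_PER_100K / 100_000
revenue_now = patients_now * PRICE_PER_PATIENT_YEAR

# If a drug stretched mean survival roughly 3x (e.g. ~4 years -> ~12 years),
# steady-state prevalence would scale roughly 3x as well.
patients_longer_survival = patients_now * 3
revenue_longer_survival = patients_longer_survival * PRICE_PER_PATIENT_YEAR

print(f"current patients: {patients_now:,.0f}, revenue: ${revenue_now / 1e9:.2f}B/yr")
print(f"with ~3x survival: {patients_longer_survival:,.0f}, revenue: ${revenue_longer_survival / 1e9:.2f}B/yr")
```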

34. CialisizeMe on January 9, 2012 12:16 PM writes...

It's the money that is the incentive to perform Phase 3 trials. Phase 3 trials directly result in lots of payments, such as:
-More investor money (VC/Wall St)
-Higher stock valuation
-Bonuses all around
-Interest from big pharma partners (for biotechs)

The goal for (many) Phase 3 trials is not successful completion, just the *starting* of them to trigger more $$$$. Bernard Munos also alluded to this in more subtle terms.
