There's a new paper in Nature Reviews Drug Discovery that tries to find out what factors about a company influence its research productivity. This is a worthy goal, but one that's absolutely mined with problems in gathering and interpreting the data. The biggest one is the high failure rate that afflicts everyone in the clinic: you could have a company that generates a lot of solid ideas, turns out good molecules, gets them into humans with alacrity, and still ends up looking like a failure because of mechanistic problems or unexpected toxicity. You can shorten those odds, for sure (or lengthen them!), but you can never really get away from that problem, or not yet.
The authors have a good data set to work from, though:
It is commonly thought that small companies have higher research and development (R&D) productivity compared with larger companies because they are less bureaucratic and more entrepreneurial. Indeed, some analysts have even proposed that large companies exit research altogether. The problem with this argument is that it has little empirical foundation. Several high-quality analyses comparing the track record of smaller biotechnology companies with established pharmaceutical companies have concluded that company size is not an indicator of success in terms of R&D productivity1, 2.
In the analysis presented here, we at The Boston Consulting Group examined 842 molecules over the past decade from 419 companies, and again found no correlation between company size and the likelihood of R&D success. But if size does not matter, what does?
Those 842 molecules cover the period 2002-2011, and of them, 205 made it to regulatory approval.
(Side note: does this mean that the historical 90% failure rate no longer applies? Update: turns out that's the number of compounds that made it through Phase I, which sounds more like it). There were plenty of factors that seemed to have no discernible influence on success - company size, as mentioned, public versus private financing, most therapeutic area choices, market size for the proposed drug or indication, location in the US, Europe, or Asia, and so on. In all these cases, the size of the error bars leaves one unable to reject the null hypothesis (variation due to chance alone).
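To make that error-bar point concrete, here's a quick back-of-the-envelope sketch in Python. The counts are made up for illustration (they are not the paper's actual data); the point is that with buckets of this size, the 95% confidence intervals on the success rates overlap, so you can't rule out chance as the explanation for any difference:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical counts (NOT from the paper): approvals per molecules
# for two company-size buckets.
small_lo, small_hi = wilson_interval(22, 90)    # "small companies"
large_lo, large_hi = wilson_interval(30, 115)   # "large companies"

# The intervals overlap, so the apparent difference in success
# rates is indistinguishable from chance at this sample size.
overlap = small_lo < large_hi and large_lo < small_hi
print((small_lo, small_hi), (large_lo, large_hi), overlap)
```

This is, of course, only the crudest version of the comparison the paper would have run, but it shows why a few hundred molecules spread across many subgroups leaves most factors statistically indistinguishable.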
What factors do look like more than chance? The far ends of the therapeutic area choice, for one (CNS versus infectious disease, and these two only). But all the other indicators are a bit fuzzier. Publications (and patents) per R&D dollar spent are a positive sign, as is the experience (time-in-office) of the R&D heads. A higher termination rate in preclinical and Phase I correlated with eventual success, although I wonder if that's also a partial proxy for desperation: companies with no other option but to push on and hope for the best (see below for more on this point). A bit weirdly, frequent mention of ROI and of the phrase "decision making" actually correlated positively, too.
The authors interpret most or all of these as proxy measurements of "scientific acumen and good judgement", which is a bit problematic. It's very easy to fall into circular reasoning that way - you can tell that the companies that succeeded had good judgement, because their drugs succeeded, because of their good judgement. But I can see the point, which is what most of us already knew: that experience and intelligence are necessary in this business, but not quite sufficient. And they have some good points to make about something that would probably help:
A major obstacle that we see to achieving greater R&D productivity is the likelihood that many low-viability compounds are knowingly being progressed to advanced phases of development. We estimate that 90% of industry R&D expenditures now go into molecules that never reach the market. In this context, making the right decision on what to progress to late-stage clinical trials is paramount in driving productivity. Indeed, researchers from Pfizer recently published a powerful analysis showing that two-thirds of the company's Phase I assets that were progressed could have been predicted to be likely failures on the basis of available data3. We have seen similar data privately as part of our work with many other companies.
Why are so many such molecules being advanced across the industry? Here, a behavioural perspective could provide insight. There is a strong bias in most R&D organizations to engage in what we call 'progression-seeking' behaviour. Although it is common knowledge that most R&D projects will fail, when we talk to R&D teams in industry, most state that their asset is going to be one of the successes. Positive data tends to go unquestioned, whereas negative data is parsed, re-analysed, and, in many cases, explained away. Anecdotes of successful molecules saved from oblivion often feed this dynamic. Moreover, because it is uncertain which assets will fail, the temptation is to continue working on them. This reaction is not surprising when one considers that personal success for team members is often tied closely to project progression: it can affect job security, influence within the organization and the ability to pursue one's passion. In this organizational context, progression-seeking behaviour is entirely rational.
Indeed it is. The sunk-cost fallacy should be added in there too - the "We've come so far, we can't quit now" thinking that has (in retrospect) led so many people into the tar pit. But they're right: many places end up built to check the boxes and hit the targets, not necessarily to get drugs out the door. If your organization's incentives are misaligned, the result is like trying to drive a nail by hitting it at an angle instead of straight on: all that force, used to mess things up.