You're running a drug company, and you have a new product coming out. How much of it do you expect to sell? That sounds like a simple question to answer, but it's anything but, as a new paper in Nature Reviews Drug Discovery (from people at McKinsey, no less) makes painfully clear.
Given the importance of forecasting, the authors set out to investigate three questions. First, how good have drug forecasts been historically? And, more specifically, how good have sell-side analysts' estimates been at predicting the future? Second, what type of error has typically driven the misses? Third, have any types of drugs been historically easier or harder to forecast?
The answer to the first question is "Not very good at all". They looked at drug launches from 2002-2011, a period that furnished hundreds of sales forecasts to work from. Over 60% of the consensus forecasts were wrong by 40% or more. Stop and think about that for a minute - and if you're in the industry, stop and think about the times you've seen these predictions made inside your own company. Remember how polished the PowerPoint slides were? How high up in the organization the person presenting them was? How confident their voice was as they showed the numbers? All for nothing. If these figures had honest error bars on them, they'd stretch up and down the height of any useful chart. I'm reminded of what Fred Schwed had to say in Where Are the Customers' Yachts? about stock market forecasts: "Concerning these predictions, we are about to ask: 1. Are they pretty good? 2. Are they slightly good? 3. Are they any damn good at all? 4. How do they compare with tomorrow's weather prediction you read in the paper? 5. How do they compare with the tipster horse race services?".
As you can see from the figure, the distribution of errors is quite funny-looking. If you start from the left-hand lowball side, you think you're going to be looking at a rough Gaussian curve, but then wham - it drops off, until you get to the wildly overoptimistic bin, which shows you that there's a terribly long tail stretching into the we're-gonna-be-rich category. This chart says a lot about human psychology and our approach to risk, and nothing it says is very complimentary. In case you're wondering, CNS and cardiovascular drugs tended to be overestimated compared to the average, and oncology drugs tended to be underestimated. That latter effect is likely due to underestimating the chances of new indications being approved.
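If you're curious how a shape like that can fall out of otherwise well-behaved errors, here's a quick toy simulation - mine, not the paper's, and the numbers are invented. The assumption is simply that forecast-to-actual ratios are symmetric on a log scale: plotted as percent errors on a linear axis, that gives you a distribution bounded at -100% on the pessimistic side and long-tailed on the optimistic one, with a catch-all top bin that spikes just like the one in the chart. (The real data also skew optimistic overall, which this sketch doesn't try to capture.)

```python
import numpy as np

# Toy illustration only - invented numbers, not the paper's data.
# Assumption: forecast/actual ratios are log-normal, i.e. symmetric
# on a log scale. The sigma here is eyeballed so that roughly 60%
# of errors exceed 40%, in the spirit of the paper's headline figure.
rng = np.random.default_rng(0)
ratios = rng.lognormal(mean=0.0, sigma=0.8, size=10_000)  # forecast / actual
errors = (ratios - 1) * 100                               # forecast error, %

# Bucket like the paper's chart: errors below -100% are impossible,
# but the upside is unbounded, so the last bin is a catch-all.
bins = [-100, -80, -60, -40, -20, 0, 20, 40, 60, 80, 100, 10_000]
counts, _ = np.histogram(errors, bins=bins)
for lo, hi, n in zip(bins[:-1], bins[1:], counts):
    label = f"{lo}% to {hi}%" if hi < 10_000 else f"over {lo}%"
    print(f"{label:>14}  {'#' * (n // 100)}")

share_off = np.mean(np.abs(errors) >= 40) * 100
print(f"\nShare off by 40% or more: {share_off:.0f}%")
```

Run it and you get the same lopsided picture: a rough hump on the lowball side, a steady drop-off, and then a tall we're-gonna-be-rich bin at the end where the whole right tail piles up.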
Now, those numbers are all derived from forecasts in the year before the drugs launched. But surely things get better once the products are out on the market? Well, there was a trend toward lower errors, certainly, but the forecasts were still (for example) off by 40% five years after launch. The authors also say that forecasts for later drugs in a particular class were no more accurate than the ones for the first-in-class compounds. All of this really, really makes a person want to ask whether all the time and effort that goes into this process is doing anyone any good at all.
Writing at Forbes, David Shaywitz (who also draws some lessons here from Taleb's Antifragile) doesn't seem to think that it is, but he doesn't think that anyone is going to want to hear about it:
Unfortunately, the new McKinsey report is unlikely to matter very much. Company forecasters will say their own data are better, and will point to examples of forecasts that happen to get it right. They will emphasize the elaborate methodologies they use, and the powerful algorithms they employ (all real examples from my time in the industry). Consultants, too, will continue to insist they can do it better.
And indeed, one of the first comments to show up on his piece was from someone who appears to be doing just that. In fact, rather than showing any shame about these numbers, plenty of people will see them as a marketing opportunity. But why should anyone believe the pitch? I think that this conclusion from the NRDD paper is a lot closer to reality:
Beware the wisdom of the crowd. The 'consensus' consists of well-compensated, focused professionals who have many years of experience, and we have shown that the consensus is often wrong. There should be no comfort in having one's own forecast being close to the consensus, particularly when millions or billions of dollars are on the line in an investment decision or acquisition situation.
The folks at Popular Science should take note of this. McKinsey has apparently joined the "War on Expertise"!