In case you hadn't seen it, I wanted to highlight this post by Michael Gilman over at LifeSciVC. He's talking about risk in biotech, and tying it to the processes of generating, refining, and testing hypotheses. "The hypothesis", he says, "is one of the greatest intellectual creations of our species", and he's giving it its due.
I agree with him that time spent rethinking your hypothesis is often time well spent, whether for a single bench experiment or (most especially) a big clinical trial. You need to be sure that you're asking the right question, that you're setting it up to be answered (one way or another), and that you're going to be able to get the maximum amount of useful information when that answer comes in, be it a Yes or a No. Sometimes this setup is obvious, but by the time you get to clinical trial design, it can be very tricky indeed.
For drug discovery, Gilman says, there are generally three kinds of hypothesis:
Biological hypothesis. What buttons do we believe this molecule pushes in target cells and what happens when these buttons are pushed? What biological pathways respond?
Clinical hypothesis. When these pathways are impacted, why do we believe it will move the needle on parameters that matter to patients and physicians? How will this intervention normalize physiology or reverse pathology?
Commercial hypothesis. If the first two hypotheses are correct, why do we believe anyone will care? Why will patients, physicians, and payers want this drug? How do we expect it to stand out from the crowd?
Many are the programs that have come to grief because of some sort of mismatch between these three. Clinical trials have been run uselessly because the original drug candidate was poorly characterized. Ostensibly successful trials have come to nothing because they were set up to answer the wrong questions. And ostensibly successful drug candidates have died in the marketplace because nobody wanted them. These are very expensive mistakes, and some extra time spent staring out the window while thinking about how to avoid them could have come in handy.
Gilman goes on to make a number of other good points about managing risk - for example, any experiment that shoulders a 100% share of the risk needs to be done as cheaply as possible. I would add, as a corollary, ". . .and not one bit cheaper", because that's another way that you can mess things up. At all times, you have to have a realistic idea of where you are in the process and what you're taking on. If you can find a way to do the crucial experiment without risking too much time or money, that's excellent news. On the other end of the scale, if there's no other way to do it than to put a big part of the company down on the table, then you'd better be sure that getting the answer is going to be worth that much effort. If it is, then be sure to spend the money to do it right - you're not going to get a second shot that easily.
The article also shows how you want to manage such risks across a broader portfolio. You'd like, if possible, to have plenty of programs that are front-loaded with their major risks, the sorts of things that won't necessarily have you hopping around the room with crossed fingers while you're waiting for the Phase III data. It's impossible to take all the risk out of a Phase III, true - but if you can get some of the big questions out of the way earlier, without having to go that far, so much the better. A portfolio made up of several gigantic multiyear money furnaces - say, Alzheimer's or rheumatoid arthritis - will be something else entirely.