Corante

About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly at derekb.lowe@gmail.com, or find him on Twitter: @Dereklowe


In the Pipeline


November 13, 2012

Nassim Taleb on Scientific Discovery


Posted by Derek

There's an interesting article posted on Nassim Taleb's web site, titled "Understanding is a Poor Substitute for Convexity (Antifragility)". It was recommended to me by a friend, and I've been reading it over for its thoughts on how we do drug research. (This would appear to be an excerpt from, or summary of, some of the arguments in the new book Antifragile: Things That Gain from Disorder, which is coming out later this month).

Taleb, of course, is the author of The Black Swan and Fooled by Randomness, which (along with his opinions about the recent financial crises) have made him quite famous.

So this latest article is certainly worth reading, although much of it reads like the title, that is, written in fluent and magisterial Talebian. This blog post is being written partly for my own benefit, so that I make sure to go to the trouble of a translation into my own language and style. I've got my idiosyncrasies, for sure, but I can at least understand my own stuff. (And, to be honest, a number of my blog posts are written in that spirit, of explaining things to myself in the process of explaining them to others).

Taleb starts off by comparing two different narratives of scientific discovery: luck versus planning. Any number of works contrast those two. I'd say that the classic examples of each (although Taleb doesn't reference them in this way) are the discovery of penicillin and the Manhattan Project. Not that I agree with either of those categorizations - Alexander Fleming, as it turns out, was an excellent microbiologist, very skilled and observant, and he always checked old culture dishes before throwing them out just to see what might turn up. And, it has to be added, he knew what something interesting might look like when he saw it, a clear example of Pasteur's quote about fortune and the prepared mind. On the other hand, the Manhattan Project was a tremendous feat of applied engineering, rather than scientific discovery per se. The moon landings, often used as a similar example, are exactly the same sort of thing. The underlying principles of nuclear fission had been worked out; the question was how to purify uranium isotopes to the degree needed, and then how to bring a mass of the stuff together quickly and cleanly enough. These processes needed a tremendous amount of work (it wasn't obvious how to do either one, and multiple approaches were tried under pressure of time), but the laws of (say) gaseous diffusion were already known.

But when you look over the history of science, you see many more examples of fortunate discoveries than you see of planned ones. Here's Taleb:

The luck versus knowledge story is as follows. Ironically, we have vastly more evidence for results linked to luck than to those coming from the teleological, outside physics —even after discounting for the sensationalism. In some opaque and nonlinear fields, like medicine or engineering, the teleological exceptions are in the minority, such as a small number of designer drugs. This makes us live in the contradiction that we largely got here to where we are thanks to undirected chance, but we build research programs going forward based on direction and narratives. And, what is worse, we are fully conscious of the inconsistency.

"Opaque and nonlinear" just about sums up a lot of drug discovery and development, let me tell you. But Taleb goes on to say that "trial and error" is a misleading phrase, because it tends to make the two sound equivalent. What's needed is an asymmetry: the errors need to be as painless as possible, compared to the payoffs of the successes. The mathematical equivalent of this property is called convexity; a nonlinear convex function is one with larger gains than losses. (If they're equal, the function is linear). In research, this is what allows us to "harvest randomness", as the article puts it.
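To make that asymmetry concrete, here's a toy simulation (a sketch of my own, with invented numbers, not anything from Taleb's article). Cap what each failed experiment can cost, leave the occasional success much larger, and blind trial and error comes out ahead on average even with a 95% failure rate:

```python
import random

def run_trials(n_trials, loss, gain, p_success, seed=0):
    """Total payoff from n_trials independent experiments:
    each failure costs `loss`, each success pays `gain`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        if rng.random() < p_success:
            total += gain
        else:
            total -= loss
    return total

# Symmetric ("linear") payoffs: wins and losses the same size.
linear = run_trials(1000, loss=1, gain=1, p_success=0.5)

# Convex payoffs: failures stay cheap, the rare success pays big.
convex = run_trials(1000, loss=1, gain=100, p_success=0.05)
```

The convex program profits even though almost everything it tries fails; the symmetric one just treads water. The asymmetry, not the hit rate, is what makes the randomness worth harvesting.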

An example of such a process is biological evolution: most mutations are harmless and silent. Even the harmful ones will generally just kill off the one organism with the misfortune to bear them. But a successful mutation, one that enhances survival and reproduction, can spread widely. The payoff is much larger than the downside, and the mutations themselves come along for free, since some looseness is built into the replication process. It's a perfect situation for blind tinkering to pay off: the winners take over, and the losers disappear.

Taleb goes on to say that "optionality" is another key part of the process. We're under no obligation to follow up on any particular experiment; we can pick the one that worked best and toss the rest. This has its own complications, since we have our own biases and errors of judgment to contend with, as opposed to the straightforward questions of evolution ("Did you survive? Did you breed?"). But overall, it's an important advantage.
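What optionality buys can be sketched the same way (again, my own illustration with arbitrary numbers): run experiments whose average value is zero, keep only the best of each batch and toss the rest, and the expected value of what you keep grows with the number of tries:

```python
import random

def best_of(n_tries, rng):
    """Run n_tries experiments and keep only the best result --
    discarding the failures is free, which is the 'option'."""
    return max(rng.gauss(0.0, 1.0) for _ in range(n_tries))

rng = random.Random(42)

# Average value of the best result, over 2000 repeats, as the
# number of parallel tries grows. A single experiment is worth
# zero on average; the best of a batch is not.
avg_best = {n: sum(best_of(n, rng) for _ in range(2000)) / 2000
            for n in (1, 5, 25)}
```

Note that nothing about the individual experiments improved; all of the gain comes from the right to discard.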

The article then introduces the "convexity bias", which is defined as the difference between a system with equal benefit and harm for trial and error (linear) and one where the upsides are higher (nonlinear). The greater the split between those two, the greater the convexity bias, and the more volatile the environment, the greater the bias as well. This is where Taleb introduces another term, "antifragile", for phenomena that have this convexity bias, because they're equipped to actually gain from disorder and volatility. (His background in financial options is apparent here). What I think of at this point is Maxwell's demon, extracting useful work from randomness by making decisions about which molecules to let through his gate. We scientists are, in this way of thinking, members of the same trade union as Maxwell's busy creature, since we're watching the chaos of experimental trials and natural phenomena and letting pass the results we find useful. (I think Taleb would enjoy that analogy). The demon is, in fact, optionality manifested and running around on two tiny legs.
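The convexity bias has a standard mathematical counterpart, Jensen's inequality: for a convex payoff f, the average payoff E[f(x)] exceeds the payoff at the average, f(E[x]), and the gap widens as the volatility of x goes up. A quick Monte Carlo check (my own sketch; the capped-loss payoff is an arbitrary stand-in for a research program whose downside is bounded):

```python
import random

def convexity_bias(f, sigma, n=100_000, seed=1):
    """Monte Carlo estimate of E[f(x)] - f(E[x]) for x ~ N(0, sigma).
    Jensen's inequality says this is positive for convex f."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, sigma) for _ in range(n)]
    mean_of_f = sum(f(x) for x in xs) / n
    return mean_of_f - f(0.0)  # f evaluated at the mean of x

def payoff(x):
    """A convex payoff: losses capped at -1, gains unbounded."""
    return max(x, -1.0)

low_vol = convexity_bias(payoff, sigma=0.5)
high_vol = convexity_bias(payoff, sigma=2.0)
```

The estimated bias is tiny at low volatility and much larger at high volatility - exactly the sense in which a convex payoff gains from disorder.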

Meanwhile, a more teleological (that is, aimed and coherent) approach is damaged under these same conditions. Uncertainty and randomness mess up the timelines and complicate the decision trees, and it just gets worse and worse as things go on. It is, by these terms, fragile.

Taleb ends up with seven rules that he suggests can guide decision making under these conditions. I'll add my own comments to these in the context of drug research.

(1) Under some conditions, you'd do better to improve the payoff ratio than to try to increase your knowledge about what you're looking for. One way to do that is to lower the cost-per-experiment, so that a relatively fixed payoff then is larger in comparison. The drug industry has realized this, naturally: our payoffs are (in most cases) somewhat out of our control, although the marketing department tries as hard as possible. But our costs per experiment range from "not cheap" to "potentially catastrophic" as you go from early research to Phase III. Everyone's been trying to bring down the costs of later-stage R&D for just these reasons.
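Rule (1) is ultimately just arithmetic, but it's worth writing down. With the prize and the odds held fixed (all the numbers below are invented), the cost per experiment is the lever that decides whether the whole program has positive expected value:

```python
def expected_return(payoff, p_success, cost_per_trial, n_trials):
    """Expected net return of a research program: fixed prize,
    long odds, and a per-experiment cost that we can try to lower."""
    return n_trials * (p_success * payoff - cost_per_trial)

# Same prize, same odds; only the cost per experiment changes.
losing = expected_return(payoff=100, p_success=0.01, cost_per_trial=2.0, n_trials=50)
winning = expected_return(payoff=100, p_success=0.01, cost_per_trial=0.5, n_trials=50)
```

Cutting the cost per experiment flips the program from losing money to making it, without anyone getting any smarter about which experiments to run.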

(2) A corollary is that you're better off with as many trials as possible. Research payoffs, as Taleb points out, are very nonlinear indeed, with occasional huge winners accounting for a disproportionate share of the pool. If we can't predict these - and we can't - we need to make our nets as wide as possible. This one, too, is appreciated in the drug business, but it's a constant struggle on some scales. In the wide view, this is why the startup culture here in the US is so important, because it means that a wider variety of ideas are being tried out. And it's also, in my view, why so much M&A activity has been harmful to the intellectual ecosystem of our business - different approaches have been swallowed up, and then they disappear as companies decide, internally, on the winners.

And inside an individual company, portfolio management of this kind is appreciated, but there's a limit to how many projects you can keep going. Spread yourself too thin, and nothing will really have a chance of working. Staying close to that line - enough projects to pick up something, but not so many as to starve them all - is a full-time job.

(3) You need to keep your "optionality" as strong as possible over as long a time as possible - that is, you need to be able to hit a reset button and try something else. Taleb says that plans ". . .need to stay flexible with frequent ways out, and counter to intuition, be very short term, in order to properly capture the long term. Mathematically, five sequential one-year options are vastly more valuable than a single five-year option." I might add, though, that they're usually priced accordingly (and as Taleb himself well knows, looking for those moments when they're not priced quite correctly is another full-time job).
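Taleb's five-options claim can be checked with a toy random-walk model (my own sketch - it ignores real option pricing entirely, and the "walk away for free each year" assumption is doing all the work):

```python
import random

def five_year_option(rng, years=5):
    """One five-year option: commit for the whole walk, then
    exercise at the end only if the project ended up positive."""
    value = sum(rng.gauss(0.0, 1.0) for _ in range(years))
    return max(value, 0.0)

def five_one_year_options(rng, years=5):
    """Five sequential one-year options: after each year, keep the
    result if it was good and walk away (for free) if it wasn't."""
    return sum(max(rng.gauss(0.0, 1.0), 0.0) for _ in range(years))

rng = random.Random(7)
n = 20_000
sequential = sum(five_one_year_options(rng) for _ in range(n)) / n
single = sum(five_year_option(rng) for _ in range(n)) / n
```

The sequential holder harvests every good year and discards every bad one; the five-year holder has to let good and bad years cancel before making a single decision at the end.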

(4) This one is called "Nonnarrative Research", which means the practice of investing with people who have a history of being able to do this sort of thing, regardless of their specific plans. And "this sort of thing" generally means a lot of that third recommendation above, being able to switch plans quickly and opportunistically. The history of many startup companies will show that their eventual success often didn't bear as much relation to their initial business plan as you might think, which means that "sticking to a plan", as a standalone virtue, is overrated.

At any rate, the recommendation here is not to buy into the story just because it's a good story. I might draw the connection here with target-based drug discovery, which is all about good stories.

(5) Theory comes out of practice, rather than practice coming out of theory. Ex post facto histories, Taleb says, often work the story around to something that looks more sensible, but his claim is that in many fields, "tinkering" has led to more breakthroughs than attempts to lay down new theory. His reference is to this book, which I haven't read, but is now on my list.

(6) There's no built-in payoff for complexity (or for making things complex). "In academia," though, he says, "there is". Don't, in other words, be afraid of what look like simple technologies or innovations. They may, in fact, be valuable, but have been ignored because of this bias towards the trickier-looking stuff. What this reminds me of is what Philip Larkin said he learned by reading Thomas Hardy: never be afraid of the obvious.

(7) Don't be afraid of negative results, or paying for them. The whole idea of optionality is finding out what doesn't work, and ideally finding that out in great big swaths, so we can narrow down to where the things that actually work might be hiding. Finding new ways to generate negative results more quickly and cheaply, which can mean new ways to recognize them earlier, is very valuable indeed.

Taleb finishes off by saying that people have criticized such proposals as the equivalent of buying lottery tickets. But lottery tickets, he notes, are terribly overpriced, because people are willing to overpay for a shot at a big payoff on long odds. Moreover, lotteries have a fixed upper bound, whereas R&D's upper bound is completely unknown. And Taleb gets back to his financial-crisis background by noting that the history of banking and finance demonstrates the folly of betting against long shots ("What are the odds of this strategy suddenly going wrong?"), and that in this sense, research is a form of reverse banking.

Well, those of you out there who've heard the talk I've been giving in various venues (and in slightly different versions) the last few months may recognize that point, because I have a slide that basically says that drug research is the inverse of Wall Street. In finance, you try to lay off risk, hedge against it, amortize it, and go for the steady payoff strategies that (nonetheless) once in a while blow up spectacularly and terribly. Whereas in drug research, risk is the entire point of our business (a fact that makes some of the business-trained people very uncomfortable). We fail most of the time, but once in a while have a spectacular result in a good direction. Wall Street goes short risk; we have to go long.

I've been meaning to get my talk up on YouTube or the like, and this should force me to finally get that done. Perhaps this weekend, or over the Thanksgiving break, I can put it together. I think it fits in well with what Taleb has to say.

Comments (27) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why


COMMENTS

1. Anon on November 13, 2012 10:29 AM writes...

I think you should modify that comparison to include one more situation.

fortunate discoveries vs. planned discoveries vs. incremental advances in a known direction

Before the discovery phase you might think of these experiments as

not likely vs. unlikely but worth looking into vs. very likely

and then to me it makes sense that the "not likely" experiment that results in a "fortunate discovery" is going to be the loudest event to happen in that field.


2. NoDrugsNoJobs on November 13, 2012 11:17 AM writes...

This was a really thoughtful post, thanks. Working for decades with medchemists I have come to appreciate their often rigorous yes-no type logic circuits and I have often intuited that we all too often close ourselves off to the true paths to discovery. In a sense, I so many times have thought that at the very bottom, we really know very little and pretending otherwise can close us to possibilities of continued discovery. In other words, our own logical expectations and models that are often based on a few data points that we have chosen to connect in a certain way rather than various other possibilities can blind us. The great thing is that at the end of it all we are an empirical science so we are always continuing to build our models and revise our models and success can be obtained ultimately - perhaps, despite our models. However, what you posit here has the potential to take our yields beyond the current scale through an ab initio recognition that we really don't know and thus design our experiments to yield maximum information. I think I like it.


3. barry on November 13, 2012 11:52 AM writes...

as to point one "...you'd do better to improve the payoff ratio than to try to increase your knowledge about what you're looking for..." This is exactly what Big Pharma has not done over the last fifteen years. Ever more tests early on (hERG binding, photosensitivity, Caco2...) try to predict the outcome but make every trial more expensive.


4. DCRogers on November 13, 2012 12:10 PM writes...

The entire field would be better off if we followed a slightly-altered version of (7): "Don't be afraid to release negative results".

It's a tragedy-of-the-commons problem, really -- there's no advantage for me to publish my own negative results, but there's lots of advantage for everyone if everyone does.


5. Curious Wavefunction on November 13, 2012 12:54 PM writes...

Great post! Trying to increase your knowledge about what you are looking for doesn't work for a simple reason; a lot of times you don't *know* what you are looking for. That's why we need to have more curiosity-driven, exploratory research even for discovering new drugs.


6. Anonymous on November 13, 2012 12:56 PM writes...

The bit about complexity reminded me - a former boss of mine had a BS in biology at a company where almost everyone at his level was a PhD chemist. He used to come up with the most simple, obvious ideas that no one else thought to try, and made the company lots of money on several occasions. The PhD's came up with ideas that were much more scientifically interesting than his, but seldom actually led to a new product!


7. PTM on November 13, 2012 2:07 PM writes...

"But when you look over the history of science, you see many more examples of fortunate discoveries than you see of planned ones."

I don't. I see planned incremental advancements dominating by far.


8. squib on November 13, 2012 2:16 PM writes...

"Characterize everything"
-EJ Corey


9. Chemjobber on November 13, 2012 3:48 PM writes...

@8: Was that an ironic reference?


10. Hap on November 13, 2012 4:15 PM writes...

Evolution, though, used (uses) an awful lot of time to go where it does. Time isn't a selection pressure, in most cases. Even in science, finding interesting things is (or used to be?) not particularly time-pressured - and when it is, it doesn't work as well (like rushing a three-year-old).

Drug design, or other businesses dependent on scientific advances are different in that they have a limited timeframe - they are partly limited in the numbers of shots they can take because they only have so much money, and investors want their money to make money at a given rate. I don't know if I should take home the same lesson as from rushing children (because you can't rush finding things or you won't find anything), or if there is a real problem in that businesses have to make some choices in where to look for solutions and they can't succeed.


11. RM on November 13, 2012 4:38 PM writes...

But when you look over the history of science, you see many more examples of fortunate discoveries than you see of planned ones.

Is that really the case, or is it just that fortunate discoveries tend to dominate the narrative?

For example, "Botanist on holiday in the Caribbean discovers new orchid species in his hotel room bouquet" makes the evening news and is a nice story that is sure to be brought up whenever the botanist or the orchid is discussed, whereas something like "Botanist and four grad students spend six months cataloging plants on island chain, discover ten new orchids, six new lilies, and three new irises" makes a specialist journal and is promptly forgotten.

Even existing narratives tend to emphasize the luck aspect of a discovery, and downplay the planned portions. Again, take Alexander Fleming. If you go by the common narrative, he was a messy scatterbrain, rather than a careful observer who got lucky. I bet if you dig into other serendipity narratives, you'll see the same thing. A small moment of luck which overshadows the planning and hard work needed to make it worth talking about.

That said, I'm not necessarily going to counter the main thrust of Taleb's argument, which appears to boil down to "rapidly try a bunch of stuff, as you never know what's going to work."


12. Josh on November 13, 2012 4:51 PM writes...

Reading through the full article, a line strikes me as incomplete:
"...genetic mutations come at no cost and are retained only if they are an improvement."

This does not take into account two items that are becoming very important to understanding disease: 1)de-novo mutations and 2) epigenetic mutations from environment.

The optionality of de-novo/epigenetic mutations does not exist and these can be (and usually are) passed on to offspring - think schizophrenia, depression, bipolar.

I am just starting to read Taleb's material, so I don't know if he covers analogies like these elsewhere, but I think it may be of benefit for a mind such as his to take a swing at it.


13. David Formerly Known as a Chemist on November 13, 2012 5:28 PM writes...

Oh my god, trying to read that article produced a severe brain cramp! Taleb may be a brilliant man, but he obviously uses his thesaurus liberally when writing. It's one thing to have unique insights, quite another to be able to effectively communicate them.

"The convexity bias, unlike serendipity et al., can be defined, formalized, identified, even on the occasion measured scientifically, and can lead to a formal policy of decision making under
uncertainty, and classify strategies based on their ex ante predicted efficiency and projected success, as we will do next with the following 7 rules."

WTF?? Talk about "complexification". Man, give me a reaction mechanism to figure out!


14. anonymous on November 13, 2012 6:52 PM writes...

Maybe I'm stupid, but what the hell is Talebian? To quote the movie Cabin Boy, "I don't speak Spanish.... must be a fancy word for chum."


15. JT on November 13, 2012 7:47 PM writes...

Derek + posters: for those interested, here (http://www.mdpi.com/2073-4425/2/4/998) is a nice review on observations where the principles of "antifragility" can be seen in the context of cellular biology/engineering.


16. Genom on November 14, 2012 2:09 AM writes...

I liked the math in Nassim Taleb's article (if you follow the link). His conclusions are just spicy narrative, however; for example:
"In some opaque and nonlinear fields, like medicine or engineering, the teleological exceptions are in the minority, such as a small number of designer drugs. This makes us live in the contradiction that we largely got here to where we are thanks to undirected chance, but we build research programs going forward based on direction and narratives."
Medicine and engineering are not opaque. And incremental advancement, and directed chance, is really how we got from A->B----->Z. Pasteur and Koch were very methodical, and without them, and their assistants, there wouldn't be bacterial or fungal cultures at all. So - if you're an investor - "choose wisely". Risk is good, but relying on blind luck and nonlinearity to make the discoveries is ridiculous.


17. milkshake on November 14, 2012 4:27 AM writes...

I guess he is talking to business people who pretend to understand drug discovery and claim to possess a sure-fire reorganization scheme for introducing four INDs per year, every year. Who know in advance which projects will result in the next blockbuster drug...

I wish he could cut the pretentious gorp and write like normal people do, when popularizing science concepts to nonexperts, but I guess he would not be taken seriously by his target audience if his writings were short and to the point


18. Nekekami on November 14, 2012 7:18 AM writes...

In response to your use of the Luck Favours the Prepared mindset, keep in mind one of the truisms in military science: A plan never survives first contact with the enemy, which can be understood as "No matter how much and what you plan for, there will always be something that will surprise you, and you need to be able to adapt to that"

That is in fact what separates good officers from the mediocre or bad, and has strong similarities with engineering: A good understanding of theories, but also a good understanding of the real world, and the ability to make it all work...

Also reminds me of the sign in the wood&metal crafts classroom:
"Theory is when nothing works and everyone knows why. Practice is when everything works and noone knows why. In this room we combine the two: Nothing works, and noone knows why"


19. noko marie on November 14, 2012 7:31 AM writes...

"Better lucky than good!"


20. ogirf m. on November 14, 2012 8:17 AM writes...

I add a short prayer for some of my most critical experiments.


21. SteveM on November 14, 2012 8:31 AM writes...

Re: #20. ogirf m.

When I was in college, a Chem grad student had a list of tactics posted on the door. Including:

"Don't just pray for miracles, depend on them."

P.S. along with "First draw the curve you want and then plot the data points." (pre-computer)


22. Dr. Manhattan on November 14, 2012 12:48 PM writes...

"Maybe I'm stupid, but what the hell is Talebian?"

See the post directly above yours. Talebian is the liberal sprinkling of as many different thesaurus terms as possible, to make a clear statement as obscure and seemingly erudite as possible.


23. Jim on November 14, 2012 5:34 PM writes...

I've read (OK, I've read about half of) The Black Swan. As intriguing as his work is, I want to send him the best bumper sticker I've ever seen:

Eschew obfuscation

He might be the first person to not get the joke because he can't think of simpler synonyms.


24. anonymous on November 14, 2012 7:39 PM writes...

@22:

My Post #14

Sorry. My sense of humor doesn't come across well when typed.


25. Dtx on November 21, 2012 1:06 PM writes...

If you haven't read, I highly recommend Taleb's books mentioned by Derek, i.e., The Black Swan & Fooled by Randomness. Taleb is highly critical of the field of risk assessment, but in an very insightful way. I teach risk assessment to grad students & think Taleb's perspectives are invaluable. (I work in pharma as well).

Despite that I'm a biologist at heart, Taleb even gave me a new perspective on evolution, i.e., it's survival of the fittest under the CURRENT conditions. The less often a species experiences an extreme event, the less well it can handle it (similarly, he points out, the same is true of our financial systems - models can work well under current conditions. Then an unexpected event causes the models to crash).

He gives the example of an island with 2 species. The most fit species forces the least fit one out of the lush lowlands and into the barren mountains. The most fit species prospers in the lowlands. Then a Tsunami comes by and wipes out the "most fit species."

Peter Bernstein, the author of the best book on risk, "Against the Gods," said that you have to stick with "Fooled by Randomness." He's right. At first Taleb is slightly annoying, but once you get into the book, you'll find it gives you completely new insight into the world. I've not yet read Antifragile, but I suspect Derek's long post is a reaction to how Taleb can stimulate one's thinking.

Yes, Taleb takes an investment of time, but when you are done, you'll see the world from a different perspective.


26. x on November 24, 2012 4:28 PM writes...

Another clumsy and sophomoric walk through the philosophy of science, and the scientific method, with Nassim Taleb. I have no idea how this man gets an audience. Pr(Black Swan) = Pr(Swans)*Pr(Similar animals come in different colours). Read Hume, Laplace, Ramsey, de Finetti, or Wesley Salmon as a good introduction.


27. Big Freddie on November 25, 2012 5:33 PM writes...

Hmm...but putting more eggs in the "semi big science" basket, multisite NSF grants, "Centers of Some Excellence", modENCODE or "putting all the money at the Too Big To Fail universities" etc. makes sense...Money, Fear and Power are driving U.S. science to go whale fishing with goldfish nets....maybe more labs need to get $100,000 grants and fewer labs $1 million grants...right. I feel bad for med schools, mostly has beens or never was-es but everyone can point to C.V.s that harvested 5 to 12 million bucks from the federal coffers...too bad it all ended up with such poor citation rates.

