Corante

About this Author
College chemistry, 1983

Derek Lowe, The 2002 Model

After 10 years of blogging. . .

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly (derekb.lowe@gmail.com) or find him on Twitter: Dereklowe

Chemistry and Drug Data:
Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigillata
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

Category Archives


February 20, 2014

The NIH Takes a Look At How the Money's Spent


Posted by Derek

The NIH is starting to wonder what bang for the buck it gets for its grant money. That's a tricky question at best - some research takes a while to make an impact, and the way that discoveries can interact is hard to predict. And how do you measure impact, by the way? These are all worthy questions, but here, apparently, is how things are being approached:

Michael Lauer's job at the National Institutes of Health (NIH) is to fund the best cardiology research and to disseminate the results rapidly to other scientists, physicians, and the public. But NIH's peer-review system, which relies on an army of unpaid volunteer scientists to prioritize grant proposals, may be making it harder to achieve that goal. Two recent studies by Lauer, who heads the Division of Cardiovascular Sciences at NIH's National Heart, Lung, and Blood Institute (NHLBI) in Bethesda, Maryland, raise some disturbing questions about a system used to distribute billions of dollars of federal funds each year.

Lauer recently analyzed the citation record of papers generated by nearly 1500 grants awarded by NHLBI to individual investigators between 2001 and 2008. He was shocked by the results, which appeared online last month in Circulation Research: The funded projects with the poorest priority scores from reviewers garnered just as many citations and publications as those with the best scores. That was the case even though low-scoring researchers had been given less money than their top-rated peers.

I understand that citations and publications are measurable, while most other ways to gauge importance aren't. But that doesn't mean that they're any good, and I worry that the system is biased enough already towards making these the coin of the realm. This sort of thing worries me, too:

Still, (Richard) Nakamura is always looking for fresh ways to assess the performance of study sections. At the December meeting of the CSR advisory council, for example, he and Tabak described one recent attempt that examined citation rates of publications generated from research funded by each panel. Those panels with rates higher than the norm—represented by the impact factor of the leading journal in that field—were labeled "hot," while panels with low scores were labeled "cold."

"If it's true that hotter science is that which beats the journals' impact factors, then you could distribute more money to the hot committees than the cold committees," Nakamura explains. "But that's only if you believe that. Major corporations have tried to predict what type of science will yield strong results—and we're all still waiting for IBM to create a machine that can do research with the highest payoff," he adds with tongue in cheek.

"I still believe that scientists ultimately beat metrics or machines. But there are serious challenges to that position. And the question is how to do the research that will show one approach is better than another."

I'm glad that he doesn't seem to be taking this approach completely seriously, but others may. If only impact factors and citation rates were real things that advanced human knowledge, instead of games played by publishers and authors!
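The labeling rule Nakamura describes is simple enough to write down explicitly, which makes its arbitrariness easier to see. Here's a minimal sketch, with invented panel names and numbers standing in for real NIH data:

```python
# A sketch of the "hot"/"cold" labeling Nakamura describes: a study section
# is "hot" if papers from its funded grants are cited at a rate above the
# impact factor of the leading journal in its field. All values are invented.

panels = {
    # panel name: (mean citations per funded paper, leading journal's impact factor)
    "Panel A": (14.2, 11.0),
    "Panel B": (8.5, 11.0),
    "Panel C": (16.0, 17.5),
}

for name, (citation_rate, benchmark) in panels.items():
    label = "hot" if citation_rate > benchmark else "cold"
    print(f"{name}: {citation_rate:.1f} citations/paper vs. benchmark {benchmark:.1f} -> {label}")
```

Written out this way, the whole scheme visibly hangs on one assumption: that the field's top journal is a fair benchmark for a panel's portfolio.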

Comments (34) + TrackBacks (0) | Category: The Scientific Literature | Who Discovers and Why

February 11, 2014

Drug Discovery in India


Posted by Derek

Molecular biologist Swapnika Ramu, a reader from India, sends along a worthwhile (and tough) question. She says that after her PhD (done in the US), her return to India has made her "less than optimistic" about the current state of drug discovery there. (Links in the quote below have been added by me, not her.)

Firstly, there isn't much by way of new drug development in India. Secondly, as you have discussed many times on your blog. . .drug pricing in India remains highly contentious, especially with the recent patent disputes. Much of the public discourse descends into anti-big pharma rhetoric, and there is little to no reasoned debate about how such issues should be resolved. . .

I would like to hear your opinion on what model of drug discovery you think a developing nation like India should adopt, given the constraints of finance and a limited talent pool. Target-based drug discovery was the approach that my previous company adopted, and not surprisingly this turned out to be a very expensive strategy that ultimately offered very limited success. Clearly, India cannot keep depending upon Western pharma companies to do all the heavy lifting when it comes to developing new drugs, simply to produce generic versions for the Indian public. The fact that several patents are being challenged in Indian courts would make pharma skittish about the Indian market, which is even more of a concern if we do not have a strong drug discovery ecosystem of our own. Since there isn't a robust VC-based funding mechanism, what do you think would be a good approach to spurring innovative drug discovery in the Indian context?

Well, that is a hard one. My own opinion is that India's talent pool is limited only as compared to Western Europe or the US - the country still has a lot more trained chemists and biologists than most other places. It's true, though, that the numbers don't tell the story very well. The best people from India are very, very good, but there are (from what I can see) a lot of poorly trained ones with degrees that seem (at least to me) worth very little. Even so, you've got a really substantial number of real scientists, and I've no doubt that India could have several discovery-driven drug companies if the financing were easier to come by (and the IP situation a bit less murky - those two factors are surely related). Whether it would have those, or even should, is another question.

As has been clear for a while, the Big Pharma model has its problems. Several players are in danger of falling out of the ranks (Lilly, AstraZeneca), and I don't really see anyone rising up to replace them. The companies that have grown to that size in the last thirty years mostly seem to be biotech-driven (Amgen, Biogen, Genentech as was, etc.).

So is that the answer? Should Indian companies try to work more in that direction than in small molecule drugs? Problem is, the barriers to entry in biotech-derived drugs are higher, and that strategy perhaps plays less to the country's traditional strengths in chemistry. But in the same way that even less-developed countries are trying to skip over the landline era of telephones and go straight to wireless, maybe India should try skipping over small molecules. I do hate to write that, but it's not a completely crazy suggestion.

But biomolecule or small organic, to get a lot of small companies going in India (and you would need a lot, given the odds) you would need a VC culture, which isn't there yet. The alternative (and it's doubtless a real temptation for some officials) would be for the government to get involved to try to start something, but I would have very low hopes for that, especially given the well-known inefficiencies of the Indian bureaucracy.

Overall, I'm not sure if there's a way for most countries not to rely on foreign companies for most (or all) of the new drugs that come along. Honestly, the US is the only country in the world that might be able to get along with only its own home-discovered pharmacopeia, and it would still be a terrible strain to lose the European (and Japanese) discoveries. Even the likes of Japan, Switzerland, and Germany use, for the most part, drugs that were discovered outside their own countries.

And in the bigger picture, we might be looking at a good old David Ricardo-style case of comparative advantage. It sure isn't cheap to discover a new drug in Boston, San Francisco, Basel, etc., but compared to the expense of getting pharma research in Hyderabad up to speed, maybe it's not quite as bad as it looks. In the longer term, I think that India, China, and a few other countries will end up with more totally R&D-driven biomedical research companies of their own, because the opportunities are still coming along, discoveries are still being made, and there are entrepreneurial types who may well feel like taking their chances on them. But it could take a lot longer than some people would like, particularly researchers (like Swapnika Ramu) who are there right now. The best hope I can offer is that Indian entrepreneurs should keep their eyes out for technologies and markets that are new enough (and unexplored enough) so that they're competing on a more level playing field. Trying to build your own Pfizer is a bad idea - heck, the people who built Pfizer seem to be experiencing buyer's remorse themselves.
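To make the comparative-advantage point concrete, here's a toy calculation. Every number below is invented purely for illustration - these are not estimates of real discovery or manufacturing costs anywhere:

```python
# Toy comparative-advantage arithmetic. Costs are in arbitrary units per
# unit of output, and are invented purely for illustration.

costs = {
    "Boston":    {"new_drug": 10, "generic": 5},
    "Hyderabad": {"new_drug": 8,  "generic": 1},
}

for site, c in costs.items():
    # Opportunity cost of one new drug, measured in generics forgone:
    print(f"{site}: one new drug = {c['new_drug'] / c['generic']:.1f} generics forgone")

# In this made-up example, Hyderabad is cheaper at both activities in
# absolute terms, but its *relative* cost of discovery (8 generics per new
# drug) is much higher than Boston's (2). That's Ricardo's result: both
# sides can gain by specializing along the lines of comparative advantage.
```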

Comments (30) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why

December 16, 2013

NIH Taking on More Risk?


Posted by Derek

You'd have to think that this is at least a step in the right direction: "NIH to experiment with high-risk grants":

On 5 December, agency director Francis Collins told an advisory committee that the NIH should consider supporting more individual researchers, as opposed to research proposals as it does now — an idea inspired in part by the success of the high-stakes Pioneer awards handed out by the NIH's Common Fund.

“It’s time to look at balancing our portfolio,” says Collins, who plans to pitch the idea to NIH institute directors at a meeting on 6 January.

The NIH currently spends less than 5% of its US$30-billion budget on grants for individual researchers, including the annual Pioneer awards, which give seven people an average of $500,000 a year for five years. In contrast, the NIH’s most popular grant, the R01, typically awards researchers $250,000 per year for 3‒5 years, and requires a large amount of preliminary data to support grant applications.

They're not going to get rid of the R01 grant any time soon, but what Collins is talking about here is getting a bit more like the Howard Hughes funding model (HHMI grants run for five years as well, and tend to be awarded more towards the PI than towards the stated projects). One problem is that the NIH is evaluating the success of the Pioneer grants by noting that the awardees publish more highly-cited papers, and that may or may not be a good measure:

But critics say that there is little, if any, evidence that this approach is superior. “'People versus projects’ is the HHMI bumper sticker, but it’s a misreading of what makes the HHMI great,” says Pierre Azoulay, an economist at the Massachusetts Institute of Technology in Cambridge. Azoulay suggests that the findings of the NIH’s 2012 report may actually reflect factors such as the impact of the HHMI’s unusually lengthy funding windows, which allow a lot of time for innovation. In contrast, he says, “the Pioneer grants are freedom with an expiration date”.

Daniel Sarewitz, co-director of the Consortium for Science, Policy and Outcomes at Arizona State University in Tempe, adds that funding individual researchers may well increase the number of publications they produce. “But that may or may not have anything to do with enhancing the NIH's potential to contribute to actual health outcomes”, such as translation of research into the clinic, he says.

I'd like for them to set aside some money for ideas that have a low chance of working, but which would be big news if they actually came through. The high-risk high-reward stuff would also have to be awarded more by evaluating the people involved, since none of them would look likely in the "Tell us exactly what results you expect" mode of grant funding. But I can say this, not having to be the person who wades through the stacks of applications - sorting those out would probably be pretty painful.
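For a sense of scale, here's the back-of-the-envelope arithmetic on the figures in the quote above (the article's round numbers, not actual NIH budget data):

```python
# Back-of-the-envelope totals from the round numbers quoted above;
# these are the article's figures, not actual NIH budget data.

nih_budget = 30e9              # ~US$30 billion annual budget
individual_share = 0.05        # "less than 5%" goes to individual-researcher grants

pioneer_people = 7
pioneer_per_year = 500_000     # average Pioneer award, per person per year
pioneer_years = 5

r01_per_year = 250_000         # typical R01 award per year (3-5 year terms)

print(f"Upper bound on individual-researcher funding: ${nih_budget * individual_share:,.0f}/year")
print(f"One Pioneer cohort: ${pioneer_people * pioneer_per_year * pioneer_years:,.0f} over {pioneer_years} years")
print(f"Pioneer vs. R01, per year: {pioneer_per_year / r01_per_year:.0f}x")
```

On those figures, a whole Pioneer cohort ($17.5 million over five years) is a rounding error against the $1.5 billion ceiling, which is why "balancing the portfolio" would be a real change of direction.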

Comments (9) + TrackBacks (0) | Category: Who Discovers and Why

December 10, 2013

Standards of Proof


Posted by Derek

Here are some slides from Anthony Nicholls of OpenEye, from his recent presentation here in Cambridge on his problems with molecular dynamics calculations. Here's his cri du coeur (note: fixed a French typo from the original post there):

. . .as a technique MD has many attractive attributes that have nothing to do with its actual predictive capabilities (it makes great movies, it’s “Physics”, calculations take a long time, it takes skill to do right, “important” people develop it, etc). As I repeatedly mentioned in the talk, I would love MD to be a reliable tool - many of the things modelers try to do would become much easier. I just see little objective, scientific evidence for this as yet. In particular, it bothers me that MD is not held to the same standards of proof that many simpler, empirical approaches are - and this can’t be good for the field or MD.

I suspect he'd agree with the general principle that while most things that are worthwhile are hard, not everything that's hard is worthwhile. His slides are definitely fun to read, and worthwhile even if you don't give a hoot about molecular dynamics. The errors he's warning about apply to all fields of science. For example, he starts off with the definition of cognitive dissonance from Wikipedia, and proposes that a lot of the behavior you see in the molecular dynamics field fits the definitions of how people deal with this. He also maintains that the field seems to spend too much of its time justifying data retrospectively, and that this isn't a good sign.

I especially enjoyed his section on the "Tanimoto of Truth". That's comparing reality to experimental results. You have the cases where there should have been a result and the experiment showed it, and there shouldn't have been one, and the experiment reproduced that, too: great! But there are many more cases where only that first part applies, or gets published (heads I win, tails just didn't happen). And you have the inverse of that, where there was nothing, in reality, but your experiment told you that there was something. These false positives get stuck in the drawer, and no one hears about them at all. The next case, the false negatives, often end up in the "parameterize until publishable" category (as Nicholls puts it), or they get buried as well. The last category (should have been negative, experiment says they're negative) is considered so routine and boring that no one talks about it at all, although logically it's quite important.

All this can impart a heavy, heavy publication bias: you only hear about the stuff that worked, even if some of the examples you hear about really didn't. And unless you do a lot of runs yourself, you don't usually have a chance to see how robust the system really is, because the data you'd need aren't available. The organic synthesis equivalent is when you read one of those papers whose method does, in fact, work on the compounds in Table 1, but on hardly any others. And you have to pay close attention to Table 1 to realize that, you know, there aren't any basic amines on that list (or esters, or amides, or what have you), are there?
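In case it helps, the scoring idea behind the "Tanimoto of Truth" fits in a few lines. This is a sketch with invented counts, not Nicholls's actual numbers:

```python
# The Tanimoto (Jaccard) coefficient between "reality" and "experiment"
# drops the true negatives entirely - which is the point of the slide:
# the routine, boring cases never enter the score, and publication bias
# hides most of the rest. All counts below are invented.

def tanimoto(tp: int, fp: int, fn: int) -> float:
    """Tanimoto similarity: TP / (TP + FP + FN). True negatives never appear."""
    return tp / (tp + fp + fn)

# Suppose a method's full (mostly unpublished) track record looked like this:
tp, fp, fn, tn = 40, 25, 30, 400   # hypothetical counts

print(f"Tanimoto of truth: {tanimoto(tp, fp, fn):.2f}")   # 0.42
# The literature, though, mostly shows the 40 true positives: the false
# positives sit in a drawer, the false negatives get "parameterized until
# publishable", and nobody writes up the 400 true negatives at all.
```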

The rest of the slides get into the details of molecular dynamics simulations, but he has some interesting comments on the paper I blogged about here, on modeling of allosteric muscarinic ligands. Nicholls says that "There are things to admire about this paper - chiefly that a prospective test seems to have been done, although not by the Shaw group." That caught my eye as well; it's quite unusual to see that, although it shouldn't be. But he goes on to say that ". . .if you are a little more skeptical it is easy to ask what has really been done here. In their (vast) supplementary material they admit that GLIDE docking results agree with mutagenesis as well (only, "not quite as well", whatever that means - no quantification, of course). There's no sense, with this data, of whether there are mutagenesis results NOT concordant with the simulations." And that gets back to his Tanimoto of Truth argument, which is a valid one.

He also points out that the predictions ended up being used to make one compound, which is not a very robust standard of proof. The reason, says Nicholls, is that molecular dynamics papers are held to a lower standard, and that's doing the field no good.

Comments (9) + TrackBacks (0) | Category: In Silico | Who Discovers and Why

November 7, 2013

Organizing Research


Posted by Derek

Here's an article in Angewandte Chemie that could probably have been published in several other places, since it's not specifically about chemistry. It's titled "The Organization of Innovation - The History of an Obsession", from Caspar Hirschi at St. Gallen in Switzerland, and it's a look at how both industrial and academic research have been structured over the years.

He starts off with an article from The Economist on the apparent slowdown in innovation. This idea has attained wider currency in recent years (Tyler Cowen's The Great Stagnation is an excellent place to start, although it's not just about innovation). I should note that the Economist article does not buy into this theory. Hirschi dissents, too, but from another direction:

Despite what the authors would have us believe, the “innovation blues” lamented in The Economist have little to do with the current course of technological development. The source of the perceived problem arises instead from a sense of disappointment over the fact that their innovation theory does not hold up to its empirical promise. The theory is made up of a chain of causation that sees science as the most important driving force behind innovation, and innovation as the most important driving force for the economy, and an organizational principle maintaining that the three links in the chain function most efficiently under market-oriented competition.

The problem with this theory is that only the first part of its causal chain stands up to empirical scrutiny. There is every indication that progress in scientific knowledge leads to technical innovations, but it is highly unlikely that a higher level of innovation results in greater prosperity. . .

He then goes back to look at the period from 1920 to 1960, which is held up by many who write about this subject as a much more fruitful time for innovation. Here's the main theme:

A comparison between these previously used formulas with those used today strongly suggests that our greatest stumbling block to innovation is our theory-based obsession with innovation itself. This obsession has made scientists and technical experts play by rules stipulating that they can deliver outstanding results only if they are exposed to the competitive forces of a market; and if no such market exists— as in the case of government research funding—a market has to be simulated. Before 1960, the organization of innovation had hewed to the diametrically opposite principle. . .

I have a couple of problems with this analysis. For one, I think that parts of the 19th century were also wildly productive of innovation, and that era was famously market-driven. Another difficulty is that when you're looking at 1920-1960, there's the little matter of World War II right in the middle of it. The war vastly accelerated technological progress in numerous areas (aeronautics, rocketry, computer hardware and software, information science and cryptography, radar and other microwave applications, atomic physics, and many more). The conditions were very unusual: people were, in many cases, given piles of money and other resources, with the understanding that the continued existence of their countries and their own lives could very well be at stake. No HR department could come up with a motivational plan like that - at least, I hope not.

Hirschi surveys the period, though, with less emphasis on all this, but it does come up in his discussion of Kenneth Mees of Eastman Kodak, who was a very influential thinker on industrial research. It was his theory of how it should be run that led to institutions like Bell Labs:

The key to success of an industrial laboratory, he explained, lay in the ability of its directors to recreate the organizational advantages of a university in a commercial setting. Industrial scientists ought to be given the greatest possible latitude to conduct their research as they see fit, with less outside interference, flat hierarchies within the institution, and department heads who are themselves scientists. Like professors at universities, he contended, the senior scientific staff should hold permanent appointments, and all scientists ought to have the opportunity to publish their research results.

Readers here will be reminded of the old "Central Research" departments of companies like DuPont, Bayer, Ciba and others. These were set up very much along these lines, to do "blue sky" work that might lead to practical applications down the road. It's absolutely true that the past thirty years have seen most of this sort of thing disappear from the world, and it's very tempting to assign any technological slowdown to that very change. But you always have to look out for the post hoc ergo propter hoc fallacy: it's also possible that a slowdown already under way led to the cutbacks in less-directed research. Here's more on Mees:

For Mees, industrial research was a “gamble”, and could not be conducted according to the rules of “efficiency engineering”. Research, he insisted, requires a great abundance of staff members, ideas, money, and time. Anyone who is unwilling to wait ten years or more for the first results to emerge has no business setting up a laboratory in the first place. Mees established the following rule for the organization of scientific work: “The kinds of research which can be best planned are found to be those which are least fundamental.” But because Mees regarded the basic sciences as the most important source of innovation, he advised research directors to try not to rein in their scientists with assignments, but instead to inspire them with questions.

I don't find a lot to argue with in that sort of thinking, but that might be because I like it (which doesn't necessarily mean that it's true). I hope it is, and I would rather live in a world where it is, but things don't have to be that way. I do think, though, very strongly, that the application of what's called "efficiency engineering" to R&D is a recipe for disaster. (See here, here, here, here, and here for more). And there are people in high places who apparently agree.

The Ang. Chem. article goes on to note, correctly, that many of these big industrial research operations were funded by monopoly (or near monopoly) profits. AT&T, IBM, Eastman Kodak and others began to use their research arms as public relations tools to argue for that status quo to continue.

Because monopolies could not be justified directly, the only route the companies in question had open to them was a detour that required more and more elaborate displays of their capacity for innovation. Once again, architecture proved to be well suited for this purpose. In the late 1950s and early 1960s, several American industrial groups built new research centers. They opted to locate them in isolated surroundings in the style of a modern university campus, and favored a new architectural style that moved away from the traditional laboratory complexes based on classic industrial buildings such as the one in Murray Hill. . .

The architecture critics of the time soon came up with an apt name for these buildings: “Industrial Versailles”. The term was fitting because the new research centers were to industrial innovation what Versailles had been to the Sun King: complexes of representation, as the historians of technology Scott Knowles and Stuart Leslie have detailed.

We actually owe a lot of our current ideas about research building design to the thoughts about what made Bell Labs so productive in the 1950s - as you keep digging, you keep finding the same roots. Even if those theories were correct, whether the later, showier buildings were true to them is open for debate.

Hirschi finishes up his piece with the 1957 Sputnik launch, which famously had a huge effect on academic science funding in the US. I only realized when I was in my 20s that my whole impression of the science facilities in my own middle and high school in Arkansas was shaped by that event. I'd sort of assumed that things like this were just always ten or twenty years old, but that was because I was seeing the aftereffects of that wave of funding, which reached all the way to the Mississippi Delta. Here's Hirschi on the effects in higher education and beyond:

The explosion of government research funding resulted in serious quandaries about how best to allocate these funds. There were countless research institutes, and there was a need for clear rationales as to which institutions and individuals would be entitled to how many dollars that came from taxes. An attempt was made to meet this challenge by introducing an element of marketlike competition. Artificial competition for project-related subsidies, to be regulated and controlled by the funding agencies, was set in motion. Successful proposals needed to provide precise details about the scope of each project and a set time frame was assigned for the completion of a given project, which made it necessary for grant seekers to package fundamental research as though it were application-oriented. This set-up ushered in a period in which innovations were proclaimed well before the fact, and talked up as monumental breakthroughs in the quest to secure funding. Representation became integral to production.

It did not take long for this new regime to have drastic reverberations for industrial research. The flood of money that inundated the research universities heightened the incentive for industrial groups to outsource costly laboratory work to universities or public research centers. At the same time, they were inspired by the public administration's belief in the rules of the market to pay heed in their own research divisions to the credo that the innovative impulse requires the intensity of a competitive situation. In the long run, the private sector did its part in making the new form of market-oriented project research the only accepted organizational principle.

To my eye, though, the whole article wraps up rather too quickly. It seems like a reasonable short history of research organization in the mid-20th century, followed by several assertions. Hirschi's not advocating a return to the 1950s (he explicitly states this), but it's hard to say what he is advocating, other than somehow getting rid of some of what he seems to feel is unseemly competition and market-driven stuff. "The solution can only lie in the future" is a line from the last paragraph, and I hope it reads better in German.

Comments (19) + TrackBacks (0) | Category: Who Discovers and Why

October 17, 2013

Creativity Training For Creative Creators


Posted by Derek

Here's a bilious broadside against the whole "creativity" business - the books, courses, and workshops that will tell you how to unleash the creative powers within your innards and those of your company:

And yet the troubled writer also knew that there had been, over these same years, fantastic growth in our creativity promoting sector. There were TED talks on how to be a creative person. There were “Innovation Jams” at which IBM employees brainstormed collectively over a global hookup, and “Thinking Out of the Box” desktop sculptures for sale at Sam’s Club. There were creativity consultants you could hire, and cities that had spent billions reworking neighborhoods into arts-friendly districts where rule-bending whimsicality was a thing to be celebrated. If you listened to certain people, creativity was the story of our time, from the halls of MIT to the incubators of Silicon Valley.

The literature on the subject was vast. Its authors included management gurus, forever exhorting us to slay the conventional; urban theorists, with their celebrations of zesty togetherness; pop psychologists, giving the world step-by-step instructions on how to unleash the inner Miles Davis. Most prominent, perhaps, were the science writers, with their endless tales of creative success and their dissection of the brains that made it all possible.

I share his skepticism, although the author (Thomas Frank) comes at the whole question from a left-wing political perspective, which is rather far from my own. I think he's correct that many of the books, etc., on this topic have the aim of flattering their readers and reinforcing their own self-images. And I also have grave doubts about the extent to which creativity can be taught or enhanced. There are plenty of things that will squash it, and so avoiding those is a good thing if creativity is actually what you're looking for in the first place. But gain-of-function in this area is hard to achieve: taking a more-or-less normal individual, group, or company and somehow ramping up their creative forces is something that I don't think anyone really knows how to do.

That point I made in passing there is worth coming back to. Not everyone who says that they value rule-breaking disruptive creative types really means it, you know. "Creative" is often used as a feel-good buzzword; the sort of thing that companies know that they're supposed to say that they are and want to be.

"Innovative" works the same way, and there are plenty of others, which can be extracted from any mission statement that you might happen to have lying around. I think those belong in the same category as the prayers of Abner Scofield. He's the coal dealer in Mark Twain's "Letter to the Earth", and is advised by a recording angel that: "Your remaining 401 details count for wind only. We bunch them and use them for head winds in retarding the ships of improper people, but it takes so many of them to make an impression that we cannot allow anything for their use". Just so.

Comments (32) + TrackBacks (0) | Category: Who Discovers and Why

August 19, 2013

An Inspirational Quote from Bernard Munos


Posted by Derek

In the comments thread to this post, Munos has this to say:

Innovation cannot thrive upon law and order. Sooner or later, HR folks will need to come to grips with this. Innovators (the real ones) are rebels at heart. They are not interested in growing and nurturing existing markets because they want to obliterate and replace them with something better. They don't want competitive advantage from greater efficiency, because they want to change the game. They don't want to optimize, they want to disrupt and dominate the new markets they are creating. The most damaging legacy of the process-minded CEOs who brought us the innovation crisis has been to purge disrupters from the ranks of pharma. Yes, they are tough to manage, but every innovative company needs them, and must create a climate that allows them to thrive. . .

I wanted to bring that up to the front page, because I enjoy hearing things like this, and I hope that they're true.

Comments (73) + TrackBacks (0) | Category: Who Discovers and Why

August 7, 2013

Reworking Big Pharma


Posted by Derek

Bruce Booth (of Atlas Venture) has a provocative post up at Forbes on what he would do if he were the R&D head of a big drug company. He runs up his flag pretty quickly:

I don’t believe that we will cure the Pharma industry of its productivity ills through smarter “operational excellence” approaches. Tweaking the stage gates, subtly changing attrition curves, prioritizing projects more effectively, reinvigorating phenotypic screens, doing more of X and less of Y – these are all fine and good, and important levers, but they don’t hit the key issue – which is the ossified, risk-avoiding, “analysis-paralysis” culture of the modern Pharma R&D organization.

He notes that the big companies have all been experimenting with ways to get more new thinking and innovation into their R&D (alliances with academia, moving people to the magic environs of Cambridge (US or UK), and so on). But he's pretty skeptical about any of this working, because all of this tends to take place out on the edges. And what's in the middle? The big corporate campus, which he says "has become necrotic in many companies". What to do with it? He has several suggestions, but here's a big one. Instead of spending five or ten per cent of the R&D budget on out-there collaborations, why not, he says, go for broke:

Taken further, bringing the periphery right into the core is worth considering. This is just a thought experiment, and certainly difficult to do in practice, but imagine turning a 5000-person R&D campus into a vibrant biotech park. Disaggregate the research portfolio to create a couple dozen therapeutically-focused “biotech” firms, with their own CEOs, responsible for a 3-5 year plan and with a budget that maps to that plan. Each could have its own Board and internal/external advisors, and flexibility to engage free market service providers outside the biotech park. Invite new venture-backed biotechs and CROs to move into the newly rebranded biotech park, incentivized with free lab space, discounted leases, access to subsidized research capabilities, or even unencumbered matching grants. Put some of the new spin-outs from their direct academic initiatives into the mix. But don’t put strings on those new externally-derived companies like the typical Pharma incubator; these will constrain the growth of these new companies. Focus this big initiative on one simple benefit: strategic proximity to a different culture.

His second big recommendation is "Get the rest of the company out of research's way". And by that, he especially means the commercial part of the organization:

One immediate solution would be to kick Commercial input out of decision-making in Research. Or, more practically, at least reduce it dramatically. Let them know that Research will hand them high quality post-PoC Phase 3-ready programs addressing important medical needs. Remove the market research gates and project NPV assessment models from critical decision-making points. Ignore the commercially-defined “in” vs “out” disease states that limit Research teams’ degrees of freedom. Let the science and medicine guide early program identification and progress. . .If you don’t trust the intellect of your Research leaders, then replace them. But second-guessing, micro-managing, and over-analyzing doesn’t aid in the exploration of innovation.

His last suggestion is to shake up the Board of Directors, and whatever Scientific Advisory Board the company has:

Too often Pharma defaults to not engaging the outside because “they know their programs best” or for fear of sharing confidential information that might leak to its competition. Reality is the latter is the least of their worries, and I’ve yet to hear this as being a source of profound competitive intelligence leakage. A far worse outcome is unchallenged “group think” about the merits (or demerits) of a program and its development strategy. Importantly, I’m not talking about specific Key Opinion Leader engagement on projects, as most Pharma companies do this effectively already. I’m referring to a senior, strategic, experienced advisory function from true practitioners in the field to help the R&D leadership team get a fresh perspective.

This is part of the "get some outside thinking" that is the thrust of his whole article. I can certainly see where he's coming from, and I think that this sort of thing might be exactly what some companies need. But what are the odds of (a) their realizing that and (b) anything substantial being done about it? I'm not all that optimistic - and, to be sure, Booth's article also mentions that some of these ideas might well be unworkable in practice.

I think that's because there's another effect that all of Bruce's recommendations have: they decrease the power and influence of upper management. Break up your R&D department, let in outside thinking, get your people to strike out pursuing their own ideas. . .all of those cut into the duties of Senior Executive Vice Presidents of Strategic Portfolio Planning, you know. Those are the sorts of people who will have to sign off on such changes, or who will have a chance to block them or slow their implementation. You'll have to sneak up on them, and there might not be enough time to do that in some of the more critical cases.

Another problem is what the investors would do if you tried some of the more radical ideas. As the last part of the post points out, we have a real problem in this business with our relationship with Wall Street. The sorts of people who want quarter-by-quarter earnings forecasts would absolutely freak if you told them that you were tearing the company up into a pile of biotechs. (And by that, I mean tearing it up for real, not creating centers-of-innovation-excellence or whatever the latest re-org chart might call it). It's hard to think of a good way out of that one, either, for a large public company.

Now, there are people out there who have enough nerve and enough vision to try some things in this line, and once in a while you see it happen. But inertial forces are very strong indeed. With some organizations, it might be less work to just start over, rather than to spend all that effort tearing down the things you want to get rid of. For all I know, this is what (say) AstraZeneca has in mind with its shakeup and moving everyone to Cambridge. But what systems and attitudes are going to be packed up and moved over along with all the boxes of lab equipment?

Comments (39) + TrackBacks (0) | Category: Who Discovers and Why