About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany as a post-doc on a Humboldt Fellowship. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly, or find him on Twitter: Dereklowe

In the Pipeline

Category Archives: Who Discovers and Why

July 2, 2015

Chris Viehbacher's Two Billion Dollars

Posted by Derek

Chris Viehbacher, ex-Sanofi, has reappeared at a $2 billion biotech fund.

Viehbacher is clear, though, that Gurnet will be founding companies as well as looking beyond red-hot fields like oncology. To find value these days, you have to look outside the trendiest fields, he says. And you're also not going to find much in the way of innovation at huge companies like Sanofi.

"My conclusion is that you can't have truly disruptive thinking inside big organizations," says Viehbacher. "Everything about the way a big organization is designed is about eliminating disruption."

In Viehbacher's view, Big Pharma is still trying to act in the way the old movie studios once operated in Hollywood, with everyone from the stars to writers and stunt men all roped into one big group. Today, he says, movie studios move from project to project, and virtually everyone is a freelancer. In biopharma, he adds, value is found in specializing, and "fixed costs are your enemy."

He's right about that disruption problem at big companies, although he raised eyebrows when he said something similar while still employed at a big company. (Sanofi tried to put those comments in the ever-present "broader context" here). A large organization has its own momentum, but even if its magnitude is decent, its vector is pointed in the direction of keeping things the way that they are now. To be sure, that requires finding new drugs - it's a bit of a Red Queen's race in this business - but a lot of people would be fine if things just sort of rolled along without too many surprises or changes.

If that was ever a good fit for this industry, it isn't now. That makes it nerve-wracking to work in it, for sure, because if you feel that your job is really, truly safe then you're wrong. There are too many unpredictable events for that. I was involved in an interesting conversation the other day about investors in biopharma (and how passionately irrational some of the smaller ones can be), and we agreed that one reason for this is the large number of binary events: the clinical trial worked, or it didn't. The FDA approved your drug, or it didn't. You made your expected sales figures, or you didn't. And those are the expected ones, with dates on the calendar. There are plenty of what's-that-breaking-out-of-the-cloud-cover events, too. Trial stopped for efficacy! Trial stopped for tox! Early approval! Drug pulled from the market! It's like playing a board game with piles of real money (and with your career).

So Viehbacher's right on that point. But I part company with him on his earlier comments (basically, that if he was going to get anything innovative done at Sanofi, he was going to have to go outside, because no one who wanted to innovate was working at a company like that in the first place). Even large companies have good people working at them - believe it or not! And some of them even have good ideas, too. But he's right that it can be harder for them to make headway in a large organization.

Comments (38) + TrackBacks (0) | Category: Business and Markets | Who Discovers and Why

June 5, 2015

Artificial Intelligence For Biology?

Posted by Derek

A new paper in PLoS Computational Biology is getting a lot of attention (which, and I'm not trying to be snarky about it, is not something that happens every day). Here's the press release, which I can guarantee is what most of the articles written about this work will be based on. That's because the paper itself becomes heavy going after a bit - the authors (from Tufts) have applied machine learning to the various biochemical pathways involved in flatworm regeneration.

That in itself sounds somewhat interesting, but not likely to attract the attention of the newspapers. But here's the claim being made for it:

An artificial intelligence system has for the first time reverse-engineered the regeneration mechanism of planaria--the small worms whose extraordinary power to regrow body parts has made them a research model in human regenerative medicine.

The discovery by Tufts University biologists presents the first model of regeneration discovered by a non-human intelligence and the first comprehensive model of planarian regeneration, which had eluded human scientists for over 100 years.

The "100 years" part is hyperbole, because it's not like people have been doing a detailed mechanistic search for that amount of time. Biology wasn't up to the job, as the earlier biologists well knew. But is the artificial intelligence part hyperbole, or not? As the many enzymes and other proteins involved in planarians have been worked out, it has definitely been a challenge to figure out what's doing what to what else for which reasons, and when. (That's the shortest description of pathway elucidation that I can come up with!) The questions about this work are (1) is the model proposed correct (or at least plausibly correct)? (2) Was it truly worked out by a computational process? And (3) does this process rise to the level of "artificial intelligence"?

We'll take those in order. I'm actually willing to stipulate the first point, pending the planarian people. There are a lot of researchers in the regeneration field who will be able to render a more meaningful opinion than mine, and I'll wait for them to weigh in. I can look at the proposed pathways and say things like "Yeah, beta-catenin would probably have to be involved, damn thing is everywhere. . .yeah, don't see how you can leave Wnt out of it. . ." and other such useful comments, but that doesn't help us much.

What about the second point? What the authors have done is apply evolutionary algorithms to a modeled version of the various pathways involved, and let it rip, rearranging and tweaking the orders and relationships until it recapitulates the experimental data. It is interesting that this process didn't spit out a wooly Ptolemaic scheme full of epicycles and special pleading, but rather a reasonably streamlined account of what could be going on. The former is always what you have to guard against with machine-learning systems - overfitting. You can make any model work if you're willing to accept sufficient wheels within wheels, but at some point you have to wonder if you're optimizing towards reality.
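For a concrete (if heavily simplified) picture of that process, here's a toy sketch of an evolutionary search of this general kind. To be clear, this is not the authors' actual framework: the regulator names, the coded "experimental" outcomes, and the exact parsimony penalty are all invented for illustration. The point is the shape of the loop - mutate candidate networks, score them against the data, and charge a complexity tax to keep the epicycles out.

```python
import random

random.seed(0)  # make the example deterministic

NODES = ["wnt", "bcat", "hh", "notum"]  # hypothetical regulators, for illustration
# invented stand-in for the coded experimental results: edges a model must contain
EXPERIMENTS = {("wnt", "bcat"), ("hh", "notum")}

def random_network(n_edges=3):
    """A candidate model: a set of directed edges among the regulators."""
    edges = set()
    while len(edges) < n_edges:
        edges.add(tuple(random.sample(NODES, 2)))
    return edges

def fitness(net):
    """Reward matching the data; penalize extra edges (the overfitting guard)."""
    matched = len(EXPERIMENTS & net)
    return matched - 0.1 * len(net)

def mutate(net):
    """Randomly drop one edge, then add a new random one."""
    net = set(net)
    if net and random.random() < 0.5:
        net.discard(random.choice(sorted(net)))
    net.add(tuple(random.sample(NODES, 2)))
    return net

def evolve(generations=200, pop_size=20):
    pop = [random_network() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # selection
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
```

Without the `- 0.1 * len(net)` term, the search would happily bolt on wheels within wheels; with it, a leaner network that explains the same data always wins.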

How close is the proposed scheme to what people might already have been thinking (or might have already proposed themselves)? In other words, did we need a ghost come from the grave to tell us this? I am not up on the planarian stem-cell literature, but my impression is that this new model really is more comprehensive than anything that's been proposed before. It provides testable hypotheses. For example, it interprets the results of some experiments as implying the existence of (yet unknown) regulatory molecules and genes. (The authors present candidates for two of these, and I would guess that experimental evidence in this area will be coming soon.)

It's also important to note, as the authors do, that this model is not comprehensive. It only takes into account 2-D morphology, and has nothing to say about (for example) the arrangement of planarian internal organs. This, though, seems to be only a matter of degree - if you're willing to collect more data, code it up, and run the model for longer, its successor should presumably be able to deal with this sort of thing.

And that brings us to point three: is this a discovery made via artificial intelligence? Here we get into the sticky swamp of defining intelligence, there to recognize the artificial variety. The arguments here have not ceased, and probably won't cease until an AI hosts its own late-night talk show. Is the Siri software artificial intelligence? Are the directions you get from Google Maps? A search done through the chemical literature on SciFinder or the like? An earlier age would have probably answered "yes" (and an even earlier age would have fled in terror) but we've become more used to this sort of thing.

I think that one big problem in this area is that the word "intelligence" is often taken (consciously or not) to mean "human intelligence". That doesn't have to be true, but it does move the argument to whether border collies or African grey parrots demonstrate intelligence. (Personally, I think they do, just at a lower level and in different ways than humans). Is Google Maps as smart, in its own field, as a border collie? As a hamster? As a fire ant, or a planarian? Tough question, and part of the toughness is that we expect intelligence to be able to handle more than one particular problem. Ants are very good at what they do, but they seem to me clearly to be bundles of algorithms, and is a computer program any different, fundamentally? (Is a border collie merely a larger bundle of more complex algorithms? Are we? I will defer discussion of this disturbing question, because I see no way to answer it).

One of the hardest parts of the work in this current paper, I think, was the formalization step, where the existing phenomena from the experimental literature were coded into a computable framework. Now that took intelligence. Designing all the experiments (decades worth) that went into this hopper took quite a bit of it, too. Banging through it all, though, to come up with a model that fit the data, tweaking and prodding and adjusting and starting all over when it didn't work - which is what the evolutionary algorithms did - takes something else: inhuman patience and focus. That's what computers are really good at, relentless grinding. I can't call it intelligence, and I can call it artificial intelligence only in the sense that an inflatable palm is an artificial tree. We do have to call it something, I realize, but the term "artificial intelligence" probably confuses more than it illuminates.

Comments (30) + TrackBacks (0) | Category: Biological News | In Silico | Who Discovers and Why

June 2, 2015

The Sunk Cost Fallacy

Posted by Derek

Mentioning target validation yesterday led me to think about an even larger problem: the sunk cost fallacy. That's a general human tendency, but (like several other human tendencies) it can lead to some wasted scientific effort. A "sunk cost", in economic terms, is an unrecoverable one - it's gone, it's spent, and there's no way to mitigate that. The classic sunk cost is time. If you spend a year working on a project whose target turns out not to be relevant, you're not getting that year back. The moving finger writes, and having writ, moves on.

Of course, you're not getting back the money that was spent, either, unless you can find some way to create value out of what was done during that year. And that's the catch. Our human psychology is always ready to hold out the hope that something will occur to make it all worthwhile. Too many research projects (and investments, and relationships, and more besides) all slide down this same gentle slope into the pit. Back when I started in the industry, my workplace had a couple of programs that had been going on for years and years. No drugs had come out of them, not even clinical candidates. But they kept on going, because after all, we've come so far! Spent so much time! Surely there has to be something coming that will make good on all this effort?

Not in those cases, and not in a lot of others. An economist (or a behavioral psychologist) might tell you that the way to handle a sunk-cost situation is to ignore the amount of effort, time, and money that's been spent. As painful as it might be, those aren't really that relevant. Treat the project as if it's just dropped out of the sky in its current form and landed on your desk. Look at the situation as it is right now - do you want to go on, or not?

For a drug discovery effort, that means that you might rate it a bit higher than usual, because this "sudden new project" has a lot of SAR and a big collection of compounds attached to it. Those can be good things. On the other hand, if so many compounds (and so much SAR) have already been explored without a good candidate turning up, what is there left to do? That's also a notoriously dangerous question, because we medicinal chemists can always come up with more molecules to make and more things to try. But look these over - are they built on some sort of hypothesis put together from all the data, or are they just more rocks that haven't been turned over yet? If this thoroughly-worked-on project still doesn't have any clear direction to offer by this point, that's a strike against it.

What you don't want to do is factor in the time and money that have been spent, though. Those are sunk costs, and they have no bearing on what's happening now. What matters is the potential that the project might have in the future, and the chances of realizing it. I freely admit that this is a somewhat inhuman way of looking at things, but that gets back to a point I made in yesterday's post as well: not all human tendencies make for effective science. We have a lot of quirks in the way our brains work (confirmation bias, over-recognition of spurious patterns, faulty memory, loss aversion), and the importance that we attach to sunk costs is another one of those. A common response is "But you're saying that all this time and work has just been wasted!", to which the intelligent-alien reply could well be "Yep. And after looking all this over, the key thing is for us not to waste any more."

What makes this especially hard is that (like many faults) there's a virtue on the other side of it. Perseverance is its name, and without it, no drug would ever be discovered at all. Almost every research program in the history of pharmaceuticals has gone on for longer than people thought it would at first - sort of like a gigantic series of kitchen renovations - and that can't be ignored when you're thinking of killing one off. The only guidance is, I suppose, whether you seem to be getting anywhere, coming across any general trends or principles that give you some hope, or whether you're just turning over more rocks and hoping that the next one has a gold coin under it.

Comments (29) + TrackBacks (0) | Category: Drug Development | Who Discovers and Why

May 26, 2015

The Curse of Expertise

Posted by Derek

David Sackett, epidemiologist and evidence-based medicine proponent, died this week. I'd heard of him, but I hadn't seen his editorial about being an expert in one's field. Not all experts have had the thoughts that he had about their situation, and even fewer have acted on them the way he did:

. . .It then dawned on me that experts like me commit two sins that retard the advance of science and harm the young. Firstly, adding our prestige to our opinions gives the latter far greater persuasive power than they deserve on scientific grounds alone. Whether through deference, fear, or respect, others tend not to challenge them, and progress towards the truth is impaired in the presence of an expert. The second sin of expertness is committed on grant applications and manuscripts that challenge the current expert consensus. Reviewers face the unavoidable temptation to accept or reject new evidence and ideas, not on the basis of their scientific merit, but on the extent to which they agree or disagree with the public positions taken by experts on these matters. . .

. . .Is redemption possible for the sins of expertness? The only one I know that works requires the systematic retirement of experts. To be sure, many of them are sucked into chairs, deanships, vice presidencies, and other black holes in which they are unlikely to influence the progress of science or anything else for that matter. Surely a lot more people could retire from their fields and turn their intelligence, imagination, and methodological acumen to new problem areas where, having shed most of their prestige and with no prior personal pronouncements to defend, they could enjoy the liberty to argue new evidence and ideas on the latter's merits.

But there are still far more experts around than is healthy for the advancement of science. . .

Sackett started his expertise over more than once, but found that he kept becoming an expert again, no matter what. We need more people for whom that could possibly become a problem, and more people who would notice that it had become one.

Comments (28) + TrackBacks (0) | Category: Who Discovers and Why

May 19, 2015

Another Conservation Law

Posted by Derek

As long as there's been organized scientific research - that is, more than one person working on a problem - there have been timeline disconnects. Something takes longer than expected, throwing everything off, usually. That's the basic disconnect, and there are ways to deal with it, but there's a larger one that I don't think that anyone's ever found a way to deal with.

That's the problem that larger discoveries have of coming infrequently and on no one's schedule at all. Scientists have been complaining about this for as long as anyone's tried to manage scientists. There's a conservation law at work here, I think: the harder the task you ask a team to accomplish, the less able you are to say when they'll accomplish it. Straightforward tasks can be planned out to the day. Harder ones can be roughly estimated by quarter. Really big ones. . .well, there's just no damn way of knowing.

There are several problems that follow this one around, probably wearing the same color shirts and the same brand of shoes. One of those is the way that progress on tough problems comes in irregular fits and starts. If you're budgeting for steady, regular accomplishments that can be listed every quarter, you're going to have a bad time. Long periods will go by without much concrete evidence that anything useful is happening. That's because the team has been trying things out that didn't work. Even worse, part of the time some of them may have been trying those things out mostly in their heads, trying to get a better handle on the problem. A run of negative results is (on first approximation) hard to distinguish from people just messing around, but a run of unproductive thinking is hard to distinguish from someone just staring out a window. It doesn't look so great come performance review time.

In the extreme cases, you get people like Claude Shannon, who did tremendous, revolutionary work near the beginning of his career, and is hardly remembered for anything in the years afterwards. (This story is told in many places, but William Poundstone's Fortune's Formula is a good place to find it.) Shannon is an indelible figure in the history of science, but he would have had some pretty rough quarterly progress reports to turn in.

What to do about this? The only advice I have is to keep that relationship above in mind, difficulty versus predictability. If you want someone (or some group) to aim high, be prepared for that uncertainty principle to kick in. It's not possible just to leave everyone alone forever, but checking in enough to see that real thought and effort is being expended is probably all that a manager can do. Not every organization is going to be open to that.

Comments (8) + TrackBacks (0) | Category: Who Discovers and Why

May 5, 2015

Peter Thiel's Book

Posted by Derek

Wavefunction has a good look at Peter Thiel's Zero to One. As he puts it, "Thiel has said some odd things about chemistry and biotech before, so I was bracing myself for encountering some naiveté in his book." I don't blame him; I'd be the same way. But it wasn't quite as bad as he feared.

Nevertheless. . .there is a grain of truth in Thiel's diagnosis of many biotech and pharma companies. For some reason the pharmaceutical industry has lost the kind of frontier spirit that once infused it and which is now largely the province of swashbuckling Silicon Valley inhabitants. Whatever the hurdles and naiveté intrinsic to this spirit, it doesn't seem unreasonable to imagine that the industry could benefit from a bit more can-do, put-all-your-chips-on-the-table, entrepreneurial kind of spirit.

Still, you'll need to be ready for the phrase "high-salaried, unaligned lab drones" - just warning you. Another part of the blog post mentions a good reason for the more cautious approach that you see in biopharma as opposed to software, though: higher chances of failure via factors outside of your control. That gets back to the humans-didn't-make-this argument that I make in these situations - you really do have a better chance of bulling your way through in an IT startup by sheer skill and hard work. Whereas in drug discovery, skill and hard work are necessary, but nowhere near sufficient. We get our heads handed to us more often, and for reasons that couldn't always be anticipated by a reasonable person.

That's why the avoidable errors are so annoying in this business. Our failure rates are high enough already without own goals!

Comments (21) + TrackBacks (0) | Category: Who Discovers and Why

April 24, 2015

What Are the Odds of Finding a Drug (And How Do You Stand Them?)

Posted by Derek

Lisa Jarvis of C&E News asked a question on Twitter that's worth some back-of-the-envelope calculation: what are the odds of a medicinal chemist discovering a drug during his or her career? And (I checked) she means "personally synthesizing the compound that makes it to market". My own hand-waving guesstimate of an upper bound starts with an assumption of around 10,000 people trying to do this, worldwide (which is surely on the high side - see below).

Now, if you start work at 25 (I'm counting master's degrees in there) and go to 65, you've got 40 years of career, but (1) not all of that, as time goes on, is going to be spent full-time cranking away in the lab, in most cases, and (2) God knows that there aren't nearly as many solid 40-year careers in this gig as there used to be. A more realistic count, and still on the high side, might be 25 years. Now, over that 25-year span, how many small molecule drugs are there for a medicinal chemist to score with? A generous count of 20 per year (see here, and note that in the last 20 years you'll need to subtract antibodies/biologics) would give 500 drugs discovered and sent to market during that time, so with the same 10,000 people working over that span, that would give you rough odds of 5%, one in twenty. That is surely an upper bound, by a very substantial amount.
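The arithmetic above can be written out explicitly. These are the post's own round numbers, nothing more - every input is a stated guess, not survey data:

```python
# Back-of-the-envelope upper bound on the odds, using the round numbers above.
chemists = 10_000        # medicinal chemists worldwide (surely on the high side)
career_years = 25        # realistic span of full-time lab work
drugs_per_year = 20      # generous count of new small-molecule drugs per year

drugs_over_career = drugs_per_year * career_years  # 500 drugs over the span
odds = drugs_over_career / chemists                # 0.05
print(f"upper bound: {odds:.0%}, or one in {chemists // drugs_over_career}")
# prints: upper bound: 5%, or one in 20
```

Every refinement in the next paragraph (a smaller average cohort, shorter lab careers, subtracting biologics) pushes that 5% figure down, which is why 1% or less is the more realistic estimate.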

That's because it's not the same cohort of people during that time, of course, so the odds are going to lengthen because of that. The real number of people will be smaller than 10,000, on the average, and the years of lab career will be shorter than 25. It's harder to assign solid numbers at this point, but my own impression is that the real odds are 1% or less. When I think back over my own career, the number of new small-molecule drugs that have come out of the shops I've worked in can be counted easily on my fingers, and I've worked around a lot of medicinal chemists during that span.

Now, this brings up another familiar subject, which comes up whenever I discuss the above topic with anyone outside the whole field of scientific research. "How can you stand that" is not an unusual question. If 99% of the patients a doctor saw were not helped by their medical care, that would be a discouraging way to make a living, for sure. But there are differences, important ones. For one thing, this is science, after all. Even when we find out that something doesn't work, we've found something. I'd rather make a drug that works, but many of the projects I've worked on have added to medical knowledge even when they didn't put a drug on the market. I can tell you, most definitely, that a selective m2 muscarinic antagonist is not going to help Alzheimer's much, nor will a D1 antagonist do much for schizophrenia. Similarly, an inhibitor of hormone-sensitive lipase is not an appropriate therapy for type II diabetes, and you will want to be very careful before taking a mixed PPAR ligand forward for patients with metabolic syndrome, because they don't all do what you'd expect. And so on. A lot of people got to find out that last one, across several companies and in all sorts of interesting and unusual ways, but I have to say, in those other three examples, my colleagues and I were pretty much up at the front lines, and came up with some of the best compounds you could want (and some of the best ever seen for those targets). And they didn't work, for the usual reasons: failure to understand the disease well enough, failure when hit by toxicity through other mechanisms.

But the only way to find those things out was to make such compounds. So yeah, to invoke the cliché, I've pushed back human knowledge in those areas (and a number of others besides). The projects I'm working on right now are long odds, too, but I have reason to believe that my colleagues and I are again at the very edge of what's known in these areas, right up on the foaming front of the breaking wave. That's where I've always wanted to be. These are important problems, extremely relevant to human disease (as you'd imagine, since a drug company is willing to spend its money on them even though they're very hard indeed). Just getting the chance to work up at that level, to know that no one's ever put a foot down where the next step is going to go, is what does it for me.

Wavefunction has some good thoughts on this question here.

Comments (38) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Life in the Drug Labs | Who Discovers and Why

January 21, 2015

The State of US Medical Research

Posted by Derek

Here's a look at the state of medical research in the US versus other developed countries (open-access article at JAMA).

Some things to note from that chart: (1) research funding has been pretty flat the last few years, with the only exception being the stimulus-package burst of cash. (2) The share of the total put up by biotechnology companies seems to have gone up a bit over the twenty-year span. (3) The money spent by industry is now up to 58% of the total in the US, and has been increasing over time. That's partly due to increased spending by industry, and partly due to lower-than-historical increases in spending by government sources. One thing to note is that these numbers have been inflation-adjusted, but by the Biomedical R&D Price Index, not the CPI (see the comments to this post for more on this).
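Since the choice of deflator comes up in those comments, here's a small worked illustration of why it matters. The index values below are invented for the example (they are not actual BRDPI or CPI figures); the only real point is that the same nominal increase can look like real growth under one index and a real decline under another.

```python
# Hypothetical nominal funding ($B) at the start and end of a period.
nominal_start, nominal_end = 100.0, 130.0

# Invented cumulative inflation factors over the same period. Research costs
# (a BRDPI-style index) tend to rise faster than general consumer prices
# (a CPI-style index), which is the pattern assumed here.
research_cost_index = 1.35
consumer_price_index = 1.18

# Deflate the end-of-period figure, then compare with the starting figure.
real_growth_research = (nominal_end / research_cost_index) / nominal_start - 1
real_growth_consumer = (nominal_end / consumer_price_index) / nominal_start - 1
# Under these numbers: a real decline by the research-cost index,
# but modest real growth by the consumer-price index.
```

That's the sense in which "flat" funding by the BRDPI can be a noticeably rosier number when deflated by the CPI instead.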

Here comes a section with some interesting numbers, derived from PhRMA annual overviews:

The distribution of investments across the types of medical research changed from 2004 to 2011. Pharmaceutical companies shifted funding to late-phase clinical trials and away from discovery activity such as target identification and validation. The share of pharmaceutical industry funding (including that by US companies outside of the United States) spent on phase 3 trials increased by 36% (5%/year growth rate) from 2004 to 2011 (Figure 4), and the share of investment in prehuman/preclinical activities decreased by 4% (2%/year average decline). This shift toward clinical research and development reflects the increasing costs, complexity, and length of clinical trials but may also reflect a deemphasis of early discovery efforts by the US pharmaceutical industry. While industry has shifted funding to clinical trials, the share of NIH contributions dedicated to basic science and clinical research was unchanged (eTable 2 in the Supplement), with the majority of funds still focused on basic research. These data may not accurately reflect the true division of NIH investment for basic science vs disease-focused research, as a growing proportion of NIH expenditures is for projects having potential clinical application in many diseases or organ systems.

I wonder if some of the shift has been away from what gets defined as "pharmaceutical companies" and toward what gets defined as "biotechnology companies". A lot of smaller outfits are not members of PhRMA, of course, and I think that early-stage research has been heading towards their end of the industry. As for those small companies, here's a look at venture funding across this period:

In real terms, venture capital investment in biotechnology companies steadily increased from $1.5 billion in 1995 to a peak of $7.0 billion in 2007 (eFigure 3 in the Supplement). During that period, investment in biotechnology companies as a share of total venture capital investment increased from 10% to 18%, and the number of investments increased from 176 to 538. Investment levels and the number of transactions of biotechnology decreased following the financial crisis in 2008-2009, declining to a low of $4.3 billion in 2009. Venture capital investment still has not recovered to its pre-2008 levels, with only $4.5 billion invested in 2013. Size of investment per transaction (median, $11 million, inflation adjusted) has remained unchanged for 2 decades.

I wonder if the current boom times in biopharma startups are changing these numbers (the JAMA article only goes up to 2011). We'll have to take a look at these figures again in a couple of years and see if that's happened (my guess is that it has). Interestingly, the paper goes on to talk about funding levels by disease, versus disease burden in the US. Cancer and HIV are funded at well above the levels that this measure would predict, but (as the study notes) there are many other factors in play (scientific opportunity, for one). Underfunded, by this measure, are migraine and COPD.

The comparisons to worldwide research funding then come up. The US share has been declining as Asia ramps up, but Asia was ramping up from a very small percentage twenty years ago. As it stands, the US is still the source of 44% of the world's medical research funds, with Europe at another 33%. In terms of single countries, the US is still by far the largest contributor. When you look at the number of people doing the work, China comes out in numbers, but is quite low in percentage of the population so engaged (and this isn't the only area where they're an outlier in this fashion!)

I'll discuss patent and publication data in another post; there's enough to talk about at more length there. Overall, the authors of this paper conclude that the US, while still leading in most categories, has been standing relatively still or slipping a bit during the period reviewed:

Medical research in the United States remains the primary source of new discoveries, drugs, devices, and clinical procedures for the world, although the US lead in these categories is declining. For example, whereas the United States funded 57% of medical research in 2004, in 2011 that had declined to 44%. Basic research and product development are central to the health of countries’ economies. However, changes in the pattern of investment, particularly level funding by US government and foundation sponsors, with a decline in real terms, combined with companies’ focus on late-stage products (with diminished discovery-level investment) indicate that difficulties may soon appear in the ability of clinicians to fully realize the value of past investments in basic biology.

My hope is that this has turned around somewhat in the last two or three years. There has been a notable upswing in small company formation and funding, from what I can see, and many of them are jumping on some of that basic biology mentioned above (chimeric antigen receptor-based therapy in cancer, for example, which is one of the hottest biopharma investment areas going right now). So this could be a snapshot taken at the gloomiest point (I hope so), or it could be picking up on a long-term trend that's continuing despite any recent news. I opt for the former, but I'm an optimistic person.

Comments (10) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Industry History | Who Discovers and Why

September 19, 2014

Peter Thiel's Uncomplimentary Views of Big Pharma

Email This Entry

Posted by Derek

See what you think of Peter Thiel's characterization of the drug industry in this piece for Technology Review. Thiel's a very intelligent guy, and his larger points about technology stalling out make uncomfortable reading, in the best sense. (The famous quote is "We wanted flying cars; instead we got 140 characters"). But take a look at this (emphasis added):

You have to think of companies like Microsoft or Oracle or Hewlett-Packard as fundamentally bets against technology. They keep throwing off profits as long as nothing changes. Microsoft was a technology company in the ’80s and ’90s; in this decade you invest because you’re betting on the world not changing. Pharma companies are bets against innovation because they’re mostly just figuring out ways to extend the lifetime of patents and block small companies. All these companies that start as technological companies become antitechnological in character. Whether the world changes or not might vary from company to company, but if it turns out that these antitechnology companies are going to be good investments, that’s quite bad for our society.

I'd be interested in hearing him revise and extend those remarks, as they say in Washington. My initial reaction was to sit down and write an angry refutation, but I'm having second thoughts. The point about larger companies becoming more cautious is certainly true, and I've complained here about drug companies turning to M&A and share buybacks instead of putting that money back into research. I'd say, though, that the big drug companies aren't so much anti-technology as they are indifferent to it (or as indifferent as they can afford to be).

Even that still sounds harsh - what I mean is that they'd much rather maximize what they have, as opposed to coming up with something else. Line extensions and patent strategies are the most obvious forms of this. Buying someone else's innovations comes next, because it still avoids the pain and uncertainty of coming up with your own. There's no big drug company that does only these things, but they all do them to some degree. Share buybacks are probably the most galling form of this, because that's money that could, in theory, be applied directly to R&D, but is instead being used to prop up the share price.

But Thiel mentions elsewhere in his interview that we could, for example, be finding cures for Alzheimer's, and we're not. Eli Lilly, though, is coming close to betting the company on the disease, taking one huge swing after another at it. Thiel's larger point stands, about how more of the money that's going into making newer, splashier ways to exchange cat pictures and one-liners over the mobile phone networks could perhaps be applied better (to Alzheimer's and other things). But it's not that the industry hasn't been beating away on these itself.

I worry that the Andy Grove fallacy might be making an appearance again, given Thiel's background (PayPal, Facebook, LinkedIn). That link has a lot more on that idea, but briefly, it's the tendency for some people from the computing/IT end of the tech world to ask what the problem is with biomedical research, because it doesn't improve like computing hardware does. It's a good day to reference the "No True Scotsman" fallacy, too: sometimes people seem to identify "technology" with computing, and if something doesn't double in speed and halve in cost every time you turn around, well, that's not "real" technology. At the very least, it's not living up to its potential, and there must be something wrong with it.

I also worry that Thiel adduces the Manhattan project, the interstate highway system, and the Apollo program as examples of the sort of thing he'd like to see more of. Not that I have anything against any of those - it's just that they're all engineering projects, rather than discovery ones. The interstate system, especially: we know how to build roads, so build bigger ones. The big leap there was the idea that we needed large, standardized ones across the whole country, with limited entrances and exits. (And that was born out of Eisenhower's experiences driving across the country as the road network formed, and seeing Germany's autobahns during the war).

But you can say similar things about Apollo: we know that rockets can exist, so build bigger ones that can take people to the moon and back. There were a huge number of challenges along the way, in concept, design, and execution, but the problem was fundamentally different than, say, curing Alzheimer's. We don't even know that Alzheimer's can be cured - we're just assuming that it can. I really tend to think it can be cured, myself, but since we don't even know what causes it, that's a bit of a leap of faith. We're still making fundamental "who knew?" type discoveries in biochemistry and molecular biology, of the sort that would totally derail most big engineering projects. The Manhattan project is the closest analog of the three mentioned, I'd say, because atomic physics was such a new field (and Oppenheimer had to make some massive changes in direction along the way because of that). But I've long felt that the Manhattan project is a poor model, since it's difficult to reproduce its "Throw unlimited amounts of money and talent at the problem" mode, not to mention the fight-for-the-survival-of-your-civilization aspect.

But all that said, I do have to congratulate Peter Thiel on putting his money down on his ideas, through his investment fund. One of the things I'm happiest about in today's economy, actually, is the way that some of the internet billionaires are spending their money. Overall, I'd say that many of them agree with Thiel that we haven't discovered a lot of things that we could have, and they're trying to jump-start that. Good luck to them, and to us.

Comments (58) + TrackBacks (0) | Category: Business and Markets | General Scientific News | Who Discovers and Why

February 20, 2014

The NIH Takes a Look At How the Money's Spent

Email This Entry

Posted by Derek

The NIH is starting to wonder what bang-for-the-buck it gets for its grant money. That's a tricky question at best - some research takes a while to make an impact, and the way that discoveries can interact is hard to predict. And how do you measure impact, by the way? These are all worthy questions, but here's apparently the way things are being approached:

Michael Lauer's job at the National Institutes of Health (NIH) is to fund the best cardiology research and to disseminate the results rapidly to other scientists, physicians, and the public. But NIH's peer-review system, which relies on an army of unpaid volunteer scientists to prioritize grant proposals, may be making it harder to achieve that goal. Two recent studies by Lauer, who heads the Division of Cardiovascular Sciences at NIH's National Heart, Lung, and Blood Institute (NHLBI) in Bethesda, Maryland, raise some disturbing questions about a system used to distribute billions of dollars of federal funds each year.

Lauer recently analyzed the citation record of papers generated by nearly 1500 grants awarded by NHLBI to individual investigators between 2001 and 2008. He was shocked by the results, which appeared online last month in Circulation Research: The funded projects with the poorest priority scores from reviewers garnered just as many citations and publications as those with the best scores. That was the case even though low-scoring researchers had been given less money than their top-rated peers.

I understand that citations and publications are measurable, while most other ways to gauge importance aren't. But that doesn't mean that they're any good, and I worry that the system is biased enough already towards making these the coin of the realm. This sort of thing worries me, too:

Still, (Richard) Nakamura is always looking for fresh ways to assess the performance of study sections. At the December meeting of the CSR advisory council, for example, he and Tabak described one recent attempt that examined citation rates of publications generated from research funded by each panel. Those panels with rates higher than the norm—represented by the impact factor of the leading journal in that field—were labeled "hot," while panels with low scores were labeled "cold."

"If it's true that hotter science is that which beats the journals' impact factors, then you could distribute more money to the hot committees than the cold committees," Nakamura explains. "But that's only if you believe that. Major corporations have tried to predict what type of science will yield strong results—and we're all still waiting for IBM to create a machine that can do research with the highest payoff," he adds with tongue in cheek.

"I still believe that scientists ultimately beat metrics or machines. But there are serious challenges to that position. And the question is how to do the research that will show one approach is better than another."

I'm glad that he doesn't seem to be taking this approach completely seriously, but others may. If only impact factors and citation rates were real things that advanced human knowledge, instead of games played by publishers and authors!

Comments (34) + TrackBacks (0) | Category: The Scientific Literature | Who Discovers and Why

February 11, 2014

Drug Discovery in India

Email This Entry

Posted by Derek

Molecular biologist Swapnika Ramu, a reader from India, sends along a worthwhile (and tough) question. She says that after her PhD (done in the US), her return to India has made her "less than optimistic" about the current state of drug discovery there. (Links in the quote below have been added by me, not her.)

Firstly, there isn't much by way of new drug development in India. Secondly, as you have discussed many times on your blog. . .drug pricing in India remains highly contentious, especially with the recent patent disputes. Much of the public discourse descends into anti-big pharma rhetoric, and there is little to no reasoned debate about how such issues should be resolved. . .

I would like to hear your opinion on what model of drug discovery you think a developing nation like India should adopt, given the constraints of finance and a limited talent pool. Target-based drug discovery was the approach that my previous company adopted, and not surprisingly this turned out to be a very expensive strategy that ultimately offered very limited success. Clearly, India cannot keep depending upon Western pharma companies to do all the heavy lifting when it comes to developing new drugs, simply to produce generic versions for the Indian public. The fact that several patents are being challenged in Indian courts would make pharma skittish about the Indian market, which is even more of a concern if we do not have a strong drug discovery ecosystem of our own. Since there isn't a robust VC-based funding mechanism, what do you think would be a good approach to spurring innovative drug discovery in the Indian context?

Well, that is a hard one. My own opinion is that India only has a limited talent pool as compared to Western Europe or the US - the country still has a lot more trained chemists and biologists than most other places. It's true, though, that the numbers don't tell the story very well. The best people from India are very, very good, but there are (from what I can see) a lot of poorly trained ones with degrees that seem (at least to me) worth very little. Still, you've got a really substantial number of real scientists, and I've no doubt that India could have several discovery-driven drug companies if the financing were easier to come by (and the IP situation a bit less murky - those two factors are surely related). Whether it would have those, or even should, is another question.

As has been clear for a while, the Big Pharma model has its problems. Several players are in danger of falling out of the ranks (Lilly, AstraZeneca), and I don't really see anyone rising up to replace them. The companies that have grown to that size in the last thirty years mostly seem to be biotech-driven (Amgen, Biogen, Genentech as was, etc.)

So is that the answer? Should Indian companies try to work more in that direction than in small molecule drugs? Problem is, the barriers to entry in biotech-derived drugs are higher, and that strategy perhaps plays less to the country's traditional strengths in chemistry. But in the same way that even less-developed countries are trying to skip over the landline era of telephones and go straight to wireless, maybe India should try skipping over small molecules. I do hate to write that, but it's not a completely crazy suggestion.

But biomolecule or small organic, to get a lot of small companies going in India (and you would need a lot, given the odds) you would need a VC culture, which isn't there yet. The alternative (and it's doubtless a real temptation for some officials) would be for the government to get involved to try to start something, but I would have very low hopes for that, especially given the well-known inefficiencies of the Indian bureaucracy.

Overall, I'm not sure if there's a way for most countries not to rely on foreign companies for most (or all) of the new drugs that come along. Honestly, the US is the only country in the world that might be able to get along with only its own home-discovered pharmacopeia, and it would still be a terrible strain to lose the European (and Japanese) discoveries. Even the likes of Japan, Switzerland, and Germany use, for the most part, drugs that were discovered outside their own countries.

And in the bigger picture, we might be looking at a good old Adam Smith-style case of comparative advantage. It sure isn't cheap to discover a new drug in Boston, San Francisco, Basel, etc., but compared to the expense of getting pharma research in Hyderabad up to speed, maybe it's not quite as bad as it looks. In the longer term, I think that India, China, and a few other countries will end up with more totally R&D-driven biomedical research companies of their own, because the opportunities are still coming along, discoveries are still being made, and there are entrepreneurial types who may well feel like taking their chances on them. But it could take a lot longer than some people would like, particularly researchers (like Swapnika Ramu) who are there right now. The best hope I can offer is that Indian entrepreneurs should keep their eyes out for technologies and markets that are new enough (and unexplored enough) so that they're competing on a more level playing field. Trying to build your own Pfizer is a bad idea - heck, the people who built Pfizer seem to be experiencing buyer's remorse themselves.

Comments (30) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why

December 16, 2013

NIH Taking on More Risk?

Email This Entry

Posted by Derek

You'd have to think that this is at least a step in the right direction: "NIH to experiment with high-risk grants":

On 5 December, agency director Francis Collins told an advisory committee that the NIH should consider supporting more individual researchers, as opposed to research proposals as it does now — an idea inspired in part by the success of the high-stakes Pioneer awards handed out by the NIH's Common Fund.

“It’s time to look at balancing our portfolio,” says Collins, who plans to pitch the idea to NIH institute directors at a meeting on 6 January.

The NIH currently spends less than 5% of its US$30-billion budget on grants for individual researchers, including the annual Pioneer awards, which give seven people an average of $500,000 a year for five years. In contrast, the NIH’s most popular grant, the R01, typically awards researchers $250,000 per year for 3‒5 years, and requires a large amount of preliminary data to support grant applications.

They're not going to get rid of the R01 grant any time soon, but what Collins is talking about here is getting a bit more like the Howard Hughes funding model (HHMI grants run for five years as well, and tend to be awarded more towards the PI than towards the stated projects). One problem is that the NIH is evaluating the success of the Pioneer grants by noting that the awardees publish more highly-cited papers, and that may or may not be a good measure:

But critics say that there is little, if any, evidence that this approach is superior. “'People versus projects’ is the HHMI bumper sticker, but it’s a misreading of what makes the HHMI great,” says Pierre Azoulay, an economist at the Massachusetts Institute of Technology in Cambridge. Azoulay suggests that the findings of the NIH’s 2012 report may actually reflect factors such as the impact of the HHMI’s unusually lengthy funding windows, which allow a lot of time for innovation. In contrast, he says, “the Pioneer grants are freedom with an expiration date”.

Daniel Sarewitz, co-director of the Consortium for Science, Policy and Outcomes at Arizona State University in Tempe, adds that funding individual researchers may well increase the number of publications they produce. “But that may or may not have anything to do with enhancing the NIH's potential to contribute to actual health outcomes”, such as translation of research into the clinic, he says.

I'd like for them to set aside some money for ideas that have a low chance of working, but which would be big news if they actually came through. The high-risk high-reward stuff would also have to be awarded more by evaluating the people involved, since none of them would look likely in the "Tell us exactly what results you expect" mode of grant funding. But I can say this, not having to be the person who wades through the stacks of applications - sorting those out would probably be pretty painful.

Comments (9) + TrackBacks (0) | Category: Who Discovers and Why

December 10, 2013

Standards of Proof

Email This Entry

Posted by Derek

Here are some slides from Anthony Nicholls of OpenEye, from his recent presentation here in Cambridge on his problems with molecular dynamics calculations. Here's his cri du coeur (note: fixed a French typo from the original post there):

. . .as a technique MD has many attractive attributes that have nothing to do with its actual predictive capabilities (it makes great movies, it’s “Physics”, calculations take a long time, it takes skill to do right, “important” people develop it, etc). As I repeatedly mentioned in the talk, I would love MD to be a reliable tool - many of the things modelers try to do would become much easier. I just see little objective, scientific evidence for this as yet. In particular, it bothers me that MD is not held to the same standards of proof that many simpler, empirical approaches are - and this can’t be good for the field or MD.

I suspect he'd agree with the general principle that while most things that are worthwhile are hard, not everything that's hard is worthwhile. His slides are definitely fun to read, and worthwhile even if you don't give a hoot about molecular dynamics. The errors he's warning about apply to all fields of science. For example, he starts off with the definition of cognitive dissonance from Wikipedia, and proposes that a lot of the behavior you see in the molecular dynamics field fits the definitions of how people deal with this. He also maintains that the field seems to spend too much of its time justifying data retrospectively, and that this isn't a good sign.

I especially enjoyed his section on the "Tanimoto of Truth". That's comparing reality to experimental results. You have the cases where there should have been a result and the experiment showed it, and there shouldn't have been one, and the experiment reproduced that, too: great! But there are many more cases where only that first part applies, or gets published (heads I win, tails just didn't happen). And you have the inverse of that, where there was nothing, in reality, but your experiment told you that there was something. These false positives get stuck in the drawer, and no one hears about them at all. The next case, the false negatives, often end up in the "parameterize until publishable" category (as Nicholls puts it), or they get buried as well. The last category (should have been negative, experiment says they're negative) are considered so routine and boring that no one talks about them at all, although logically they're quite important.
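For readers who don't deal in similarity metrics every day, the Tanimoto (Jaccard) coefficient that Nicholls is borrowing here is just the overlap between the set of cases an experiment calls positive and the set that really are positive: true positives divided by everything positive in either set. Note that the true negatives drop out of the calculation entirely, which fits his point about the routine-but-important boring cases. A minimal sketch (the function name and the counts are mine, purely for illustration):

```python
def tanimoto_of_truth(tp, fp, fn):
    """Tanimoto (Jaccard) overlap between predicted positives and real
    positives: tp / (tp + fp + fn). True negatives don't appear at all."""
    return tp / (tp + fp + fn)

# A method that looks fine from its publications alone (tp = 40), but
# whose drawer-bound false positives (fp = 35) and buried false
# negatives (fn = 25) never see print, scores much worse once they're
# counted:
print(tanimoto_of_truth(40, 35, 25))  # prints 0.4
```

The drawer effect is the whole point: the published record lets you see tp, but without fp and fn you simply can't compute how robust the method is.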

All this can impart a heavy, heavy publication bias: you only hear about the stuff that worked, even if some of the examples you hear about really didn't. And unless you do a lot of runs yourself, you don't usually have a chance to see how robust the system really is, because the data you'd need aren't available. The organic synthesis equivalent is when you read one of those papers that do, in fact, work on the compounds in Table 1, but hardly any others. And you have to pay close attention to Table 1 to realize that, you know, there aren't any basic amines on that list (or esters, or amides, or what have you), are there?

The rest of the slides get into the details of molecular dynamic simulations, but he has some interesting comments on the paper I blogged about here, on modeling of allosteric muscarinic ligands. Nicholls says that "There are things to admire about this paper- chiefly that a prospective test seems to have been done, although not by the Shaw group." That caught my eye as well; it's quite unusual to see that, although it shouldn't be. But he goes on to say that ". . .if you are a little more skeptical it is easy to ask what has really been done here. In their (vast) supplementary material they admit that GLIDE docking results agree with mutagenesis as well (only, “not quite as well’, whatever that means- no quantification, of course). There’s no sense, with this data, of whether there are mutagenesis results NOT concordant with the simulations." And that gets back to his Tanimoto of Truth argument, which is a valid one.

He also points out that the predictions ended up being used to make one compound, which is not a very robust standard of proof. The reason, says Nicholls, is that molecular dynamics papers are held to a lower standard, and that's doing the field no good.

Comments (9) + TrackBacks (0) | Category: In Silico | Who Discovers and Why

November 7, 2013

Organizing Research

Email This Entry

Posted by Derek

Here's an article in Angewandte Chemie that could probably have been published in several other places, since it's not specifically about chemistry. It's titled "The Organization of Innovation - The History of an Obsession", from Caspar Hirschi at St. Gallen in Switzerland, and it's a look at how both industrial and academic research have been structured over the years.

He starts off with an article from The Economist on the apparent slowdown in innovation. This idea has attained wider currency in recent years (Tyler Cowen's The Great Stagnation is an excellent place to start, although it's not just about innovation). I should note that the Economist article does not buy into this theory. Hirschi dissents, too, but from another direction:

Despite what the authors would have us believe, the “innovation blues” lamented in The Economist have little to do with the current course of technological development. The source of the perceived problem arises instead from a sense of disappointment over the fact that their innovation theory does not hold up to its empirical promise. The theory is made up of a chain of causation that sees science as the most important driving force behind innovation, and innovation as the most important driving force for the economy, and an organizational principle maintaining that the three links in the chain function most efficiently under market-oriented competition.

The problem with this theory is that only the first part of its causal chain stands up to empirical scrutiny. There is every indication that progress in scientific knowledge leads to technical innovations, but it is highly unlikely that a higher level of innovation results in greater prosperity. . .

He then goes back to look at the period from 1920 to 1960, which is held up by many who write about this subject as a much more fruitful time for innovation. Here's the main theme:

A comparison between these previously used formulas with those used today strongly suggests that our greatest stumbling block to innovation is our theory-based obsession with innovation itself. This obsession has made scientists and technical experts play by rules stipulating that they can deliver outstanding results only if they are exposed to the competitive forces of a market; and if no such market exists— as in the case of government research funding—a market has to be simulated. Before 1960, the organization of innovation had hewed to the diametrically opposite principle. . .

I have a couple of problems with this analysis. For one, I think that parts of the 19th century were also wildly productive of innovation, and that era was famously market-driven. Another difficulty is that when you're looking at 1920-1960, there's the little matter of World War II right in the middle of it. The war vastly accelerated technological progress in numerous areas (aeronautics, rocketry, computer hardware and software, information science and cryptography, radar and other microwave applications, atomic physics, and many more). The conditions were very unusual: people were, in many cases, given piles of money and other resources, with the understanding that the continued existence of their countries and their own lives could very well be at stake. No HR department could come up with a motivational plan like that - at least, I hope not.

Hirschi surveys the period, though, with less emphasis on all this, but it does come up in his discussion of Kenneth Mees of Eastman Kodak, who was a very influential thinker on industrial research. It was his theory of how it should be run that led to institutions like Bell Labs:

The key to success of an industrial laboratory, he explained, lay in the ability of its directors to recreate the organizational advantages of a university in a commercial setting. Industrial scientists ought to be given the greatest possible latitude to conduct their research as they see fit, with less outside interference, flat hierarchies within the institution, and department heads who are themselves scientists. Like professors at universities, he contended, the senior scientific staff should hold permanent appointments, and all scientists ought to have the opportunity to publish their research results.

Readers here will be reminded of the old "Central Research" departments of companies like DuPont, Bayer, Ciba and others. These were set up very much along these lines, to do "blue sky" work that might lead to practical applications down the road. It's absolutely true that the past thirty years has seen most of this sort of thing disappear from the world, and it's very tempting to assign any technological slowdown to that very change. But you always have to look out for the post hoc ergo propter hoc fallacy: it's also possible that a slowdown already under way led to the cutbacks in less-directed research. Here's more on Mees:

For Mees, industrial research was a “gamble”, and could not be conducted according to the rules of “efficiency engineering”. Research, he insisted, requires a great abundance of staff members, ideas, money, and time. Anyone who is unwilling to wait ten years or more for the first results to emerge has no business setting up a laboratory in the first place. Mees established the following rule for the organization of scientific work: “The kinds of research which can be best planned are found to be those which are least fundamental.” But because Mees regarded the basic sciences as the most important source of innovation, he advised research directors to try not to rein in their scientists with assignments, but instead to inspire them with questions.

I don't find a lot to argue with in that sort of thinking, but that might be because I like it (which doesn't necessarily mean that it's true). I hope it is, and I would rather live in a world where it is, but things don't have to be that way. I do think, though, very strongly, that the application of what's called "efficiency engineering" to R&D is a recipe for disaster. (See here, here, here, here, and here for more). And there are people in high places who apparently agree.

The Ang. Chem. article goes on to note, correctly, that many of these big industrial research operations were funded by monopoly (or near monopoly) profits. AT&T, IBM, Eastman Kodak and others began to use their research arms as public relations tools to argue for that status quo to continue.

Because monopolies could not be justified directly, the only route the companies in question had open to them was a detour that required more and more elaborate displays of their capacity for innovation. Once again, architecture proved to be well suited for this purpose. In the late 1950s and early 1960s, several American industrial groups built new research centers. They opted to locate them in isolated surroundings in the style of a modern university campus, and favored a new architectural style that moved away from the traditional laboratory complexes based on classic industrial buildings such as the one in Murray Hill. . .

The architecture critics of the time soon came up with an apt name for these buildings: “Industrial Versailles”. The term was fitting because the new research centers were to industrial innovation what Versailles had been to the Sun King: complexes of representation, as the historians of technology Scott Knowles and Stuart Leslie have detailed.

We actually owe a lot of our current ideas about research building design to the thoughts about what made Bell Labs so productive in the 1950s - as you keep digging, you keep finding the same roots. Even if those theories were correct, whether the later, showier buildings were true to them is open for debate.

Hirschi finishes up his piece with the 1957 Sputnik launch, which famously had a huge effect on academic science funding in the US. I only realized when I was in my 20s that my whole impression of the science facilities in my own middle and high school in Arkansas was shaped by that event. I'd sort of assumed that things like this were just always ten or twenty years old, but that was because I was seeing the aftereffects of that wave of funding, which reached all the way to the Mississippi Delta. Here's Hirschi on the effects in higher education and beyond:

The explosion of government research funding resulted in serious quandaries about how best to allocate these funds. There were countless research institutes, and there was a need for clear rationales as to which institutions and individuals would be entitled to how many dollars that came from taxes. An attempt was made to meet this challenge by introducing an element of marketlike competition. Artificial competition for project-related subsidies, to be regulated and controlled by the funding agencies, was set in motion. Successful proposals needed to provide precise details about the scope of each project and a set time frame was assigned for the completion of a given project, which made it necessary for grant seekers to package fundamental research as though it were application-oriented. This set-up ushered in a period in which innovations were proclaimed well before the fact, and talked up as monumental breakthroughs in the quest to secure funding. Representation became integral to production.

It did not take long for this new regime to have drastic reverberations for industrial research. The flood of money that inundated the research universities heightened the incentive for industrial groups to outsource costly laboratory work to universities or public research centers. At the same time, they were inspired by the public administration's belief in the rules of the market to pay heed in their own research divisions to the credo that the innovative impulse requires the intensity of a competitive situation. In the long run, the private sector did its part in making the new form of market-oriented project research the only accepted organizational principle.

To my eye, though, the whole article wraps up rather quickly at this point. It reads as a reasonable short history of research organization in the mid-20th century, followed by several assertions. Hirschi's not advocating a return to the 1950s (he explicitly states this), but it's hard to say what he is advocating, other than somehow getting rid of some of what he seems to feel is unseemly competition and market-driven stuff. "The solution can only lie in the future" is a line from the last paragraph, and I hope it reads better in German.

Comments (19) + TrackBacks (0) | Category: Who Discovers and Why

October 17, 2013

Creativity Training For Creative Creators

Email This Entry

Posted by Derek

Here's a bilious broadside against the whole "creativity" business - the books, courses, and workshops that will tell you how to unleash the creative powers within your innards and those of your company:

And yet the troubled writer also knew that there had been, over these same years, fantastic growth in our creativity promoting sector. There were TED talks on how to be a creative person. There were “Innovation Jams” at which IBM employees brainstormed collectively over a global hookup, and “Thinking Out of the Box” desktop sculptures for sale at Sam’s Club. There were creativity consultants you could hire, and cities that had spent billions reworking neighborhoods into arts-friendly districts where rule-bending whimsicality was a thing to be celebrated. If you listened to certain people, creativity was the story of our time, from the halls of MIT to the incubators of Silicon Valley.

The literature on the subject was vast. Its authors included management gurus, forever exhorting us to slay the conventional; urban theorists, with their celebrations of zesty togetherness; pop psychologists, giving the world step-by-step instructions on how to unleash the inner Miles Davis. Most prominent, perhaps, were the science writers, with their endless tales of creative success and their dissection of the brains that made it all possible.

I share his skepticism, although the author (Thomas Frank) comes at the whole question from a left-wing political perspective, which is rather far from my own. I think he's correct that many of the books, etc., on this topic have the aim of flattering their readers and reinforcing their own self-images. And I also have grave doubts about the extent to which creativity can be taught or enhanced. There are plenty of things that will squash it, and so avoiding those is a good thing if creativity is actually what you're looking for in the first place. But gain-of-function in this area is hard to achieve: taking a more-or-less normal individual, group, or company and somehow ramping up their creative forces is something that I don't think anyone really knows how to do.

That point I made in passing there is worth coming back to. Not everyone who says that they value rule-breaking, disruptive creative types really means it, you know. "Creative" is often used as a feel-good buzzword: the sort of thing that companies know they're supposed to say they are and want to be.

"Innovative" works the same way, and there are plenty of others, which can be extracted from any mission statement that you might happen to have lying around. I think those belong in the same category as the prayers of Abner Scofield. He's the coal dealer in Mark Twain's "Letter to the Earth", and is advised by a recording angel that: "Your remaining 401 details count for wind only. We bunch them and use them for head winds in retarding the ships of improper people, but it takes so many of them to make an impression that we cannot allow anything for their use". Just so.

Comments (32) + TrackBacks (0) | Category: Who Discovers and Why

August 19, 2013

An Inspirational Quote from Bernard Munos

Email This Entry

Posted by Derek

In the comments thread to this post, Munos has this to say:

Innovation cannot thrive upon law and order. Sooner or later, HR folks will need to come to grips with this. Innovators (the real ones) are rebels at heart. They are not interested in growing and nurturing existing markets because they want to obliterate and replace them with something better. They don't want competitive advantage from greater efficiency, because they want to change the game. They don't want to optimize, they want to disrupt and dominate the new markets they are creating. The most damaging legacy of the process-minded CEOs who brought us the innovation crisis has been to purge disrupters from the ranks of pharma. Yes, they are tough to manage, but every innovative company needs them, and must create a climate that allows them to thrive. . .

I wanted to bring that up to the front page, because I enjoy hearing things like this, and I hope that they're true.

Comments (73) + TrackBacks (0) | Category: Who Discovers and Why

August 7, 2013

Reworking Big Pharma

Email This Entry

Posted by Derek

Bruce Booth (of Atlas Venture) has a provocative post up at Forbes on what he would do if he were the R&D head of a big drug company. He runs up his flag pretty quickly:

I don’t believe that we will cure the Pharma industry of its productivity ills through smarter “operational excellence” approaches. Tweaking the stage gates, subtly changing attrition curves, prioritizing projects more effectively, reinvigorating phenotypic screens, doing more of X and less of Y – these are all fine and good, and important levers, but they don’t hit the key issue – which is the ossified, risk-avoiding, “analysis-paralysis” culture of the modern Pharma R&D organization.

He notes that the big companies have all been experimenting with ways to get more new thinking and innovation into their R&D (alliances with academia, moving people to the magic environs of Cambridge (US or UK), and so on). But he's pretty skeptical about any of this working, because all of this tends to take place out on the edges. And what's in the middle? The big corporate campus, which he says "has become necrotic in many companies". What to do with it? He has several suggestions, but here's a big one. Instead of spending five or ten per cent of the R&D budget on out-there collaborations, why not, he says, go for broke:

Taken further, bringing the periphery right into the core is worth considering. This is just a thought experiment, and certainly difficult to do in practice, but imagine turning a 5000-person R&D campus into a vibrant biotech park. Disaggregate the research portfolio to create a couple dozen therapeutically-focused “biotech” firms, with their own CEOs, responsible for a 3-5 year plan and with a budget that maps to that plan. Each could have its own Board and internal/external advisors, and flexibility to engage free market service providers outside the biotech park. Invite new venture-backed biotechs and CROs to move into the newly rebranded biotech park, incentivized with free lab space, discounted leases, access to subsidized research capabilities, or even unencumbered matching grants. Put some of the new spin-outs from their direct academic initiatives into the mix. But don’t put strings on those new externally-derived companies like the typical Pharma incubator; these will constrain the growth of these new companies. Focus this big initiative on one simple benefit: strategic proximity to a different culture.

His second big recommendation is "Get the rest of the company out of research's way". And by that, he especially means the commercial part of the organization:

One immediate solution would be to kick Commercial input out of decision-making in Research. Or, more practically, at least reduce it dramatically. Let them know that Research will hand them high quality post-PoC Phase 3-ready programs addressing important medical needs. Remove the market research gates and project NPV assessment models from critical decision-making points. Ignore the commercially-defined “in” vs “out” disease states that limit Research teams’ degrees of freedom. Let the science and medicine guide early program identification and progress. . .If you don’t trust the intellect of your Research leaders, then replace them. But second-guessing, micro-managing, and over-analyzing doesn’t aid in the exploration of innovation.

His last suggestion is to shake up the Board of Directors, and whatever Scientific Advisory Board the company has:

Too often Pharma defaults to not engaging the outside because “they know their programs best” or for fear of sharing confidential information that might leak to its competition. Reality is the latter is the least of their worries, and I’ve yet to hear this as being a source of profound competitive intelligence leakage. A far worse outcome is unchallenged “group think” about the merits (or demerits) of a program and its development strategy. Importantly, I’m not talking about specific Key Opinion Leader engagement on projects, as most Pharma companies do this effectively already. I’m referring to a senior, strategic, experienced advisory function from true practitioners in the field to help the R&D leadership team get a fresh perspective.

This is part of the "get some outside thinking" that is the thrust of his whole article. I can certainly see where he's coming from, and I think that this sort of thing might be exactly what some companies need. But what are the odds of (a) their realizing that and (b) anything substantial being done about it? I'm not all that optimistic - and, to be sure, Booth's article also mentions that some of these ideas might well be unworkable in practice.

I think that's because there's another effect that all of Bruce's recommendations have: they decrease the power and influence of upper management. Break up your R&D department, let in outside thinking, get your people to strike out pursuing their own ideas. . .all of those cut into the duties of Senior Executive Vice Presidents of Strategic Portfolio Planning, you know. Those are the sorts of people who will have to sign off on such changes, or who will have a chance to block them or slow their implementation. You'll have to sneak up on them, and there might not be enough time to do that in some of the more critical cases.

Another problem is what the investors would do if you tried some of the more radical ideas. As the last part of the post points out, we have a real problem in this business with our relationship with Wall Street. The sorts of people who want quarter-by-quarter earnings forecasts would absolutely freak if you told them that you were tearing the company up into a pile of biotechs. (And by that, I mean tearing it up for real, not creating centers-of-innovation-excellence or whatever the latest re-org chart might call it). It's hard to think of a good way out of that one, too, for a large public company.

Now, there are people out there who have enough nerve and enough vision to try some things in this line, and once in a while you see it happen. But inertial forces are very strong indeed. With some organizations, it might be less work to just start over, rather than to spend all that effort tearing down the things you want to get rid of. For all I know, this is what (say) AstraZeneca has in mind with its shakeup and moving everyone to Cambridge. But what systems and attitudes are going to be packed up and moved over along with all the boxes of lab equipment?

Comments (39) + TrackBacks (0) | Category: Drug Industry History | Who Discovers and Why

July 17, 2013

The GSK Jackpot

Email This Entry

Posted by Derek

Well, this got my attention: according to the Sunday Times, GlaxoSmithKline is preparing to hand out hefty bonus payments to scientists if they have a compound approved for sale. Hefty, in this context, means up to several million dollars. The earlier (and much smaller) payouts for milestones along the way will disappear, apparently, to be replaced by this jackpot.

The article says that "The company will determine who is entitled to share in the payout by judging which staff were key to its discovery and development", and won't that be fun? In Germany, the law is that inventors on a corporate patent do get a share of the profits, which can be quite lucrative, but it means that there are some very pointed exchanges about just who gets to be an inventor. The prospect of million-dollar bonuses will be very welcome, but will not bring out the best in some people, either. (It's not clear to me, though, if these amounts are to be split up among people somehow, or if single individuals can possibly expect that much).

John LaMattina has some thoughts on this idea here. He's also wondering how to assign credit:

I am all for recognizing scientists in this way. After all, they must be successful in order for a company the size of GSK to have a sustaining pipeline. However, the drug R&D process is really a team effort and not driven by an individual. The inventor whose name is on the patent is generally the chemist or chemists who designed the molecule that had the necessary biological activity. Rarely, however, are chemists the major contributor to the program’s success. Oftentimes, it is a biologist who conceives the essence of the program by the scientific insight he or she might have. The discovery of Pfizer’s Xeljanz is such a case. There have been major classes of drugs that have been saved by toxicologists who ran insightful animal experiments to explain aberrant events in rats as was done by Merck with both the statins and proton-pump inhibitors – two of the biggest selling classes of drugs of all time.

On occasion, the key person in a drug program is the process chemist who has designed a synthesis of the drug that is amenable to the large scales of material needed to conduct clinical trials. Clinical trial design can also be crucial, particularly when studying a drug with a totally new mechanism of action. A faulty trial design can kill any program. Even a nurse involved in the testing of a drug can make the key discovery, as happened in Pfizer’s phase 1 program with Viagra, where the nurse monitoring the patients noticed that the drug was enhancing blood flow to an organ other than the heart. To paraphrase Hillary Clinton, it takes a village to discover and develop a drug.

You could end up with a situation where the battery is arguing with the drive shaft, both of whom are shouting at the fuel pump and refusing to speak to the tires, all because there was a reward for whichever one of them was the key to getting the car to go down the driveway.

There's another problem - getting a compound to go all the way to the market involves a lot of luck as well. No one likes to talk about that very much - it's in everyone's interest to show how it was really due to their hard work and intelligence - but equal amounts of hard work and brainpower go into projects that just don't make it. Those are necessary, but not sufficient. So if GSK is trying to put this up as an incentive, it's only partially coupled to factors that the people it's aimed at can influence.

And as LaMattina points out, the time delay in getting drugs approved is another factor. If I discover a great new compound today, I'll be lucky to see it on the market by, say, 2024 or so. I have no objection to someone paying me a million dollars on that date, but it won't have much to do with what I've been up to in the interim. And in many cases, some of the people you'd want to reward aren't even with the company by the time the drug makes it through, anyway. So while I cannot object to drug companies wanting to hand out big money to their scientists, I'm not sure what it will accomplish.

Comments (71) + TrackBacks (0) | Category: Business and Markets | Drug Development | Who Discovers and Why

June 14, 2013

One. . .Million. . .Pounds (For a New Antibiotic?)

Email This Entry

Posted by Derek

Via Stuart Cantrill on Twitter, I see that UK Prime Minister David Cameron is prepared to announce a prize for anyone who can "identify and solve the biggest problem of our time". He's leaving that open, and his examples are apparently ". . .the next penicillin, aeroplane or world wide web".

I like the idea of prizes for research and invention. The thing is, the person who invents the next airplane or World Wide Web will probably do pretty well off it through the normal mechanisms. And it's worth thinking about the very, very different pathways these three inventions took, both in their discovery and their development. While thinking about that, keep in mind the difference between those two stages.

The Wrights' first powered airplane, a huge step in human technology, was good for carrying one person (lying prone) for a few hundred yards in a good wind. Tim Berners-Lee's first Web page, another huge step, was a brief bit of code on one server at CERN, and mostly told people about itself. Penicillin, in its early days, was famously so rare that the urine of the earliest patients was collected and extracted in order not to waste any of the excreted drug. And even that was a long way from Fleming's keen-eyed discovery of the mold's antibacterial activity. A more vivid example than penicillin of the need for huge amounts of development from an early discovery is hard to find.

And how does one assign credit to the winner? Many (most) of these discoveries take a lot of people to realize them - certainly, by the time it's clear that they're great discoveries. Alexander Fleming (very properly) gets a lot of credit for the initial discovery of penicillin, but if the world had depended on him for its supply, it would have been very much out of luck. He had a very hard time getting anything going for nearly ten years after the initial discovery, and not for lack of trying. The phrase "Without Fleming, no Chain; without Chain, no Florey; without Florey, no Heatley; without Heatley, no penicillin" properly assigns credit to a lot of scientists that most people have never heard of.

Those are all points worth thinking about, if you're thinking about Cameron's prize, or if you're David Cameron. But that's not all. Here's the real kicker: he's offering one million pounds for it ($1.56 million as of this morning). This is delusional. The number of great discoveries that can be achieved for that sort of money is, I hate to say, rather small these days. A theoretical result in math or physics might certainly be accomplished in that range, but reducing it to practice is something else entirely. I can speak to the "next penicillin" part of the example, and I can say (without fear of contradiction from anyone who knows the tiniest bit about the subject) that a million pounds could not, under any circumstances, tell you if you had the next penicillin. That's off by a factor of a hundred, if you just want to take something as far as a solid start.

There's another problem with this amount: in general, anything that's worth that much is actually worth a lot more; there's no such thing as a great, world-altering discovery that's worth only a million pounds. I fear that this will be an ornament around the neck of whoever wins it, and little more. If Cameron's committee wants to really offer a prize in line with the worth of such a discovery, they should crank things up to a few hundred million pounds - at least - and see what happens. As it stands, the current idea is like me offering a twenty-dollar bill to anyone who brings me a bar of gold.

Comments (28) + TrackBacks (0) | Category: Current Events | Drug Industry History | Infectious Diseases | Who Discovers and Why

May 2, 2013

E. O. Wilson's "Letters to a Young Scientist"

Email This Entry

Posted by Derek

I've been reading E. O. Wilson's new book, Letters to a Young Scientist. It's the latest addition to the list of "advice from older famous scientists" books, which also includes Peter Medawar's similarly titled Advice To A Young Scientist and what is probably the grandfather of the entire genre, Ramón y Cajal's Advice for a Young Investigator. A definite personal point of view comes across in this one, since its author is famously unafraid to express his strongly held opinions. There's some 100-proof Wilson in this book as well:

. . .Science is the wellspring of modern civilization. It is not just "another way of knowing", to be equated with religion or transcendental meditation. It takes nothing away from the genius of the humanities, including the creative arts. Instead it offers ways to add to their content. The scientific method has been consistently better than religious beliefs in explaining the origin and meaning of humanity. The creation stories of organized religions, like science, propose to explain the origin of the world, the content of the celestial sphere, and even the nature of time and space. These mythic accounts, based mostly on the dreams and epiphanies of ancient prophets, vary from one religion's belief to another. Colorful they are, and comforting to the minds of believers, but each contradicts all the others. And when tested in the real world they have so far proved wrong, always wrong.

And that brings up something else about all the books of this type: they're partly what their titles imply, guides for younger scientists. They're partly memoirs of their authors' lives (Francis Crick's What Mad Pursuit is in this category, although it has a lot of useful advice itself). And they're all attempts to explain what science really is and how it really works, especially to readers who may well not be scientists themselves.

Wilson does some of all three here, although he uses examples from his own life and research mainly as examples of the advice he's giving. And that advice, I think, is almost always on target. He has sections on how to pick areas of research, methods to use for discovery, how to best spend your time as a scientist, and so on. The book is absolutely, explicitly aimed at those who want to make their mark by discovering new things, not at those who would wish to climb other sorts of ladders. (For example, he tells academic scientists "Avoid department-level administration beyond thesis committee chairmanships if at all fair and possible. Make excuses, dodge, plead, trade.") If your ambition is to become chairman of the department or a VP of this or that, this is not the book to turn to.

But I've relentlessly avoided being put onto the managerial track myself, so I can relate to a lot of what this book has to say. Wilson spent his life at Harvard, so much of his advice has an academic slant, but the general principles of it come through very clearly. Here's how to pick an area to concentrate on:

I believe that other experienced scientists would agree with me that when you are selecting a domain of knowledge in which to conduct original research, it is wise to look for one that is sparsely inhabited. . .I advise you to look for a chance to break away, to find a subject you can make your own. . .if a subject is already receiving a great deal of attention, if it has a glamorous aura, if its practitioners are prizewinners who receive large grants, stay away from that subject.

One of the most interesting parts of the book for me is its take on two abilities that most lay readers would take as prerequisites for a successful scientist: mathematical ability and sheer intelligence in general. The first is addressed very early in the book, in what may well become a famous section:

. . .If, on the other hand, you are a bit short in mathematical training, even very short, relax. You are far from alone in the community of scientists, and here is a professional secret to encourage you: many of the most successful scientists in the world today are mathematically no more than semiliterate.

He recommends making up this deficiency, as much as you find it feasible to do so, but he's right. The topic has come up around here - I can tell you for certain that the math needed to do medicinal chemistry is not advanced, and mostly consists of being able to render (and understand) data in a variety of graphical forms. If you can see why a log/log plot tends to give you straightened-out lines, you've probably got enough math to do med-chem. You'll also need to understand something about statistics, but (again) mostly in how to interpret it so you aren't fooled by data. Pharmacokinetics gets a bit more mathematical, and (naturally) molecular modeling itself is as math-heavy as anyone could want, but the chemistry end of things is not.
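That log/log point is easy to demonstrate for yourself. Any power-law relationship y = a·x^b turns into a straight line once you take logs of both axes, because log(y) = log(a) + b·log(x). Here's a minimal sketch in Python (the constants are made up purely for illustration):

```python
import math

# A made-up power-law relationship: y = a * x**b
a, b = 2.5, 1.7
xs = [1, 10, 100, 1000]
ys = [a * x**b for x in xs]

# Taking logs of both sides gives log(y) = log(a) + b*log(x):
# a straight line in log-log coordinates, slope b, intercept log(a).
log_pts = [(math.log10(x), math.log10(y)) for x, y in zip(xs, ys)]

# The slope between the endpoints recovers the exponent b.
(x0, y0), (x1, y1) = log_pts[0], log_pts[-1]
slope = (y1 - y0) / (x1 - x0)
print(round(slope, 6))  # 1.7 - the data plot as a straight line
```

That's the whole trick behind log-log paper: curvature on linear axes, straight lines on logged ones, with the slope telling you the exponent.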

As for intelligence, see what you think about this:

Original discoveries cannot be made casually, not by anyone at any time or anywhere. The frontier of scientific knowledge, often referred to as the cutting edge, is reached with maps drawn by earlier investigators. . .But, you may well ask, isn't the cutting edge a place only for geniuses? No, fortunately. Work accomplished on the frontier defines genius, not just getting there. In fact, both accomplishments along the frontier and the final eureka moment are achieved more by entrepreneurship and hard work than by native intelligence. This is so much the case that in most fields most of the time, extreme brightness may be a detriment. It has occurred to me, after meeting so many successful researchers in so many disciplines, that the ideal scientist is smart only in an intermediate degree: bright enough to see what can be done but not so bright as to become bored doing it.

By "entrepreneurship", he doesn't mean forming companies. That's Wilson's term for opportunistic science - setting up some quick and dirty experiments around a new idea to see what might happen, and being open to odd results as indicators of a new direction to take your work. I completely endorse that, in case anyone cares. As for the intelligence part, you have to keep in mind that this is E. O. Wilson telling you that you don't need to be fearsomely intelligent to be successful, and that his scale for evaluating this quality might be calibrated a bit differently from the usual. As Tom Wolfe put it in his essay in Hooking Up, one of Wilson's defining characteristics has been that you could put him down almost anywhere on Earth and he'd be the smartest person in the room. (I should note that Wolfe's essay overall is not exactly a paean, but he knows not to underestimate the guy).

I think that intelligence falls under the "necessary but not sufficient" heading. And I probably haven't seen that many people operate whom the likes of E. O. Wilson would consider extremely smart, so I can't comment much on what happens at that end of the scale. But the phenomenon of people who score very highly on attempted measures of intelligence, but never seem to make much of themselves, is so common as to be a cliché. You cannot be dumb and make a success of yourself as a research scientist. But being smart guarantees nothing.

As an alternative to mathematical ability and (very) high intelligence, Wilson offers the prescription of hard work. "Scientists don't take vacations", he says, they take field trips. That might work out better if you're a field biologist, but not so well for (say) organic chemistry. And actually, I think that clearing your head with some time off can help out a great deal when you're bogged down in some topic. But having some part of your brain always on the case really is important. Breaks aside, long-term sustained attention to a problem is worth a lot, and not everyone is capable of it.

Here's more on the opportunistic side of things:

Polymer chemistry, computer programs of biological processes, butterflies of the Amazon, galactic maps, and Neolithic sites in Turkey are the kinds of subjects worthy of a lifetime of devotion. Once deeply engaged, a steady stream of small discoveries is guaranteed. But stay alert for the main chance that lies to the side. There will always be the possibility of a major strike, some wholly unexpected find, some little detail that catches your peripheral attention that might very well, if followed, enlarge or even transform the subject you have chosen. If you sense such a possibility, seize it. In science, gold fever is a good thing.

I know exactly what he's talking about here, and I think he's completely right. Many, many big discoveries have their beginnings in just this sort of thing. Isaac Asimov was on target when he said that the real sound of a breakthrough was not the cry of "Eureka!" but a puzzled voice saying "Hmm. That's funny. . ."

Well, the book has much more where all this comes from. It's short, which tempts a person to read through it quickly. I did, and found that doing so slighted some of the points it tries to make. It improved on a second pass, in my case, so you may want to keep this in mind.

Comments (17) + TrackBacks (0) | Category: Book Recommendations | Who Discovers and Why

April 29, 2013

Just Work on the Winners

Email This Entry

Posted by Derek

That Lamar Smith proposal I wrote about earlier this morning can be summarized as "Why don't you people just work on the good stuff?" And I thought it might be a good time to link back to a personal experience I had with just that worldview. As you'll see from that story, all they wanted was for us to meet the goals that we put down on our research goals forms. I was told, face to face, that the idea was that this would make us put our efforts into the projects that were most likely to succeed. Who could object to that? Right?

But since we here in the drug industry are so focused on making money, y'know, you'd think that we would have even more incentives to make sure that we're only working on the things that are likely to pay off. And we can't do it. Committees vet proposals, managers look over progress reports, presentations are reviewed and data are sifted, all to that end, because picking the wrong project can sink you good and proper, while picking the right one can keep you going for years to come. But we fail all the time. A good 90% of the projects that make it into the clinic never make it out the other end, and the attrition even before getting into man is fierce indeed. We back the wrong horses for the best reasons available, and sometimes we back the right ones for reasons that end up evaporating along the way. This is the best we can do, the state of the art, and it's not very good at all.

And that's in applied research, with definite targets and endpoints in mind the whole way through. Now picture what it's like in the basic research end of things, which is where a lot of NSF and NIH money is (and should be) going. It is simply not possible to say where a lot of these things are going, and which ones will bear fruit. If you require everyone to sign forms saying that Yes, This Project Has Immediate Economic and National Security Impact, then the best you can hope for is to make everyone lie to you.

Update: a terrific point from the comments section: "(This) argument was often made when firms were reducing costs by shutting down particular pieces of R&D. The general idea was that the firm would stop doing the things that were unlikely to work, and focus more on the things that would work, and hence improve financial returns on R&D. This argument is implausible because successful R&D is wildly profitable. Financial returns are only dragged down by the things that don't work. Therefore, any company that could REALLY distinguish with any precision between winners and losers on a prospective basis should double or triple its R&D investment, and not cut it."
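The arithmetic behind that comment is worth spelling out. Here's a back-of-envelope sketch in Python; every number in it (portfolio size, success rate, cost, payoff) is invented for illustration, not taken from the comment or from any real company's books:

```python
# Hypothetical R&D portfolio: all numbers below are illustrative.
n_projects = 100           # projects funded
p_success = 0.10           # roughly the clinical odds mentioned above
cost_per_project = 1.0     # spend per project, arbitrary units
payoff_per_success = 20.0  # revenue per successful project

successes = round(n_projects * p_success)  # 10 winners
spend = n_projects * cost_per_project      # 100 units spent
revenue = successes * payoff_per_success   # 200 units back

print(f"Blind portfolio return: {revenue / spend:.1f}x")  # 2.0x

# A firm that could truly pick winners in advance would fund only
# the 10 eventual successes: same revenue, one tenth the spend.
perfect_spend = successes * cost_per_project
print(f"Perfect-selection return: {revenue / perfect_spend:.1f}x")  # 20.0x
```

Which is the commenter's point: anyone who could really make that distinction prospectively should be pouring more money into R&D, not trimming it.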

Comments (13) + TrackBacks (0) | Category: Current Events | Who Discovers and Why

April 3, 2013

AstraZeneca's Move To Hot, Happening Cambridge

Email This Entry

Posted by Derek

If you're looking for a sunny, optimistic take on AstraZeneca's move to Cambridge in the UK, the Telegraph has it for you right here. It's a rousing, bullish take on the whole Cambridge scene, but as John Carroll points out at FierceBiotech, it does leave out a few things about AZ. First, though, the froth:

George Freeman MP. . . the Coalition's adviser on life sciences, and Dr Andy Richards, boss of the Cambridge Angels, who has funded at least 20 of the city's start-ups, are among its champions.

"The big pharmaceutical model is dead, we have to help the big companies reinvent themselves," said Freeman. "Cambridge is leading the way on how to do this, on research and innovation."

The pair are convinced that the burgeoning "Silicon Fen" is rapidly becoming the global centre of pharma, biotech, and now IT too. Richards says the worlds of bioscience and IT are "crashing together" and revolutionising companies and consumers. Tapping his mobile phone, he says: "This isn't just a phone, it could hold all sorts of medical information, too, on your agility and reactions. This rapid development is what it's all about."

. . .St John's College set up another park where Autonomy started and more than 50 companies are now based. As we pass, on cue a red Ferrari zooms out. "We didn't see Ferraris when I was a boy," says Freeman. "Just old academics on their bikes."
He adds: "That's the great thing about tech, you can suddenly get it, make it commercial and you've got £200m. You don't have to spend four generations of a German family building Mittelstand."

I don't doubt that Cambridge is doing well. There are a lot of very good people in the area, and some very good ideas and companies. But I do doubt that Cambridge is becoming the global hub of pharma, biotech, and IT all at the same time. And that "crashing together" stuff is the kind of vague rah-rah that politicians and developers can spew out on cue. It sounds very exciting until you start asking for details. And it's not like they haven't heard that sort of thing before in Britain. Doesn't anyone remember the "white heat" of the new technological revolution of the 1960s?

But the future of Cambridge and the future of AstraZeneca may be two different things. Specifically, Pascal Soriot of AZ is quoted in the Telegraph piece as saying that "We've lost some of our scientific confidence," and that the company is hoping to get it back by moving to the area. Let's take a little time to think about that statement, because the closer you look at it, the stranger it is. It assumes that (A) there is such a thing as "scientific confidence", and (B) that it can be said to apply to an entire company, and (C) that a loss of it is what ails AstraZeneca, and (D) that one can retrieve it by moving the whole R&D operation to a hot location.

Now, assumption (A) seems to me to be the most tenable of the bunch. I've written about that very topic here. It seems clear to me that people who make big discoveries have to be willing to take risks, to look like fools if they're wrong, and to plunge ahead through their own doubts and those of others. That takes confidence, sometimes so much that it rubs other people the wrong way.

But do these traits apply to entire organizations? That's assumption (B), and there things get fuzzy. There do seem to be differences in how much risk various drug discovery shops are willing to take on, but changing a company's culture has been the subject of so many, many management books that it's clearly not something that anyone knows how to do well. The situation is complicated by the disconnects between the public statements of higher executives about the spirits and cultures of their companies, versus the evidence on the ground. In fact, the more time the higher-ups spend talking about how incredibly entrepreneurial and focused everyone at the place is, the more you should worry. If everyone's really busy discovering things, you don't have time to wave the pom-poms.

Now to assumption (C), the idea that a lack of such confidence is AstraZeneca's problem. Not being inside the company, I can't speak to that directly, but from outside, it looks like AZ's problem is that they've had too many drugs fail in Phase III and that they've spent way too much money doing it. And it's very hard to say how much of that has been just bad luck, how much of it was self-deception, how much can be put down to compound selection or target selection issues, and so on. Lack of scientific confidence might imply that the company was too cautious in some of these areas, taking too long for things that wouldn't pay off enough. I don't know if that's what Pascal Soriot is trying to imply; I'm not all that sure that he knows, himself.

This brings us to assumption (D), Getting One's Mojo Back through a big move. I have my suspicions about this strategy from the start, since it's the plot of countless chick-lit books and made-for-cable movies. But I'll wave away the fumes of incense and suntan oil, avert my eyes from the jump cuts of the inspirational montage scenes, and move on to asking how this might actually work. You'd think that I might have some idea, since I actually work in Cambridge in the US, where numerous companies are moving in for just these sorts of stated reasons. They're not totally wrong. Areas like the two Cambridges, the San Francisco Bay area, and a few others do have things going for them. My own guess is that a big factor is the mobility and quality of the local workforce, and that the constant switching around between the various companies, academic institutions, and other research sites keeps things moving, intellectually. That's a pretty hand-waving way of putting it, but I don't have a better one.

What could be an even bigger factor is a startup culture, the ability of new ideas to get a hearing and get some funding in the real world. That effect, though, is surely most noticeable in the smaller company space - I'm still not sure how it works out for the branch offices of larger firms that move in to be where things are happening. If I had to guess, I think all these things still help out the larger outfits, but in an attenuated way that is not easy to quantify. And if the culture of the Big Company Mothership is nasty enough to start with, I'm sure it can manage to cancel out whatever beneficial effects might exist.

So I don't know what moving to Cambridge to a big new site is going to do for AstraZeneca. And it's worth remembering that it's going to take several years for any such move to be realized - who knows what will happen between now and then? The whole thing might help, it might hurt, it might make little difference (except in the massive cost and disruption). That disruption might be a feature as much as a bug - if you're trying to shake a place up, you have to shake it up - but I would wonder about anyone who feels confident about how things will actually work out.

Comments (33) + TrackBacks (0) | Category: Drug Industry History | Who Discovers and Why

March 20, 2013

Thought for the Day, On Interdisciplinary Research

Email This Entry

Posted by Derek

This quote caught my eye from Nature's "Trade Secrets" blog, covering a recent conference. Note that the Prof. Leggett mentioned is a 2003 Nobel physics laureate:

It’s been a recent trend to mix disciplines and hope the results will solve some of science’s stickier problems. But is it possible the pendulum has swung too far? Leggett told the audience the term ‘interdisciplinarity’ is often “abused.”

“I don’t myself feel it is a good thing for government committees and so forth to encourage interdisciplinarity for its own sake. Some of these committees – at least in my experience – seem to be under the impression that interdisciplinarity is a sort of sauce, which you can put on otherwise unpromising ingredients, to improve the whole collection,” Prof. Leggett said. “I don’t really think that is right. The problem with that kind of approach is that sometimes people get the impression that simply to attack a problem in biology for the sake of attacking a problem in biology is itself a virtue.”

It's interesting that Leggett would use biology as an example. There's been a long history of physics/biology crossovers, going back to Schrödinger's What is Life? and George Gamow's interest in DNA. Francis Crick originally studied physics, and Richard Feynman did very good work on sabbatical in Max Delbrück's lab. (Here's a rundown of these and other connections).

But Leggett does indeed have a good point, one that applies to all sorts of other "magic recipes" for inducing creativity. If we knew how to induce that, we'd have a hell of a lot more of it, has always been my opinion. A lot of great things have come out of the borderlands between two sciences, but just the fact that you're going out into those territories doesn't assure you of a thing.

Comments (22) + TrackBacks (0) | Category: Who Discovers and Why

March 1, 2013

Yuri Milner's Millions, And Where They're Going

Email This Entry

Posted by Derek

You'll have heard about Yuri Milner, the Russian entrepreneur (early Facebook investor, etc.) who's recently announced some rather generous research prize awards:

Yesterday, Milner, along with some “old friends”—Google cofounder Sergey Brin, Facebook CEO Mark Zuckerberg, and their respective wives—announced they are giving $33 million in prizes to 11 university-based biologists. Five of the awards, called the Breakthrough Prize in Life Sciences, will be given annually going forward; they are similar to prizes for fundamental physics that Milner started giving out last year.

At $3 million apiece, the prize money tops the Nobels, whose purse is around $1 million. Yet neither amount is much compared to what you can make if you drop out of science and find a calling in Silicon Valley, as Brin, Milner, Zuckerberg did.

Technology Review has a good article on the whole effort. After looking over the awardees, Antonio Regalado has some speculation:

But looking over the list (the New York Times published it along with some useful biographical details here), I noticed some very strong similarities between the award winners. Nearly all are involved in studying cancer genetics or cancer stem cells, and sometimes both.

In other words, this isn’t any old list of researchers. It’s actually the scientific advisory board of Cure for Cancer, Inc. Because lately, DNA sequencing and better understanding of stem cells have become the technologies that look most likely to maybe, just maybe, point toward some real cancer cures.

Wouldn't surprise me. This is a perfectly good area of research for targeted funding, and a good infusion of cash is bound to help move things along. The article stops short of saying that Milner (or someone he knows) might have a personal stake in all this, but that wouldn't be the first time that situation has influenced the direction of research, either. I'm fine with that, actually - people have a right to do what they want to with their own money, and this sort of thing is orders of magnitude more useful than taking the equivalent pile of money and buying beachfront mansions with it. (Or a single beachfront mansion, come to think of it, depending on what market we're talking about).

I've actually been very interested in seeing how some of the technology billionaires have been spending their money. Elon Musk, Jeff Bezos, Larry Page, Sergey Brin, etc., have been putting some money behind some very unusual ventures, and I'm very happy to see them do it. If I were swimming in that kind of cash, I'd probably be bankrolling my own space program or something, too. Of course, those sorts of ideas are meant to eventually turn a profit. In that space example, you have tourism, launch services, asteroid mining, orbiting solar power, and a lot of other stuff familiar to anyone who ever read an old John W. Campbell editorial.

What about the biopharma side? You can try to invest to make money there, but it's worth noting that not a lot of tech-era money has gone into venture capital in this area. Are we going to see more of it going as grants to academia? If so, that says something about the state of the field, doesn't it? Perhaps the thinking is that there's still so much basic science to be learned that you get more for your dollar investing in early research - at least, it could lead to something that's a more compelling venture. And I'd be hard pressed to argue.

Comments (20) + TrackBacks (0) | Category: Academia (vs. Industry) | Who Discovers and Why

February 21, 2013

An Anniversary

Email This Entry

Posted by Derek

I wanted to repost an old entry of mine, from back in 2002 (!). It's appropriate this week, and just as I was in 2002, I'm a couple of days late with the commemoration:

I missed a chance yesterday to note an anniversary. Giordano Bruno was something of a crank, not normally the sort of person I'd be commemorating. But in his time, it didn't take very much to be considered either of those, or worse, and we have to make allowances.

He was headstrong. We can see now that he was sometimes eerily right, other times totally wrong. Either way, many of these strongly held positions were sure sources of trouble for anyone who advocated them. All living things were made up of matter, and that matter was the same across the universe - that one was not going to go over well in the late 16th century.

There was more. The stars, he said, were nothing more than other suns, and our sun was nothing more than a nearby star. He saw no reason why these other suns should not have planets around them, and no reason why those planets should not have life: "Innumerable suns exist; innumerable earths revolve around these suns in a manner similar to the way the seven planets revolve around our sun. Living beings inhabit these worlds."

He went on at length. And as I said, much of it was, by scientific standards, mystical rot. His personality was no help whatsoever in getting his points across. He appears to have eventually gotten on the nerves of everyone he dealt with. But no one deserves to pay what he did for it all.

Bruno was excommunicated and hauled off in chains. He spent the next several years in prison, and was given chances to recant up until the very end. He refused. On February 19th, 1600, he was led into the Campo dei Fiori plaza in Rome, tied to a post, and burned to death in front of a crowd.

Mystic, fool, pain in the neck. I went out tonight to see Saturn disappear behind the dark edge of the moon, putting the telescope out on the driveway and calling my wife out to see. Then I came inside, sat down at my computer, wrote exactly what I thought, and put it out for anyone who wanted to read it around the world. While I did all that, I remembered that things haven't always been this way, haven't been this way for long at all, actually. And resolved to remember to enjoy it all as much as I can, and to remember those who never got to see it.

Comments (6) + TrackBacks (0) | Category: Who Discovers and Why

December 7, 2012

Whitesides on Discovery and Development

Email This Entry

Posted by Derek

George Whitesides of Harvard has a good editorial in the journal Lab on a Chip. He's talking about the development of microassays, but goes on to generalize about the new technologies - how they're found, and how they're taken up (or not) by a wider audience (emphasis mine below):

Lab-on-a-chip (LoC) devices were originally conceived to be useful–that is, to solve problems. For problems in analysis or synthesis (or for other applications, such as growing cells or little animals) they would be tiny – the “microcircuits of the fluidic world.” They would manipulate small volumes of scarce samples, with low requirements for expensive space, reagents and waste. They would save cost and time. They would allow parallel operation. Sensible people would flock to use such devices.

Sensible and imaginative scientists have, in fact, flocked to develop such devices, or what were imagined to be such devices, but users have not yet flocked to solve problems with them. “Build it, and they will come” has not yet worked as a strategy in LoC technology, as it has, say, with microprocessors, organic polymers and gene sequencers. Why not? One answer might seem circular, but probably is not. It is that the devices that have been developed have been elegantly imagined, immensely stimulating in their requirements for new methods of fabrication, and remarkable in their demonstrations of microtechnology and fluid physics, but they have not solved problems that are otherwise insoluble. Although they may have helped the academic scientist to produce papers, they have not yet changed the world of those with practical problems in microscale analysis or manipulation.

Where is the disconnect? One underlying problem has been remarked upon by many people interested in new technology. Users of technology are fundamentally not interested in technology—they are interested in solving their own problems. They want technology to be simple and cheap and invisible. Developers of technology, especially in universities, are often fundamentally not interested in solving real problems—they are interested in the endlessly engaging activity of building and exercising new widgets. They want technology to be technically very cool. “Simple/cheap/invisible” and “technically cool” are not exclusive categories, but they are certainly not synonymous.

That is a constant and widespread phenomenon. There are people who want to be able to do things with stuff, and people who want stuff to do things for them, and the overlap between those two is not always apparent. What happens over time, though, in the best cases, is that the tinkerers come up with things that can be used by a wider audience to solve their own problems. Look no further than the personal computer industry for one of the biggest examples ever. If you didn't live through it, you might not realize how things went from "weird hobbyist thingies" to "neat gizmos if you have the money" to "essential parts of everyday life". Here's Whitesides again:

Here are three useful, homely, rules of thumb to remember in developing products.

• The ratio of money spent to invent something, to make the invention into a prototype product, to develop the prototype to the point where it can be manufactured, and to manufacture and sell it at a large scale is, very qualitatively, 1:10:100:1000. We university folks—the inventors at the beginning of the path leading to products—are cheap dates.

• You don't really know you have solved the problem for someone until they like your solution so much they're willing to pay you to use it. Writing a check is a very meaningful human interaction.

• If the science of something is still interesting, the “something” is probably not ready to be a product.

His second rule reminds me of Stephen King's statement on whether someone has any writing talent or not: "If you wrote something for which someone sent you a check, if you cashed the check and it didn't bounce, and if you then paid the light bill with the money, I consider you talented". It's also the measure of success in the drug industry - we are, after all, trying to make things that are useful enough that people will pay us money for them. If we don't come up with enough of those things, or if they don't bring in enough money to cover what it took to find them, then we are in trouble indeed.

More comments on the Whitesides piece here. For scientists (like me, and many readers of the blog), these points are all worth keeping in mind. Some of our biggest successes are things where our contributions are invisible to the end users. . .

Comments (24) + TrackBacks (0) | Category: Business and Markets | Who Discovers and Why

November 30, 2012

A Broadside Against The Way We Do Things Now

Email This Entry

Posted by Derek

There's a paper out in Drug Discovery Today with the title "Is Poor Research the Cause of Declining Productivity in the Drug Industry?" After reviewing the literature on phenotypic versus target-based drug discovery, the author (Frank Sams-Dodd) asks (and has asked before):

The consensus of these studies is that drug discovery based on the target-based approach is less likely to result in an approved drug compared to projects based on the physiological-based approach. However, from a theoretical and scientific perspective, the target-based approach appears sound, so why is it not more successful?

He makes the points that the target-based approach has the advantages of (1) seeming more rational and scientific to its practitioners, especially in light of the advances in molecular biology over the last 25 years, and (2) seeming more rational and scientific to the investors:

". . .it presents drug discovery as a rational, systematic process, where the researcher is in charge and where it is possible to screen thousands of compounds every week. It gives the image of industrialisation of applied medical research. By contrast, the physiology-based approach is based on the screening of compounds in often rather complex systems with a low throughput and without a specific theory on how the drugs should act. In a commercial enterprise with investors and shareholders demanding a fast return on investment it is natural that the drug discovery efforts will drift towards the target-based approach, because it is so much easier to explain the process to others and because it is possible to make nice diagrams of the large numbers of compounds being screened.

This is the "Brute Force bias". And he goes on to another key observation: that this industrialization (or apparent industrialization) meant that there were a number of processes that could be (in theory) optimized. Anyone who's been close to a business degree knows how dear process optimization is to the heart of many management theorists, consultants, and so on. And there's something to that, if you're talking about a defined process like, say, assembling pickup trucks or packaging cat litter. This is where your six-sigma folks come in, your Pareto analysis, your Continuous Improvement people, and all the others. All these things are predicated on the idea that there is a Process out there.

See if this might sound familiar to anyone:

". . .the drug discovery paradigm used by the pharmaceutical industry changed from a disease-focus to a process-focus, that is, the implementation and organisation of the drug discovery process. This meant that process-arguments became very important, often to the point where they had priority over scientific considerations, and in many companies it became a requirement that projects could conform to this process to be accepted. Therefore, what started as a very sensible approach to drug discovery ended up becoming the requirement that all drug discovery programmes had to conform to this approach – independently of whether or not sufficient information was available to select a good target. This led to dogmatic approaches to drug discovery and a culture developed, where new projects must be presented in a certain manner, that is, the target, mode-of-action, target-validation and screening cascade, and where the clinical manifestation of the disease and the biological basis of the disease at systems-level, that is, the entire organism, were deliberately left out of the process, because of its complexity and variability.

But are we asking too much when we declare that our drugs need to work through single defined targets? Beyond that, are we even asking too much when we declare that we need to understand the details of how they work at all? Many of you will have had such thoughts (and they've been expressed around here as well), but they can tend to sound heretical, especially that second one. But that gets to the real issue, the uncomfortable, foot-shuffling, rather-think-about-something-else question: are we trying to understand things, or are we trying to find drugs?

"False dichotomy!", I can hear people shouting. "We're trying to do both! Understanding how things work is the best way to find drugs!" In the abstract, I agree. But given the amount there is to understand, I think we need to be open to pushing ahead with things that look valuable, even if we're not sure why they do what they do. There were, after all, plenty of drugs discovered in just that fashion. A relentless target-based environment, though, keeps you from finding these things at all.

What it does do, though, is provide vast opportunities for keeping everyone busy. And not just "busy" in the sense of working on trivia, either: working out biological mechanisms is very, very hard, and in no area (despite decades of beavering away) can we say we've reached the end and achieved anything like a complete picture. There are plenty of areas that can and will soak up all the time and effort you can throw at them, and yield precious little in the way of drugs at the end of it. But everyone was working hard, doing good science, and doing what looked like the right thing.

This new paper spends quite a bit of time on the mode-of-action question. It makes the point that understanding the MoA is something that we've imposed on drug discovery, not an intrinsic part of it. I've gotten some funny looks over the years when I've told people that there is no FDA requirement for details of a drug's mechanism. I'm sure it helps, but in the end, it's efficacy and safety that carry the day, and both of those are determined empirically: did the people in the clinical trials get better, or worse?

And as for those times when we do have mode-of-action information, well, here are some fighting words for you:

". . .the ‘evidence’ usually involves schematic drawings and flow-diagrams of receptor complexes involving the target. However, it is almost never understood how changes at the receptor or cellular level affect the physiology of the organism or interfere with the actual disease process. Also, interactions between components at the receptor level are known to be exceedingly complex, but a simple set of diagrams and arrows are often accepted as validation for the target and its role in disease treatment even though the true interactions are never understood. What this in real life boils down to is that we for almost all drug discovery programmes only have minimal insight into the mode-of-action of a drug and the biological basis of a disease, meaning that our choices are essentially pure guess-work.

I might add at this point that the emphasis on defined targets and mode of action has been so much a part of drug discovery in recent times that it's convinced many outside observers that target ID is really all there is to it. Finding and defining the molecular target is seen as the key step in the whole process; everything past that is just some minor engineering (and marketing, naturally). The fact that this point of view is a load of fertilizer has not slowed it down much.

I think that if one were to extract a key section from this whole paper, though, this one would be a good candidate:

". . .it is not the target-based approach itself that is flawed, but that the focus has shifted from disease to process. This has given the target-based approach a dogmatic status such that the steps of the validation process are often conducted in a highly ritualised manner without proper scientific analysis and questioning whether the target-based approach is optimal for the project in question.

That's one of those "Don't take this in the wrong way, but. . ." statements, which are, naturally, always going to be taken in just that wrong way. But how many people can deny that there's something to it? Almost no one denies that there's something not quite right, with plenty of room for improvement.

What Sams-Dodd has in mind for improvement is a shift towards looking at diseases, rather than targets or mechanisms. For many people, that's going to be one of those "Speak English, man!" moments, because for them, finding targets is looking at diseases. But that's not necessarily so. We would have to turn some things on their heads a bit, though:

In recent years there have been considerable advances in the use of automated processes for cell-culture work, automated imaging systems for in vivo models and complex cellular systems, among others, and these developments are making it increasingly possible to combine the process-strengths of the target-based approach with the disease-focus of the physiology-based approach, but again these technologies must be adapted to the research question, not the other way around.

One big question is whether the investors funding our work will put up with such a change, or with such an environment even if we did establish it. And that gets back to the discussion of Andrew Lo's securitization idea, the talk around here about private versus public financing, and many other topics. Those I'll reserve for another post. . .

Comments (30) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History | Who Discovers and Why

November 13, 2012

Nassim Taleb on Scientific Discovery

Email This Entry

Posted by Derek

There's an interesting article posted on Nassim Taleb's web site, titled "Understanding is a Poor Substitute for Convexity (Antifragility)". It was recommended to me by a friend, and I've been reading it over for its thoughts on how we do drug research. (This would appear to be an excerpt from, or summary of, some of the arguments in the new book Antifragile: Things That Gain from Disorder, which is coming out later this month).

Taleb, of course, is the author of The Black Swan and Fooled by Randomness, which (along with his opinions about the recent financial crises) have made him quite famous.

So this latest article is certainly worth reading, although much of it reads like the title, that is, written in fluent and magisterial Talebian. This blog post is being written partly for my own benefit, so that I make sure to go to the trouble of a translation into my own language and style. I've got my idiosyncrasies, for sure, but I can at least understand my own stuff. (And, to be honest, a number of my blog posts are written in that spirit, of explaining things to myself in the process of explaining them to others).

Taleb starts off by comparing two different narratives of scientific discovery: luck versus planning. Any number of works contrast those two. I'd say that the classic examples of each (although Taleb doesn't reference them in this way) are the discovery of penicillin and the Manhattan Project. Not that I agree with either of those categorizations - Alexander Fleming, as it turns out, was an excellent microbiologist, very skilled and observant, and he always checked old culture dishes before throwing them out just to see what might turn up. And, it has to be added, he knew what something interesting might look like when he saw it, a clear example of Pasteur's quote about fortune and the prepared mind. On the other hand, the Manhattan Project was a tremendous feat of applied engineering, rather than scientific discovery per se. The moon landings, often used as a similar example, are the same sort of thing. The underlying principles of nuclear fission had been worked out; the question was how to purify uranium isotopes to the degree needed, and then how to bring a mass of the stuff together quickly and cleanly enough. These processes needed a tremendous amount of work (it wasn't obvious how to do either one, and multiple approaches were tried under pressure of time), but the laws of (say) gaseous diffusion were already known.

But when you look over the history of science, you see many more examples of fortunate discoveries than you see of planned ones. Here's Taleb:

The luck versus knowledge story is as follows. Ironically, we have vastly more evidence for results linked to luck than to those coming from the teleological, outside physics —even after discounting for the sensationalism. In some opaque and nonlinear fields, like medicine or engineering, the teleological exceptions are in the minority, such as a small number of designer drugs. This makes us live in the contradiction that we largely got here to where we are thanks to undirected chance, but we build research programs going forward based on direction and narratives. And, what is worse, we are fully conscious of the inconsistency.

"Opaque and nonlinear" just about sums up a lot of drug discovery and development, let me tell you. But Taleb goes on to say that "trial and error" is a misleading phrase, because it tends to make the two sound equivalent. What's needed is an asymmetry: the errors need to be as painless as possible, compared to the payoffs of the successes. The mathematical equivalent of this property is called convexity; a nonlinear convex function is one with larger gains than losses. (If they're equal, the function is linear). In research, this is what allows us to "harvest randomness", as the article puts it.
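To make that "harvest randomness" idea concrete, here's a toy simulation (my own illustration, not anything from Taleb's article): suppose each trial's outcome is a zero-mean random draw, so there's no edge on average, but you get to keep the wins and walk away from the losses. That walk-away option makes the payoff function max(x, 0), which is convex, and Jensen's inequality (the average of a convex function beats the function of the average) does the rest:

```python
import random

def payoff(x):
    # Keep any gain in full, walk away from any loss: a convex payoff.
    return max(x, 0.0)

random.seed(0)
outcomes = [random.gauss(0.0, 1.0) for _ in range(100_000)]

mean_outcome = sum(outcomes) / len(outcomes)                    # ~0: no edge on average
payoff_of_mean = payoff(mean_outcome)                           # ~0
mean_payoff = sum(payoff(x) for x in outcomes) / len(outcomes)  # ~0.40

print(payoff_of_mean, mean_payoff)
```

The gap between those two numbers (roughly 0.4 here) is the value extracted from pure randomness. A linear payoff, where losses hurt exactly as much as wins help, would show no gap at all.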

An example of such a process is biological evolution: most mutations are harmless and silent. Even the harmful ones will generally just kill off the one organism with the misfortune to bear them. But a successful mutation, one that enhances survival and reproduction, can spread widely. The payoff is much larger than the downside, and the mutations themselves come along for free, since some looseness is built into the replication process. It's a perfect situation for blind tinkering to pay off: the winners take over, and the losers disappear.

Taleb goes on to say that "optionality" is another key part of the process. We're under no obligation to follow up on any particular experiment; we can pick the one that worked best and toss the rest. This has its own complications, since we have our own biases and errors of judgment to contend with, as opposed to the straightforward questions of evolution ("Did you survive? Did you breed?"). But overall, it's an important advantage.

The article then introduces the "convexity bias", which is defined as the difference between a system with equal benefit and harm for trial and error (linear) and one where the upsides are higher (nonlinear). The greater the split between those two, the greater the convexity bias, and the more volatile the environment, the greater the bias as well. This is where Taleb introduces another term, "antifragile", for phenomena that have this convexity bias, because they're equipped to actually gain from disorder and volatility. (His background in financial options is apparent here). What I think of at this point is Maxwell's demon, extracting useful work from randomness by making decisions about which molecules to let through his gate. We scientists are, in this way of thinking, members of the same trade union as Maxwell's busy creature, since we're watching the chaos of experimental trials and natural phenomena and letting pass the results we find useful. (I think Taleb would enjoy that analogy). The demon is, in fact, optionality manifested and running around on two tiny legs.
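Both halves of that definition - the bias is the convex payoff's edge over the linear one, and it grows with volatility - can be checked with a small simulation (again my own toy model, with max(x, 0) standing in for "keep the wins, drop the losses"):

```python
import random

def convex_payoff(x):
    # Keep the win, walk away from the loss: larger gains than losses.
    return max(x, 0.0)

def convexity_bias(sigma, n=200_000, seed=1):
    # Average convex payoff minus the payoff of the average outcome;
    # a linear payoff would make this difference zero.
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, sigma) for _ in range(n)]
    mean_x = sum(xs) / n
    mean_payoff = sum(convex_payoff(x) for x in xs) / n
    return mean_payoff - convex_payoff(mean_x)

biases = [convexity_bias(s) for s in (0.5, 1.0, 2.0)]
print(biases)  # roughly [0.20, 0.40, 0.80]
```

Doubling the volatility doubles the bias: more disorder means more to harvest, which is just what "antifragile" is supposed to capture.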

Meanwhile, a more teleological (that is, aimed and coherent) approach is damaged under these same conditions. Uncertainty and randomness mess up the timelines and complicate the decision trees, and it just gets worse and worse as things go on. It is, by these terms, fragile.

Taleb ends up with seven rules that he suggests can guide decision making under these conditions. I'll add my own comments to these in the context of drug research.

(1) Under some conditions, you'd do better to improve the payoff ratio than to try to increase your knowledge about what you're looking for. One way to do that is to lower the cost-per-experiment, so that a relatively fixed payoff is then larger in comparison. The drug industry has realized this, naturally: our payoffs are (in most cases) somewhat out of our control, although the marketing department tries as hard as possible. But our costs per experiment range from "not cheap" to "potentially catastrophic" as you go from early research to Phase III. Everyone's been trying to bring down the costs of later-stage R&D for just these reasons.

(2) A corollary is that you're better off with as many trials as possible. Research payoffs, as Taleb points out, are very nonlinear indeed, with occasional huge winners accounting for a disproportionate share of the pool. If we can't predict these - and we can't - we need to make our nets as wide as possible. This one, too, is appreciated in the drug business, but it's a constant struggle on some scales. In the wide view, this is why the startup culture here in the US is so important, because it means that a wider variety of ideas are being tried out. And it's also, in my view, why so much M&A activity has been harmful to the intellectual ecosystem of our business - different approaches have been swallowed up, and they disappear as companies decide, internally, on the winners.

And inside an individual company, portfolio management of this kind is appreciated, but there's a limit to how many projects you can keep going. Spread yourself too thin, and nothing will really have a chance of working. Staying close to that line - enough projects to pick up something, but not so many as to starve them all - is a full-time job.

(3) You need to keep your "optionality" as strong as possible over as long a time as possible - that is, you need to be able to hit a reset button and try something else. Taleb says that plans ". . .need to stay flexible with frequent ways out, and counter to intuition, be very short term, in order to properly capture the long term. Mathematically, five sequential one-year options are vastly more valuable than a single five-year option." I might add, though, that they're usually priced accordingly (and as Taleb himself well knows, looking for those moments when they're not priced quite correctly is another full-time job).
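That pricing claim is easy to sanity-check with a quick Monte Carlo sketch (my own simplification, not Taleb's model: a zero-drift random walk for the project's value, with the sequential options resetting at-the-money each year):

```python
import random

def simulate(n_paths=100_000, years=5, sigma=1.0, seed=2):
    rng = random.Random(seed)
    sequential = single = 0.0
    for _ in range(n_paths):
        steps = [rng.gauss(0.0, sigma) for _ in range(years)]
        # Five one-year options: each year, keep any gain or hit the
        # reset button and walk away - losses are never carried forward.
        sequential += sum(max(s, 0.0) for s in steps)
        # One five-year option: only the end-to-end move pays off.
        single += max(sum(steps), 0.0)
    return sequential / n_paths, single / n_paths

five_one_year, one_five_year = simulate()
print(five_one_year, one_five_year)  # roughly 1.99 vs 0.89
```

The ratio comes out around the square root of five, which is what elementary option pricing predicts: each yearly reset harvests that year's volatility in full, while the five-year option only sees the net move from start to finish.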

(4) This one is called "Nonnarrative Research", which means the practice of investing with people who have a history of being able to do this sort of thing, regardless of their specific plans. And "this sort of thing" generally means a lot of that third recommendation above, being able to switch plans quickly and opportunistically. The history of many startup companies will show that their eventual success often didn't bear as much relation to their initial business plan as you might think, which means that "sticking to a plan", as a standalone virtue, is overrated.

At any rate, the recommendation here is not to buy into the story just because it's a good story. I might draw the connection here with target-based drug discovery, which is all about good stories.

(5) Theory comes out of practice, rather than practice coming out of theory. Ex post facto histories, Taleb says, often work the story around to something that looks more sensible, but his claim is that in many fields, "tinkering" has led to more breakthroughs than attempts to lay down new theory. His reference is to this book, which I haven't read, but is now on my list.

(6) There's no built-in payoff for complexity (or for making things complex). "In academia," though, he says, "there is". Don't, in other words, be afraid of what look like simple technologies or innovations. They may, in fact, be valuable, but have been ignored because of this bias towards the trickier-looking stuff. What this reminds me of is what Philip Larkin said he learned by reading Thomas Hardy: never be afraid of the obvious.

(7) Don't be afraid of negative results, or paying for them. The whole idea of optionality is finding out what doesn't work, and ideally finding that out in great big swaths, so we can narrow down to where the things that actually work might be hiding. Finding new ways to generate negative results quickly and more cheaply, which can mean new ways to recognize them earlier, is very valuable indeed.

Taleb finishes off by saying that people have criticized such proposals as the equivalent of buying lottery tickets. But lottery tickets, he notes, are terribly overpriced, because people are willing to overpay for a shot at a big payoff on long odds. And lotteries have a fixed upper bound, whereas R&D's upper bound is completely unknown. Taleb then gets back to his financial-crisis background by pointing out that the history of banking and finance illustrates the folly of betting against long shots ("What are the odds of this strategy suddenly going wrong?"), and that in this sense, research is a form of reverse banking.

Well, those of you out there who've heard the talk I've been giving in various venues (and in slightly different versions) the last few months may recognize that point, because I have a slide that basically says that drug research is the inverse of Wall Street. In finance, you try to lay off risk, hedge against it, amortize it, and go for the steady payoff strategies that (nonetheless) once in a while blow up spectacularly and terribly. Whereas in drug research, risk is the entire point of our business (a fact that makes some of the business-trained people very uncomfortable). We fail most of the time, but once in a while have a spectacular result in a good direction. Wall Street goes short risk; we have to go long.

I've been meaning to get my talk up on YouTube or the like, and this should force me to finally get that done. Perhaps this weekend, or over the Thanksgiving break, I can put it together. I think it fits in well with what Taleb has to say.

Comments (28) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why

October 9, 2012

The Age of Nobel Chemistry Laureates

Email This Entry

Posted by Derek

In anticipation of tomorrow's Nobel Prize, here's a graph of the average age of Nobel chemistry laureates. (Link via Stuart Cantrill). It runs about like you'd figure - lots of people in their 50s, which should make some of us feel good, I suppose (!). I'd like to see this charted over time to see if there are any trends that way. Update - I should scroll down more! They have that data at the link above. Note also that chemistry is still one of the "younger" disciplines by average age. . . We already know a bit about changes in the ages of grantees and highly-cited papers; it would be interesting to see if that shows up in the Nobel data as well. . .

Comments (10) + TrackBacks (0) | Category: Who Discovers and Why

August 27, 2012

Chemistry's Mute Black Swans

Email This Entry

Posted by Derek

What's a Black Swan Event in chemistry? Longtime industrial chemist Bill Nugent has a very interesting article in Angewandte Chemie with that theme, and it's well worth a look. He details several examples of things that all organic chemists thought they knew that turned out not to be so, and traces the counterexamples back to their first appearances in the literature. For example, the idea that gold (and gold complexes) were uninteresting catalysts:

I completed my graduate studies with Prof. Jay Kochi at Indiana University in 1976. Although research for my thesis focused on organomercury chemistry, there was an active program on organogold chemistry, and our perspective was typical for its time. Gold was regarded as a lethargic and overweight version of catalytically interesting copper. Moreover, in the presence of water, gold(I) complexes have a nasty tendency to disproportionate to gold(III) and colloidal gold(0). Gold, it was thought, could provide insight into the workings of copper catalysis but was simply too inert to serve as a useful catalyst itself. Yet, during the decade after I completed my Ph.D. in 1976 there were tantalizing hints in the literature that this was not the case.

One of these was a high-temperature rearrangement reported in 1976, and there was a 1983 report on gold-catalyzed oxidation of sulfides to sulfoxides. Neither of these got much attention, as Nugent's own chart of the literature on the subject shows. (I don't pay much attention when someone oxidizes a sulfide, myself). Apparently, though, a few people had reason to know that something was going on:

However, analytical chemists in the gold-mining industry have long harnessed the ability of gold to catalyze the oxidation of certain organic dyes as a means of assaying ore samples. At least one of these reports actually predates the (1983) Natile publication. Significantly, it could be shown that other precious metals do not catalyze the same reactions, the assays are specific for gold. It is safe to say that the synthetic community was not familiar with this report.

I'll bet not. It wasn't until 1998 that a paper appeared that really got people interested, and you can see the effect on that chart. Nugent has a number of other similar examples of chemistry that appeared years before its potential was recognized. Pd-catalyzed C-N bond formation, monodentate asymmetric hydrogenation catalysts, the use of olefin metathesis in organic synthesis, non-aqueous enzyme chemistry, and many others.

So where do the black swans come into all this? Those familiar with Nassim Taleb's book will recognize the reference.

The phrase “Black Swan event” comes from the writings of the statistician and philosopher Nassim Nicholas Taleb. The term derives from a Latin metaphor that for many centuries simply meant something that does not exist. But also implicit in the phrase is the vulnerability of any system of thought to conflicting data. The phrase's underlying logic could be undone by the observation of a single black swan.

In 1697, the Dutch explorer Willem de Vlamingh discovered black swans on the Swan River in Western Australia. Not surprisingly, the phrase underwent a metamorphosis and came to mean a perceived impossibility that might later be disproven. It is in this sense that Taleb employs it. In his view: “What we call here a Black Swan (and capitalize it) is an event with the following three attributes. First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in spite of its outlier status, human nature makes us concoct an explanation for its occurrence after the fact, making it explainable and predictable.”

Taleb has documented this last point about human nature through historical and psychological evidence. His ideas remain controversial but seem to make a great deal of sense when one attempts to understand the lengthy interludes between the literature antecedents and the disruptive breakthroughs shown. . .At the very least, his ideas represent a heads up as to how we read and mentally process the chemical literature.

I have no doubt that unwarranted assumptions persist in the conventional wisdom of organic synthesis. (Indeed, to believe otherwise would suggest that disruptive breakthroughs will no longer occur in the future.) The goal, it would seem, is to recognize such assumptions for what they are and to minimize the time lag between the appearance of Black Swans and the breakthroughs that follow.

One difference between Nugent's examples and Taleb's is the "extreme impact" part. I think that Taleb has in mind events in the financial industry like the real estate collapse of 2007-2008 (recommended reading here), or the currency events that led to the wipeout of Long-Term Capital Management in 1998. The scientific literature works differently. As this paper shows, big events in organic chemistry don't come on as sudden, unexpected waves that sweep everything before them. Our swans are mute. They slip into the water so quietly that no one notices them for years, and they're often small enough that people mistake them for some other bird entirely. Thus the time lag.

How to shorten that? It'll be hard, because a lot of the dark-colored birds you see in the scientific literature aren't amazing black swans; they're crows and grackles. (And closer inspection shows that some of them are engaged in such unusual swan-like behavior because they're floating inertly on their sides). The sheer size of the literature now is another problem - interesting outliers are carried along in a flood tide of stuff that's not quite so interesting. (This paper mentions that very problem, along with a recommendation to still try to browse the literature - rather than only doing targeted searches - because otherwise you'll never see any oddities at all).

Then there's the way that we deal with such things even when we do encounter them. Nugent's recommendation is to think hard about whether you really know as much as you think you do when you try to rationalize away some odd report. (And rationalizing them away is the usual response). The conventional wisdom may not be as solid as it appears; you can probably put your foot through it in numerous places with a well-aimed kick. As the paper puts it: "Ultimately, the fact that something has never been done is the flimsiest of evidence that it cannot be done."

That's worth thinking about in terms of medicinal chemistry, as well as organic synthesis. Look, for example, at Rule-Of-Five type criteria. We've had a lot of discussions about these around here (those links are just some of the more recent ones), and I'll freely admit that I've been more in the camp that says "Time and money are fleeting, bias your work towards friendly chemical space". But it's for sure that there are compounds that break all kinds of rules and still work. Maybe more time and money should go into figuring out what it is about those drugs, and whether there are any general lessons we can learn about how to break the rules wisely. It's not that work in this area hasn't been done, but we still have a poor understanding of what's going on.

Comments (16) + TrackBacks (0) | Category: Chemical News | Drug Industry History | The Scientific Literature | Who Discovers and Why

August 6, 2012

Biotech Clusters

Email This Entry

Posted by Derek

How important is it to have an "anchor" company in a regional bio/pharma cluster? How do you get a thriving cluster of biotech companies, anyway? There are a lot of cities that would like the answers to these questions, not that anyone has them (although there are consultants who will be glad to convince you otherwise).

Luke Timmerman has thoughts on the subject here, pointing out that some of the more well-known biotech hubs have been losing some of their marquee names to takeovers and the like. This has to have an effect, and the question is just how big (or bad) it'll be.

Other companies, in some places, might be able to step up and fill the void, but not always. If there isn't a robust culture in an area (or not yet), then taking out the main company that's driving things might bring the whole process to a halt. If, in fact, it is a process - and that takes us back to the whole question of how these clusters get started in the first place. The biggest and most impressive share some common features (well-known research universities in the area, to pick the most obvious), but what seem to be very similar features in other locations can fail to produce similar results.

Many are the cities that have tried to grow their own Silicon Valleys and Boston/Cambridges. Overall, I'm skeptical of attempts to purposefully induce these sorts of things, and that goes both for R&D clusters as well as the various city-planning attempts to bring in young creative-class types. At best, these seem to me likely to be missing some key variables, and at worst, they're reminiscent of South Pacific cargo cults. ("If we make this look like a happening city, then that's what it'll be!") It's not that I think more research hot spots would be a bad thing, of course - just the opposite. It's just that I don't know how you achieve that result.

Comments (17) + TrackBacks (0) | Category: Who Discovers and Why

June 21, 2012

Scientific Literacy: Where Do You Stop?

Email This Entry

Posted by Derek

Now here is a piece on scientific literacy that I find interesting. The author, Daniel Sarewitz, is wondering why so many people equate it with knowing facts:

We have this belief that unless a person knows that the Earth rotates around the sun and that birds evolved from dinosaurs, she or he won’t be able to exercise responsible citizenship or participate effectively in modern society. Scientists are fond of claiming that literacy in their particular area of expertise (such as climate change or genomics) is necessary so “the public can make informed judgments on public policy issues.”

Yet the idea that we can say anything useful at all about a person's competence in the world based on their rudimentary familiarity with any particular information or type of knowledge is ridiculous. Not only is such information totally disembodied from experience and thus no more than an abstraction (and an arbitrary one at that), but it also fails to live up to what science ultimately promises: to enhance one's ability to understand and act effectively in a world of one’s knowing.

This point has often troubled me. I recall Richard Feynman's attempt to reduce the key insights of physics down to a single sentence. ("If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generation of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis that: all things are made of atoms - little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another. In that one sentence, you will see, there is an enormous amount of information about the world, if just a little imagination and thinking are applied") And I still can't help thinking that some basic scientific knowledge about the world is an essential part of anyone's mental furniture.

But where to stop? This is the slippery "physics for poets" problem, and I don't think it's ever been solved. Yes, everyone should know that things are made out of atoms, and that there are only a certain number of different kinds of atoms. And I'd like for people to know that living things are mostly just made out of eight or ten of those, with carbon being the most important. But at that point, are we already getting close to the borderline between knowledge and trivia? What should people know about carbon? About atomic bonds? About biomolecules? I'd like for people to know roughly what DNA is, and what proteins are, and what carbohydrates are (other than "stuff that's in food"). But in how much detail? The details multiply very, very quickly.

The same goes for any other science. A hobby of mine is astronomy, and I certainly think that everyone should know that the Earth and the other planets go around the sun, with moons that go around many of them. I'd like for them to know that the other stars are things much like our sun, and very much further away. But should people know about red giants and white dwarves and supernovas? I'd like for people to know that Jupiter is a big planet, with moons. But how many moons? Should they know the names of the Galilean satellites or not? And what good would it do them if they did?

Ah, you say, science literacy should focus not so much on the mass of facts, but on the process of doing science itself. It's a way of looking at (and learning about) the world. And I agree with that, but Sarewitz isn't letting that one off easily, either:

A more sophisticated version of science literacy that focuses not on arbitrary facts but on method or process doesn't help much, either. The canonical methods of science as taught in the classroom are powerful because they remove the phenomenon being studied from the context of the real world and isolate it in the controlled setting of the laboratory experiment. This idealized process has little if any applicability to solving the problems that people face on a daily basis, where uncertainty and indeterminacy are the rule, and effective action is based on experience and learning and accrued judgment. Textbook versions of scientific methods cannot, for example, equip a nonexpert to make an informed judgment about the validity or plausibility of technical claims made by experts.

This is overstated (I hope). The scientific technique of isolating variables is key to troubleshooting of all kinds, all the way down to problems like why the toaster oven isn't coming on. (Problem with the switch? Problem with the cord? Problem with the plug? Problem back at the circuit breaker?) And the concept of reproducibility has broad application as well. But it's true that school curricula don't always get these things across.

One of the responses to the article brings up an interesting analogy - music. There's being able to listen to music, and decide if you like it or not, or if it does anything for you. Then there's being able to read sheet music. And there's being able to play an instrument yourself, and past that, the ability to compose. When I say that I'd like for more people to know more about science, I think that I'm asking for more people to be able to hear the music that I hear. But is that really what it means?

Comments (55) + TrackBacks (0) | Category: Who Discovers and Why

June 18, 2012

More "More Scientists" Debate

Email This Entry

Posted by Derek

My recent post here on whether the US needs a big influx of scientists and engineers has attracted some attention. Discover magazine asked to reprint it on their site, and then Slate asked if I would write a response for them expanding my thoughts on the subject, which is now up here.

It feels odd for me, as a scientist, to be taking this side of the issue. I even think that not enough people know enough science and mathematics, and would like for these subjects to be taught better than they are in schools. But there's something about the attitude that "America needs more scientists, even mediocre ones" that really doesn't sit right with me. Science, and scientists, aren't like coal. We can't be stored for later use, nor hauled around to do whatever job it is that Generic Scientists are needed to do. It's messier than that, as a look at some of the science and technology industries (like the one I work in) might illustrate.

Comments (38) + TrackBacks (0) | Category: General Scientific News | Who Discovers and Why

June 6, 2012

How Not to Do Science Education

Email This Entry

Posted by Derek

Slate has one of those assume-the-conclusions articles up on science and technology education in the US. It's right there in the title: "America Needs More Scientists and Engineers".

Now, I can generally agree that America (and the world) needs more science and engineering. I'd personally like enough to realize room-temperature superconductors, commercially feasible fixation of carbon dioxide as an industrial feedstock, and both economically viable fusion power and high-efficiency solar beamed down from orbit. For starters. We most definitely need better technology and more scientific understanding to realize these things, since none of them (as far as we know) are at all impossible, and we sure don't have any of them yet.

But to automatically assume that we need lots more scientists and engineers to do that is a tempting, but illogical, conclusion. And one that my currently-unemployed readers who are scientists and engineers don't enjoy hearing about very much, I'd have to assume. I think that the initial fallacies are (1) lumping together all science education into a common substance, and (2) assuming that if you put more of that into the hopper, more good stuff will come out the other end. If I had to pick one line from the article that I disagree with the most, it would be this one:

America needs Thomas Edisons and Craig Venters, but it really needs a lot more good scientists, more competent scientists, even more mediocre scientists.

No. I hate to be the one to say it, but mediocre scientists are, in fact, in long supply. Access to them is not a rate-limiting step. Not all the unemployed science and technology folks out there are mediocre - not by a long shot (I've seen the CVs that come in) - but a lot of the mediocre ones are finding themselves unemployed, and they're searching an awful long time for new positions when that happens. Who, exactly, would be clamoring to hire a fresh horde of I-guess-they'll-do science graduates? Is that what we really need to put things over the top, technologically - more foot soldiers?

But I agree with the first part of the quoted statement, although different names might have come to my mind. My emphasis would be on "How do we get the smartest and most motivated people to go into science again?". Or perhaps "How do we educate future discoverers to live up to their potential?" I want to make sure that we don't miss the next John von Neumann or Claude Shannon, or that they don't decide to go off to the hedge fund business instead. I want to be able to find the great people who come out of obscurity, the Barbara McClintocks and Francis Cricks, and give them the chance to do what they're capable of. When someone seems to be born for a particular field, like R. B. Woodward for organic chemistry, I want them to have every chance to find their calling.

But even below that household-name level, there's a larger group of very intelligent, very inventive people who are mostly only known to those in their field. I have a list in my head right now for chemistry; so do you. These people we cannot have enough of, either - these are people who might be only a chance encounter or sudden thought away from a line of research that would lead to an uncontested Nobel Prize or billion-dollar industrial breakthrough.

To be fair, Slate may well get around to some of these thoughts; they're going to be writing about science education all month. But I wish that they hadn't gotten off on this particular foot. You've got to guard yourself against myths in this area. Here come a few of them:

1. Companies, in most cases, are not moving R&D operations overseas because they just can't find anyone here to do the jobs. They're doing that because it's cheaper that way (or appears to be; the jury's probably still out in many instances).

2. We are not, as far as I can see, facing the constant and well-known "critical shortage of scientists and engineers". There have been headlines with that phrase in them for decades, and I wish people would think about that before writing another one. Some fields may have shortages, but that's a different story entirely.

3. And that brings up another point, as mentioned above: while the earlier stages of science and math education are a common pathway, things then branch out, and how. Saying that there are so-many-thousand "science PhDs" is a pretty useless statistic, because by that point, they're scattered into all sorts of fields. A semiconductor firm will not be hiring me, for example.

There are more of these myths; examples are welcome in the comments. I'll no doubt return to this topic as more articles are published on it - it really is an important one. That's why it deserves more than "America needs more mediocre scientists". Sheesh.

Comments (53) + TrackBacks (0) | Category: General Scientific News | Who Discovers and Why

April 25, 2012

Drug Company Culture: It's Not Helping

Email This Entry

Posted by Derek

I wanted to call attention to a piece by Bruce Booth over at Forbes. He starts off from the Scannell paper in Nature Reviews Drug Discovery that we were discussing here recently, but he goes on to another factor. And it's a big one: culture.

Fundamentally, I think the bulk of the last decade’s productivity decline is attributable to a culture problem. The Big Pharma culture has been homogenized, purified, sterilized, whipped, stirred, filtered, etc., and lost its ability to ferment the good stuff required to innovate. This isn’t covered in most reviews of the productivity challenge facing our industry, because it’s nearly impossible to quantify, but it’s well known and a huge issue.

You really should read the whole thing, but I'll mention some of his main points. One of those is "The Tyranny of the Committee". You know, nothing good can ever be decided unless there are a lot of people in the room - right? And then that decision has to move to another room full of people who give it a different working-over, with lots more PowerPoint - right? And then that decision moves up to a group of higher-level people, who look at the slides again - or summaries of them - and make a collective decision. That's how it's supposed to work - uh, right?

Another is "Stagnation Through Risk Avoidance". Projects go on longer, and keep everyone busy, if the nasty issues aren't faced too quickly. And everyone has room to deflect blame when things go wrong, if plenty of work has been poured into the project, from several different areas, before the bad news hits. Most of the time, you know, some sort of bad news is waiting out there, so you want to have yourself (and your career) prepared beforehand - right? After all, several high-level committees signed off on this project. . .

And then there's "Organizational Entropy", which we've discussed around here, too. When the New, Latest, Really-Going-to-Work reorganization hits, as it does every three years or so, things slow down. They have to. And a nice big merger doesn't just slow things down, it brings everything to a juddering halt. The cumulative effect of these things can be deadly.

As Booth says, there are other factors as well. I'd add a couple to the list, myself: the tendency to think that If This Was Any Good, Someone Else Would Be Doing It (which is another way of being able to run for cover if things don't work out), and the general human sunk-cost fallacy of We've Come This Far; We Have to Get Something Out of This. But his main point stands, and has stood for many years. The research culture in many big drug companies stands in the way of getting things done. More posts on this to follow.

Comments (36) + TrackBacks (0) | Category: Drug Industry History | Life in the Drug Labs | Who Discovers and Why

February 13, 2012

Nobel Prizes in Chemistry For People Who Aren't Chemists

Email This Entry

Posted by Derek

Nobelist Roald Hoffmann has directly taken on a topic that many chemists find painful: why aren't more chemistry Nobel prizes given to, well. . .chemists?

". . .the last decade has been especially unkind to "pure" chemists, asa only four of ten Nobel awards could be classified as rewarding work comfortably ensconced in chemistry departments around the world. And five of the last ten awards have had a definite biological tinge to them.

I know that I speak from a privileged position, but I would urge my fellow chemists not to be upset."

He goes on to argue that the Nobel committee is actually pursuing a larger definition of chemistry than many chemists are, and that we should take it and run with it. Hoffmann says that the split between chemistry and biochemistry, back earlier in the 20th century, was a mistake. (And I think he's saying that if we don't watch out, we're going to make the same mistake again, all in the name of keeping the discipline pure.)

We're going to run into the same problem over and over again. What if someone discovers some sort of modified graphene that's useful for mimicking photosynthesis, and possibly turning ambient carbon dioxide into a useful chemical feedstock? What if nanotechnology really does start to get off the ground, or another breakthrough is made towards room-temperature superconductors, this time containing organic molecules? What would a leap forward in battery technology be, if not chemistry? Or schemes to modify secreted proteins or antibodies to make them do useful things no one has ever seen? Are we going to tell everyone "No, no. Those are wonderful, those are great discoveries. But they're not chemistry. Chemistry is this stuff over here, that we complain about not getting prizes for".

Comments (16) + TrackBacks (0) | Category: General Scientific News | Press Coverage | Who Discovers and Why

February 9, 2012

Roger Boisjoly and the Management Hat

Email This Entry

Posted by Derek

I'd like to take a few minutes to remember someone that everyone in R&D should spare a thought for: Roger Boisjoly. If you don't know the name, you'll still likely know something about his story: he was one of the Morton Thiokol engineers who tried, unsuccessfully, to stop the Challenger space shuttle launch in 1986.

Here's more on him from NPR (and from one of their reporters who helped break the inside story of that launch at the time). Boisjoly had realized that cold weather was degrading the O-ring seals on the solid rocket boosters, and as he told NPR, when he blew the whistle:

"We all knew if the seals failed the shuttle would blow up."

Armed with the data that described that possibility, Boisjoly and his colleagues argued persistently and vigorously for hours. At first, Thiokol managers agreed with them and formally recommended a launch delay. But NASA officials on a conference call challenged that recommendation.

"I am appalled," said NASA's George Hardy, according to Boisjoly and our other source in the room. "I am appalled by your recommendation."

Another shuttle program manager, Lawrence Mulloy, didn't hide his disdain. "My God, Thiokol," he said. "When do you want me to launch — next April?"

When NASA overruled the Thiokol engineers, it was with a quote that no one who works with data, on the front lines of a project, should ever forget: "Take off your engineer hat," they told Boisjoly and the others, "and put your management hat on". Well, the people behind that recommendation managed their way to seven deaths and a spectacular setback for the US space program. As Richard Feynman said in his famous Appendix F to the Rogers Commission report, "For a successful technology, reality must take precedence over public relations, for nature cannot be fooled".

Not even with our latest management techniques can nature be fooled, no matter how much six-sigma, 4S, and what-have-you gets deployed. Nothing else works, either. Nature does not care where you went to school, what it says on your business cards, how glossy your presentation is, or how expensive your shirt. That's one of the things I like most about it, and I think that any scientist should know what I'm talking about when I say that. The real world is the real world, and the data are the data.

But it's up to us to draw the conclusions from those numbers, and to get those conclusions across to everyone else. It may well be true, as Ed Tufte has maintained, that one of the tragedies of the Challenger launch was that the engineers involved weren't able to do a clear enough presentation of their conclusions. Update: see this account by Boisjoly himself on this point. It might not have been enough in the end; there seem to have been some people who were determined to launch the shuttle and determined to not hear anything that would interfere with that goal. We shouldn't forget this aspect of the story, though - it's incumbent on us to get our conclusions across as well as we can.

Well, then, what about Nature not caring about how slick our slide presentations are? That, to me, is the difference between "slick" and "effective". The former tries to gloss over things; the latter gets them across. If the effort you're putting into your presentation goes into keeping certain questions from being asked, then it's veered over to the slick side of the path. To get all Aristotelian about it, the means of persuasion should be heavy on the logos, the argument itself, and you should do the best job you can on that. Steer clear of the pathos, the appeal to the emotions, and you should already be in a position to have the ethos (the trustworthiness of the speaker's character) working for you, without having to make it a key part of your case.

But today, spend a moment to remember Roger Boisjoly, and everyone who's ever been in his position. And look out, be very careful indeed, if anyone ever asks you to put your management hat on.

Comments (27) + TrackBacks (0) | Category: Who Discovers and Why

February 1, 2012

Smugness as a Warning Sign

Email This Entry

Posted by Derek

At Xconomy, Luke Timmerman has words of wisdom for people in the small biotech world: "Never back smug". That's a quote from venture capitalist Bob More, and it rings true to me as well. Says Timmerman: ". . .it strikes me that life sciences has more than its share of spinmeisters, hypesters, smoke-and-mirrors actors, and worse."

Then there’s smugness, that arrogance or sense of superiority. Developing innovative new drugs or devices requires a strong ego, high IQ, stamina, an inspiring personality that attracts other people, and other things. Often, that combination spills over into smugness or arrogance. More says he watches for a lot of the same cues that his sister, a teacher, watches for. . .

I suspect that many readers will have encountered this trait (very occasionally) in their careers. There's a particular danger in the sciences, because (on the one hand) there's so much to know, that a given person does indeed have a good chance of knowing something that others don't. But on that inevitable other hand, this knowledge is set against a background of the huge, vast, pile of what we don't know - and if you keep that perspective, that knowing little smile just starts to look ridiculous.

And consider the audience - scientists, good ones, pride themselves on curiosity and being able to master new material. That means that "You don't have to know about that" or "Don't you worry about that, that's my department" (not to mention "Oh, you probably wouldn't understand") aren't going to get a good reception, not from anyone who could be of any help, anyway. Someone with that kind of attitude ends up driving away people who are smart, competent, and motivated - they won't put up with it.

Comments (8) + TrackBacks (0) | Category: Who Discovers and Why

November 21, 2011

Of Drug Research and Moneyball

Email This Entry

Posted by Derek

This piece on Michael Lewis and Billy Beane is nice to read, even if you haven't read Moneyball. (And if you haven't, consider doing so - it's not perfect, but it's well worth the time). Several thoughts occurred to me while revisiting all this, some of them actually relevant to drug discovery.

First off, a quick paean to Bill James. I read his Baseball Abstract books every year back in the 1980s, and found them exhilarating. And that's not just because I was following baseball closely. I was in grad school, and was up to my earlobes in day-to-day scientific research for the first time, and here was someone who applied the same worldview to a sport. Baseball had long been full of slogans and sayings, folk wisdom and beliefs, and James was willing to dig through the numbers to see which of these things were true and which weren't. His willingness to point out those latter cases, and the level of evidence he brought to those takedowns, was wonderful to see. I still have a lot of James' thoughts in my head; his books may well have changed my life a bit. I was already inclined that way, but his example of fearlessly questioning Stuff That Everybody Knows really strengthened my resolve to try to do the same.

A lot of people feel that way, I've found - there are James fans all over the place, people who were influenced the same way, at the same time, by the same books. It took a while for that attitude to penetrate the sport that those books were written about, though, as that article linked to above details. And its success once it did was part of a broader trend:

Innovation hurts. After Beane began using numbers to find players, the A’s’ scouts lost their lifelong purpose. In the movie, one of them protests to Pitt: “You are discarding what scouts have done for 150 years.” That was exactly right. Similar fates had been befalling all sorts of lesser-educated American men for years, though the process is more noticeable now than it was in 2003 when Moneyball first appeared. The book, Lewis agrees, is partly “about the intellectualisation of a previously not intellectual job. This has happened in other spheres of American life. I think the reason I saw the story so quickly is, this is exactly what happened on Wall Street while I was there. . .”

(That would be during the time of Liar's Poker, which is still a fun and interesting book to read, although it describes a time that's much longer ago than the calendar would indicate). And I think that the point is a good one. I'd add that the process has also been driven by the availability of computing power. When you had to bash the numbers by hand, with a pencil, there was only so much you could do. Spreadsheets and statistical software, graphing programs and databases - these have allowed people to extract meaning from numbers without having to haul up every shovelful by hand. And it's given power to those people who are adept at extracting that meaning (or at least, to the people willing to act on their conclusions).

The article quotes Beane as saying that Lewis understood what he was doing within minutes: "You’re arbitraging the mispricing of baseball players". And I don't think that it can be put in fewer words: that's exactly what someone with a Wall Street background would make of it, and it's exactly right. Now to our own business. Can you think of an industry whose assets are mispriced more grievously, and more routinely, than drug research?

Think about it. All those preclinical programs that never quite work out. All those targets that don't turn out to be the right target when you get to Phase II. All those compounds that blow up in Phase III because of unexpected toxicity. By working on them, by putting time and effort and money into them, we're pricing them. And too much of the time, we're getting that price wrong, terribly wrong.

That's what struck me when I read Moneyball several years ago. The problem is, drug research is not baseball, circa 1985. We're already full of statisticians, computational wizards, and sharp-eyed people who are used to challenging the evidence and weighing the facts. And even with that, this is the state we're in. The history of drug research is one attempt after another to find some edge, some understanding, that can be used to correct that constant mispricing of our assets. What to do? If the salt has lost its savour, wherewith shall it be salted?

Comments (17) + TrackBacks (0) | Category: Business and Markets | Drug Industry History | Who Discovers and Why

November 16, 2011

Ray Firestone's Take On Pharma's Plight

Email This Entry

Posted by Derek

And while I'm linking out to other opinion pieces, Ray Firestone has a cri de coeur in Nature Reviews Drug Discovery, looking back over his decades in the business. Regular readers of this blog (or of Ray Firestone!) will recognize all the factors he talks about, for sure. He talks about creativity (and its reception at some large companies), the size of an organization and its relation to productivity, and what's been driving a lot of decisions over the last ten or twenty years. To give you a sample:

if size is detrimental to an innovative research culture, mergers between large companies should make things worse — and they do. They have a strong negative personal impact on researchers and, consequently, the innovative research environment. For example, the merger of Bristol-Myers with Squibb in 1989, which I witnessed, was a scene of power grabs and disintegrating morale. Researchers who could get a good offer left the company, and the positions of those who remained were often decided by favouritism rather than talent. Productivity fell so low that an outside firm was hired to find out why. Of course, everyone knew what was wrong but few — if any — had the nerve to say it.

Comments (26) + TrackBacks (0) | Category: Business and Markets | Drug Industry History | Who Discovers and Why

October 31, 2011

"You Guys Don’t Do Innovation. The iPad. That’s Innovative"

Email This Entry

Posted by Derek

Thoughts from Matthew Herper at Forbes about Steve Jobs, modern medicine, what innovation means, and why it can be so hard in some fields. This is relevant to this post and its precursors.

Comments (41) + TrackBacks (0) | Category: Drug Development | Who Discovers and Why

September 7, 2011

Hard, Hard Work in the Lab

Email This Entry

Posted by Derek

Nature News has a good piece on the 24-hour laboratory. And they're not talking about automation; they're talking about grad students and post-docs staying around all night long. Interestingly, they're focusing on a specific lab to get the details:

But these members of neurosurgeon Alfredo Quiñones-Hinojosa's laboratory are accustomed to being the last out of the building. In a lab where the boss calls you at 6 a.m., schedules Friday evening lab meetings that can stretch past 10 p.m., and routinely expects you to work over Christmas, sticking it out until midnight on a holiday weekend is nothing unusual.

Many labs are renowned for their intense work ethic and long hours. When I set out to profile such a laboratory, I wanted to find out who is drawn to these environments, what it is really like to work there and whether long hours lead to more or better science. I approached eleven laboratories with reputations for being extremely hard-working. Ten principal investigators turned me down, some expressing a fear of being seen as 'slave-drivers'.

Number eleven — Quiñones-Hinojosa — had no such qualms. His work ethic is no secret. . . (He) is gregarious and charming, with an infectious energy and a habit of advertising his humility. But he also knows how intimidating he can be to the people who work for him, and he's not afraid to capitalize on that. In 2007, just two years after he started at Hopkins, he rounded a corner in the cafeteria and saw his lab members sitting at a table, talking and laughing. When they caught sight of him, he says, they stopped, stood up, and went straight back to the lab.

I think that most of us in chemistry have either worked for, or worked near, someone who ran their lab like that. The article makes a point of showing how this professor tries to select people for his group who either like it or will put up with it - and as long as everyone knows the score going in, I suppose that I don't have a problem with it. No one's forcing you to go work for Quiñones-Hinojosa, after all (but if you do, he'll certainly force you to work once you're there!) I would personally not make the choice to enter a lab like that one, but others might regard it as a worthwhile trade.

But there's the larger question of whether science has to (or even should) be done that way. As the article goes on to say:

But not everyone agrees that more hours yield more results. Dean Simonton, a psychology researcher at the University of California, Davis, who has studied scientific creativity, says that the pressure for publications, grants and tenure may have created a single-minded, "monastic" culture in science. But some research suggests that highly creative scientists tend to have broader interests and more hobbies than their less creative colleagues, he says. Chemist Stephen Buchwald of the Massachusetts Institute of Technology urges the members of his lab to take a month's holiday every year, and not to think about work when they're gone. "The fact is, I want people to be able to think," he says. "If they're completely beaten down, they're not going to be very creative."

My guess is that we're looking at two different kinds of science here. If you're in an area where there's a huge amount of data to be gathered and processed (as there is with the tumor samples in Quiñones-Hinojosa's lab), then the harder you crank, the more results come out the other end. They know what they have to do, and they've decided to do it twenty hours a day. On the other hand, you can't put in those hours thinking up the next revolutionary idea. "Be creative! Now! And don't stop being creative until midnight! I want to see a new electrophilic bond-forming reaction idea before you hit that door!" It doesn't work.

Robert Root-Bernstein's Discovering goes into some of these questions. It's an odd book that I recommend once in a while (much of it is in the form of fictional conversation), but it brings together a lot of information on scientific discovery and creativity that's very hard to find. A quote in it from J. J. Thomson seems appropriate here:

"If you pay a man a salary for doing research, he and you will want to have something to point to at the end of the year to show that the money has not been wasted. In promising work of the highest class, however, results do not come in this regular fashion, in fact years may pass without any tangible results being obtained, and the position of the paid worker would be very embarrassing and he would naturally take to work on a lower, or at any rate, different plane where he could be sure of getting year by year tangible results which would justify his salary. The position is this: You want this kind of research, but if you pay a man to do it, it will drive him to research of a different kind. The only thing to do is pay him for doing something else and give him enough leisure to do research for the love of it."

So there's room for both kinds of work (or should be). Just make sure that you know if you're getting into a pressure cooker beforehand, and that that's what you want or need to do. And if you're going to try the big creative route, you'd better have some interesting ideas to start from and some mental horsepower to work with, or you could spend the rest of your career wandering around in circles. . .

Comments (57) + TrackBacks (0) | Category: Who Discovers and Why

August 10, 2011

The Economics of the Drug Industry: Big Can't Be Big Enough?

Email This Entry

Posted by Derek

I wanted to extract and annotate a comment of Bernard Munos' from the most recent post discussing his thoughts on the industry. Like many of the ones in that thread, there's a lot inside it to think about:

(Arthur) De Vany has shown that the movie industry has developed clever tools (e.g., adaptive contracts) to deal with (portfolio uncertainty). That may come to pharma too, and in fact he is working on creating such tools. In the meantime, one can build on the work of Frank Scherer at Harvard, and Dietmar Harhoff. (Andrew Lo at MIT is also working on this). Using simulations, they have shown that traditional portfolio management (as practiced in pharma) does achieve a degree of risk mitigation, but far too little to be effective. In other words, because of the extremely skewed probability distributions in our industry, the residual variance, after you've done portfolio management, is large enough to put you out of business if you hit a dry spell. That's why big pharma is looking down patent cliffs that portfolio management was meant to avoid. Scherer's work also shows that the broader the pipeline, the better the risk mitigation. So we know directionally where to go, but we need more work to estimate the breadth of the pipeline that is needed to get risk under control. Pfizer's example, however, gives us a clue. With nearly $9 billion in R&D spend, and a massive pipeline, they were unable to avoid patent cliffs. If they could not do it, chances are that no single pharma company can create internally a pipeline that is broad enough to tame risk. . .

That's a disturbing thought, but it's likely to be correct. Pfizer has not, I think it's safe to say, achieved any sort of self-sustaining "take-off" into a world where it discovers enough new drugs to keep its own operations running steadily. And this, I think, was the implicit promise in all that merger and acquisition growth it undertook. Just a bit bigger, just a bit broader, and those wonderful synergies and economies of scale would kick in and make everything work out. No, we're not quite big enough yet to be sure that we're going to have a steady portfolio of big, profitable drugs, but this next big acquisition? Sure to do the trick. We're so close.

And this doesn't even take into account the problems with returns on research not scaling with size (due to the penalties of bureaucracy and merger uncertainty, among other factors). Those have just made the problems with the strategy apparent more quickly - but even if Pfizer's growth had gone according to plan, and they'd turned into that great big (but still nimble and innovative!) company of their dreams, it might well still not have been enough. So here's the worrisome thesis: What size drug portfolio is big enough to avoid too high a chance of ruin? Bigger than any of us have.
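The Scherer-style point about skewed distributions and residual variance is easy to see with a toy Monte Carlo sketch. Everything in it is made up for illustration - the 5% per-project success rate, the independence assumption, and "dry spell" defined crudely as zero successes in the portfolio - but it shows how slowly that risk falls as a pipeline broadens:

```python
import random

def dry_spell_risk(n_projects, p_success=0.05, n_trials=20_000, seed=42):
    """Monte Carlo estimate of the chance that a portfolio of independent
    projects yields zero successes (a "dry spell").  The 5% success rate
    is an illustrative guess, not industry data."""
    rng = random.Random(seed)
    misses = sum(
        1 for _ in range(n_trials)
        if not any(rng.random() < p_success for _ in range(n_projects))
    )
    return misses / n_trials

for n in (10, 30, 100):
    print(f"{n:>3} projects: dry-spell risk ~ {dry_spell_risk(n):.3f}")
```

The risk does shrink with breadth, but it takes a portfolio far larger than any one company runs to get it comfortably small - and real projects are correlated (same targets, same mechanisms), which makes the effective breadth smaller still. That's the residual-variance argument in miniature.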

Here's de Vany's book on the economics of Hollywood, for those who are interested. That analogy has been made many times, and there's a lot to it. Still, there are some key divergences: for one thing, movies are more of a discretionary item than pharmaceuticals are (you'd think). People have a much different attitude towards their physical well-being than they have towards their entertainment options. Then again, movies don't have to pass the FDA; the customers get to find out whether or not they're efficacious after they've paid their money.

On the other hand, copyright lasts a lot longer than a patent does (although it's a lot easier along the way to pirate a movie than it is to pirate a drug). And classic movies, as emotional and aesthetic experiences, don't get superseded in quite the same way that classic pharmaceuticals do. Line extension is much easier in the movie business, where people actually look forward to some of the sequels. Then there's all the ancillary merchandise that a blockbuster summer movie can spin off - no one's making Lipitor collectibles (and if I'm wrong about that, I'd prefer not to know).

Comments (47) + TrackBacks (0) | Category: Business and Markets | Drug Industry History | Who Discovers and Why

August 5, 2011

Bernard Munos Rides Again

Email This Entry

Posted by Derek

I've been meaning to link to Matthew Herper's piece on Bernard Munos and his ideas on what's wrong with the drug business. Readers will recall several long discussions here about Munos and his published thoughts (Parts one, two, three and four). A take-home message:

So how can companies avoid tossing away billions on medicines that won’t work? By picking better targets. Munos says the companies that have done best made very big bets in untrammeled areas of pharmacology. . .Munos also showed that mergers—endemic in the industry—don’t fix productivity and may actually hurt it. . . What correlated most with the number of new drugs approved was the total number of companies in the industry. More companies, more successful drugs.

I should note that the last time I saw Munos, he was emphasizing that these big bets need to be in areas where you can get a solid answer in the clinic in the shortest amount of time possible - otherwise, you're really setting yourself up with too much risk. Alzheimer's, for example, is a disease that he was advising that drug developers basically stay away from: tricky unanswered medical questions, tough drug development problems, followed up by big huge long expensive clinical trials. If you're going to jump into a wild, untamed medical area (as he says you should), then pick one where you don't have to spend years in the clinic. (And yes, this would seem to mean a focus on an awful lot of orphan diseases, the way I look at it).

But, as the article goes on to say, the next thought after all this is: why do your researchers need to be in the same building? Or the same site? Or in the same company? Why not spin out the various areas and programs as much as possible, so that as many new ideas get tried out as can be tried? One way to interpret that is "Outsource everything!" which is where a lot of people jump off the bus. But he's not thinking in terms of "Keep lots of central control and make other people do all your grunt work". His take is more radical:

(Munos) points to the Pentagon’s Defense Advanced Research Projects Agency, the innovation engine of the military, which developed GPS, night vision and biosensors with a staff of only 140 people—and vast imagination. What if drug companies acted that way? What areas of medicine might be revolutionized?

DARPA is a very interesting case, which a lot of people have sought to emulate. From what I know of them, their success has indeed been through funding - lightly funding - an awful lot of ideas, and basically giving them just enough money to try to prove their worth before doling out any more. They have not been afraid of going after a lot of things that might be considered "out there", which is to their credit. But neither have they been charged with making money, much less reporting earnings quarterly. I don't really know what the intersection of DARPA and a publicly traded company might look like (the old Bell Labs?), or if that's possible today. If it isn't, so much the worse for us, most likely.

Comments (114) + TrackBacks (0) | Category: Alzheimer's Disease | Business and Markets | Clinical Trials | Drug Development | Drug Industry History | Who Discovers and Why

August 1, 2011

Chinese Research: Not Quite the Juggernaut?

Email This Entry

Posted by Derek

A perennial topic around here has been the state of scientific research in China (and other up-and-coming nations). There's no doubt that the number of scientific publications from China has been increasing (be sure to read the comments to that post; there's more to it than I made of it). But many of these papers, on closer inspection, are junk, and are published in junk journals of no impact whatsoever. Mind you, that's not an exclusively Chinese problem - Sturgeon's Law is hard to get away from, and there's a lot of mediocre (and worse than mediocre) stuff coming out of every country's scientific enterprise.

But what about patents? The last couple of years have seen many people predicting that China would soon be leading the world in patent applications as well, which can be the occasion for pride or hand-wringing, depending on your own orientation. But there's a third response: derision. And that's what Anil Gupta and Haiyan Wang provide in the Wall Street Journal. They think that most of these filings are junk:

But more than 95% of the Chinese applications were filed domestically with the State Intellectual Property Office—and the vast majority cover "innovations" that make only tiny changes on existing designs. A better measure is to look at innovations that are recognized outside China—at patent filings or grants to China-origin inventions by the world's leading patent offices, the U.S., the EU and Japan. On this score, China is way behind.

The most compelling evidence is the count of "triadic" patent filings or grants, where an application is filed with or patent granted by all three offices for the same innovation. According to the Organization for Economic Cooperation and Development, in 2008, the most recent year for which data are available, there were only 473 triadic patent filings from China versus 14,399 from the U.S., 14,525 from Europe, and 13,446 from Japan.

Starkly put, in 2010 China accounted for 20% of the world's population, 9% of the world's GDP, 12% of the world's R&D expenditure, but only 1% of the patent filings with or patents granted by any of the leading patent offices outside China. Further, half of the China-origin patents were granted to subsidiaries of foreign multinationals. . .

The authors are perfectly willing to admit that this probably will change with time. But time can make things worse, too: as this editorial in Science last year made clear, the funding of research in China has some real problems. The authors of that piece are professors at two large Chinese universities, and would presumably know what they're talking about. For the biggest grants, they say:

. . .the key is the application guidelines that are issued each year to specify research areas and projects. Their ostensible purpose is to outline “national needs.” But the guidelines are often so narrowly described that they leave little doubt that the “needs” are anything but national; instead, the intended recipients are obvious. Committees appointed by bureaucrats in the funding agencies determine these annual guidelines. For obvious reasons, the chairs of the committees often listen to and usually cooperate with the bureaucrats. “Expert opinions” simply reflect a mutual understanding between a very small group of bureaucrats and their favorite scientists. This top-down approach stifles innovation and makes clear to everyone that the connections with bureaucrats and a few powerful scientists are paramount. . .

Given time, this culture could be changed. Or it could just become more entrenched as the amounts of money become larger and larger and the stakes become higher. China could end up as the biggest scientific and technological powerhouse the world has ever seen - or it could end up never living up to its potential and wasting vast resources on cargo-cult theatrics. It's way too early to say. But if many of those Chinese patents are just being written because someone's figured out that the way to get money and prestige is to file patents - never mind if they're good for anything - then that's not a good sign.

Comments (31) + TrackBacks (0) | Category: Patents and IP | The Scientific Literature | Who Discovers and Why

July 22, 2011

Right Up Next to Academia

Posted by Derek

Here's one of Pfizer's get-close-to-academia research centers, being established near UCSF. The idea is that you not only want to do deals with academic research centers (and associated small biotechs), you also want to be physically present with them:

"Proximity leads to progress; this promises to be a very strong liaison," said Dr. Warner Greene, director of virology and immunology research at Gladstone Institutes, a basic-science research nonprofit at Mission Bay that will sublease space to Pfizer. "There is a valley of death for many basic-science discoveries that have significant promise because they are not far enough advanced to be of interest to a biotech or pharmaceutical company. By forming closer relationships between Pfizer and biotech companies, I think more creative solutions can be had for moving research down the pipeline."

Now, I would like to believe that this is true, but what I'd like to believe doesn't necessarily correspond to reality. I do think that (for various reasons) it will hurt your small biopharma company's chances if you establish it in, say, Sioux Falls, Yakima, or Louisville. So being "out of the loop" can hurt, but does it follow that being ever more tightly in it helps? Does anyone have evidence that speaks to this?

Comments (28) + TrackBacks (0) | Category: Who Discovers and Why

May 18, 2011

Funding People, Not Projects?

Posted by Derek

Tim Harford (author of The Undercover Economist and The Logic of Life) has a new book coming out, called Adapt. It's about success and failure in various kinds of projects, and excerpts from it have been running over at Slate. The first installment was a look at the development (messy and by no means inevitable) of the Spitfire before World War II (I'd also add the de Havilland Mosquito as another example of a great plane developed through sheer individual persistence). And the second one is on biomedical research, which takes it right into the usual subject matter around here:

In 1980, Mario Capecchi applied for a grant from the U.S. National Institutes of Health. . .Capecchi described three separate projects. Two of them were solid stuff with a clear track record and a step-by-step account of the project deliverables. Success was almost assured.

The third project was wildly speculative. Capecchi was trying to show that it was possible to make a specific, targeted change to a gene in a mouse's DNA. It is hard to overstate how ambitious this was, especially back in 1980. . .The NIH decided that Capecchi's plans sounded like science fiction. They downgraded his application and strongly advised him to drop the speculative third project. However, they did agree to fund his application on the basis of the other two solid, results-oriented projects. . .

What did Capecchi do? He took the NIH's money, and, ignoring their admonitions, he poured almost all of it into his risky gene-targeting project. It was, he recalls, a big gamble. If he hadn't been able to show strong enough initial results in the three-to-five-year time scale demanded by the NIH, they would have cut off his funding. Without their seal of approval, he might have found it hard to get funding from elsewhere. His career would have been severely set back, his research assistants looking for other work. His laboratory might not have survived.

Well, it worked out. But it really did take a lot of nerve; Harford's right about that. He's not bashing the NIH, though - as he goes on to say, their granting system is pretty similar to what any reasonable gathering of responsible people would come up with. But:

The NIH's expert-led, results-based, rational evaluation of projects is a sensible way to produce a steady stream of high-quality, can't-go-wrong scientific research. But it is exactly the wrong way to fund lottery-ticket projects that offer a small probability of a revolutionary breakthrough. It is a funding system designed to avoid risks—one that puts more emphasis on forestalling failure than achieving success. Such an attitude to funding is understandable in any organization, especially one funded by taxpayers. But it takes too few risks. It isn't right to expect a Mario Capecchi to risk his career on a life-saving idea because the rest of us don't want to take a chance.

Harford goes on to praise the Howard Hughes Medical Institute's investigator program, which is more explicitly aimed at funding innovative people and letting them try things, rather than the "Tell us what you're going to discover" style of many other granting agencies. Funding research in this style has been advocated by many people over the years, including a number of scientific heroes of mine, and the Hughes approach seems to be catching on.

It isn't straightforward. You want to make sure that you're not just adding to the Matthew Effect by picking a bunch of famous names and handing them the cash. (That's the debate in the UK after a recent proposal to emulate the HHMI model). No, you're better off finding people with good ideas and the nerve to pursue them, whether they've made a name for themselves yet or not, but that's not an easy task.

Still, I'm very happy that these changes in academic funding are in the air. I worry that our system is sclerotic and less able to produce innovations than it should be, and shaking it up a bit is just what's needed.

Comments (21) + TrackBacks (0) | Category: Who Discovers and Why

May 5, 2011

Translation Needed

Posted by Derek

The "Opinionator" blog at the New York Times is trying here, but there's something not quite right. David Bornstein, in fact, gets off on the wrong foot entirely with this opening:

Consider two numbers: 800,000 and 21.

The first is the number of medical research papers that were published in 2008. The second is the number of new drugs that were approved by the Food and Drug Administration last year.

That’s an ocean of research producing treatments by the drop. Indeed, in recent decades, one of the most sobering realities in the field of biomedical research has been the fact that, despite significant increases in funding — as well as extraordinary advances in things like genomics, computerized molecular modeling, and drug screening and synthesization — the number of new treatments for illnesses that make it to market each year has flatlined at historically low levels.

Now, "synthesization" appears to be a new word, and it's not one that we've been waiting for, either. "Synthesis" is what we call it in the labs; I've never heard of synthesization in my life, and hope never to again. That's a minor point, perhaps, but it's an immediate giveaway that this piece is being written by someone who knows nothing about their chosen topic. How far would you keep reading an article that talked about mental health and psychosization? A sermon on the Book of Genesization? Right.

The point about drug approvals being flat is correct, of course, although not exactly news by now. But comparing it to the total number of medical papers published that same year is bizarre. Many of these papers have no bearing on the discovery of drugs, not even potentially. Even if you wanted to make such a comparison, you'd want to run the clock back at least twelve years to find the papers that might have influenced the current crop of drug approvals. All in all, it's a lurching start.

Things pick up a bit when Bornstein starts focusing on the Myelin Repair Foundation as an example of current ways to change drug discovery. (Perhaps it's just because he starts relaying information directly that he's been given?) The MRF is an interesting organization that's obviously working on a very tough problem - having tried to make neurons grow and repair themselves more than once in my career, I can testify that it's most definitely nontrivial. And the article tries to make a big distinction between the way that they're funding research as opposed to the "traditional NIH way".

The primary mechanism for getting funding for biomedical research is to write a grant proposal and submit it to the N.I.H. or a large foundation. Proposals are reviewed by scientists, who decide which ones are most likely to produce novel discoveries. Only a fraction get funded and there is little encouragement for investigators to coordinate research with other laboratories. Discoveries are kept quiet until they are published in peer-reviewed journals, so other scientists learn about them only after a delay of years. In theory, once findings are published, they will be picked up by pharmaceutical companies. In practice, that doesn’t happen nearly as often as it should.

Now we're back to what I'm starting to think of as the "translational research fallacy". I wrote about that here; it's the belief that there are all kinds of great ideas and leads in drug discovery that are sitting on the shelf, because no one in the industry has bothered to take a look. And while it's true that some things do slip past, I'm really not sure that I can buy into this whole worldview. My belief is that many of these things are not as immediately actionable as their academic discoverers believe them to be, for one thing. (And as for the ones that clearly are, those are worth starting a company around, right?) There's also the problem that not all of these discoveries can even be reproduced.

Bornstein's article does get it right about this topic, though:

What’s missing? For a discovery to reach the threshold where a pharmaceutical company will move it forward what’s needed is called “translational” research — research that validates targets and reduces the risk. This involves things like replicating and standardizing studies, testing chemicals (potentially millions) against targets, and if something produces a desired reaction, modifying compounds or varying concentration levels to balance efficacy and safety (usually in rats). It is repetitive, time consuming work — often described as “grunt work.” It’s vital for developing cures, but it’s not the kind of research that will advance the career of a young scientist in a university setting.

“Pure science is what you’re rewarded for,” notes Dr. Barres. “That’s what you get promoted for. That’s what they give the Nobel Prizes for. And yet developing a drug is a hundred times harder than getting a Nobel Prize. . .

That kind of research is what a lot of us spend all our days doing, and there's plenty of work to fill them. As for developing a drug being harder than getting a Nobel Prize, well, apples and oranges, but there's something to it, still. The drug will cost you a lot more money along the way, but with the potential of making a lot more at the end. Bornstein's article goes off the rails again, though, when he says that companies are reluctant to go into this kind of work when someone else owns the IP rights. That's technically true, but overall, the Bayh-Dole Act on commercialization of academic research (despite complications) has brought many more discoveries to light than it's hindered, I'd say. And he's also off base about how this is the reason that drug companies make "me too" compounds. No, it's not because we don't have enough ideas to work on, unfortunately. It's because most of them (and more over the years) don't go anywhere.

Bornstein's going to do a follow-up piece focusing more on the Myelin Repair people, so I'll revisit the topic then. What I'm seeing so far is an earnest, well-meaning attempt to figure out what's going on with drug discovery - but it's not a topic that admits of many easy answers. That's a problem for journalists, and a problem for those of us who do it, too.

Comments (26) + TrackBacks (0) | Category: "Me Too" Drugs | Academia (vs. Industry) | Drug Development | Who Discovers and Why

April 13, 2011

The Fox's Lament

Posted by Derek

That hedgehog/fox distinction reminds me of my own graduate school experience. I'm a natural fox myself; I've always had a lot of interests (scientifically and otherwise). So a constant diet of my PhD project got to be a strain after a while. I was doing a total synthesis of a natural product, and for the last couple of years I was the only person on it. So it was me or nothing; if I didn't set up some reactions, no reactions got run.

And I don't mind admitting that I got thoroughly sick of my synthesis and my molecule by the time I was done with it. It really went against my nature to come in and beat on the same thing for that length of time, again and again. I kept starting unrelated things, all of which seemed much more interesting, and then having to kill them off because I knew that they were prolonging my time to the degree. Keep in mind that most of my time was, necessarily, spent making starting material and dragging it up the mountainside. I only spent comparatively brief intervals working up at the frontier of my synthesis, so (outside of any side projects) my time was divided between drudgery and fear.

My doubts about the utility of the whole effort didn't help, I'm sure. But since coming to industry, I've happily worked on many projects whose prospects I was none too sure of. At least in those cases, though, you know that it's being done in a good cause (Alzheimer's, cancer, etc.) - it's just that you may worry that your particular approach has a very low chance of working. In my total synthesis days, I wasn't too sanguine about the approach, and by the end, I wasn't so sure that it was in a good cause, either. Except the cause of getting a degree and getting the heck out of grad school, naturally. That one I could really put my back into. As I used to say, "The world does not need another synthesis of a macrolide antibiotic. But I do."

Comments (9) + TrackBacks (0) | Category: Graduate School | Who Discovers and Why

Hedgehogs and Foxes Holding Erlenmeyer Flasks

Posted by Derek

Over at The Curious Wavefunction, there's an interesting post on Isaiah Berlin's famous hedgehog/fox distinction (which goes back a long way) and how it applies to chemistry. Wavefunction makes the case, which I hadn't thought through in such detail, that chemistry has for a long time been a field for foxes. That is, our famous names tend to be people who jump around from area to area as their interests take them, rather than people who spend their careers digging into one particular problem.

At the outset, one thing seems clear: chemistry is much more of a fox's game than a hedgehog's. This is in contrast to theoretical physics or mathematics which have sported many spectacular hedgehogs. It's not that deep thinking hedgehogs are not valuable in chemistry. It's just that diversity in chemistry is too important to be left to hedgehogs alone. In chemistry more than in physics or math, differences and details matter. Unlike mathematics, where a hedgehog like Andrew Wiles spends almost his entire lifetime wrestling with Fermat's Last Theorem, chemistry affords few opportunities for solving single, narrowly defined problems through one approach, technique or idea. Chemists intrinsically revel in exploring a diverse and sometimes treacherous hodgepodge of rigorous mathematical analysis, empirical fact-stitching, back of the envelope calculations and heuristic modeling. These are activities ideally suited to foxes' temperament. One can say something similar about biologists.

I think he's right, and I think that that's rarely been more true than now. Whitesides, Schreiber, Sharpless. . .start listing big names and you get a list of foxes. As it did for Wavefunction, the most recent hedgehog that springs to my mind in organic chemistry was H. C. Brown, although if Buchwald continues to work on metal-catalyzed amine couplings for another forty years he could come close. Am I missing anyone? Nominations welcome in the comments.

Comments (26) + TrackBacks (0) | Category: Who Discovers and Why

March 4, 2011

Science: Good For Anything Else?

Posted by Derek

One of the side topics that's come up around here recently is the value of a scientific background in other jobs (and for life in general). I've thought about that for some time. Growing up, I was always interested in science, and I was always experimenting with things. I went through cycles of messing around in my spare time with the microscope, the telescope, chemistry experiments, electricity and radio, and back around again. I wasn't all that comprehensive and rigorous about any of it, but I think I did get the basic ideas of a scientist's world view.

Those, to me, are: (1) the natural world is independent of human thought. Your beliefs may be of interest to you, but the physical world is indifferent to them. (2) The natural world has rules. They may not be very clear, and they may be wildly complex, but there are rules, and they can be potentially figured out. (3) The way to figure them out, if you're so inclined, is to ask questions of the world in an organized fashion. These can be observations (in which case, the question is "I wonder what's there and what it looks like?"), or experiments ("I wonder what happens if I do this?"). And (4), since the world is so complex, you'd better make your questions as well-thought-out as possible. Try to identify all the variables you can, only mess with one of them at a time if at all possible, and value reproducibility very highly.

It's surprising, when you look at the record, to find out how little this view of the world has held sway over human history. There were various well-known outbreaks of such thinking in the past, but it's really only been a continuous effort in the last few centuries, and not everywhere in the world, by any means. (If you're interested in seeing just what a profound change has resulted in human affairs, I can recommend A Farewell to Alms.) The results, for better or worse, we see all around us, not least the keyboard I'm using to type these thoughts, and the network that I'm going to send them out over in a few minutes.

So in one respect, a scientific outlook must be worth something, since it's the backdrop for the entire modern world. But it's possible, more than possible, to live in it without being aware of things in that way. I think that for any kind of work that requires brainpower and adaptability, a scientific background should come in handy. But how handy? That's my question for today. I know what I'd like the answer to be - but see that first principle above. The world doesn't have to give you the answers you like, or even care if you like one at all.

For some possible background, see the recent question "What scientific concept would improve everybody's cognitive toolkit?". I was invited to contribute to this one as well, but wasn't able to put my thoughts in a coherent enough form.

Update: fixed the numbering of the points. Yessireebob, I'm a scientist, all right.

Comments (52) + TrackBacks (0) | Category: Who Discovers and Why

November 9, 2010

Where Drugs Come From: By Country

Posted by Derek

The same paper I was summarizing the other day has some interesting data on the 1998-2007 drug approvals, broken down by country and region of origin. The first thing to note is that the distribution by country tracks, quite closely, the corresponding share of the worldwide drug market. The US discovered nearly half the drugs approved during that period, and accounts for roughly that amount of the market, for example. But there are two big exceptions: the UK and Switzerland, which both outperform for their size.

In case you're wondering, the league tables look like this: the US leads in the discovery of approved drugs, by a wide margin (118 out of the 252 drugs). Then Japan, the UK and Germany are about equal, in the low 20s each. Switzerland is next at 13, France at 12, and then the rest of Europe put together adds up to 29. Canada and Australia put together add up to nearly 7, and the entire rest of the world (including China and India) is about 6.5, with most of that being Israel.

But while the US may be producing the number of drugs you'd expect, a closer look shows that it's still a real outlier in several respects. The biggest one, to my mind, comes when you use that criterion for innovative structures or mechanisms versus extensions of what's already been worked on, as mentioned in the last post. Looking at it that way, almost all the major drug-discovering countries in the world were tilted towards less innovative medicines. The only exceptions are Switzerland, Canada and Australia, and (very much so) the US. The UK comes close, running nearly 50/50. Germany and Japan, though, especially stand out as the kings of follow-ons and me-toos, and the combined rest-of-Europe category is nearly as unbalanced.

What about that unmet-medical-need categorization? Looking at which drugs were submitted here in the US for priority review by the FDA (the proxy used across this whole analysis), again, the US-based drugs are outliers, with more priority reviews than not. Only in the smaller contributions from Australia and Canada do you see that, although Switzerland is nearly even. But in both these breakdowns (structure/mechanism and medical need) it's the biotech companies that appear to have taken the lead.

And here's the last outlier that appears to tie all these together: in almost every country that discovered new drugs during that ten-year period, the great majority came from pharma companies. The only exception is the US: 60% of our drugs have the fingerprints of biotech companies on them, either alone or from university-derived drug candidates. In very few other countries do biotech-derived drugs make much of a showing at all.

These trends show up in sales as well. Only in the US, UK, Switzerland, and Australia did the per-year-sales of novel therapies exceed the sales of the follow-ons. Germany and Japan tend to discover drugs with higher sales than average, but (as mentioned above) these are almost entirely followers of some sort.

Taken together, it appears that the US biotech industry has been the main driver of innovative drugs over the past ten years. I don't want to belittle the follow-on compounds, because they are useful. (As pointed out here before, it's hard for one of those compounds to be successful unless it really represents some sort of improvement over what's already available). At the same time, though, we can't run the whole industry by making better and better versions of what we already know.

And the contributions of universities - especially those in the US - have been strong, too. While university-derived drugs are a minority, they tend to be more innovative, probably because of their origins in basic research. There's no academic magic involved: very few, if any, universities try deliberately to run a profitable drug-discovery business - and if any start to, I confidently predict that we'll see more follow-on drugs from them as well.

Discussing the reasons for all this is another post in itself. But whatever you might think about the idea of American exceptionalism, it's alive in drug discovery.

Comments (34) + TrackBacks (0) | Category: Academia (vs. Industry) | Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why

November 5, 2010

Peer Review's Problems

Posted by Derek

Over at Ars Technica, here's an excellent look at the peer review process, which I last spoke about here. The author, Chris Lee, rightly points out that we ask it to do several different things, and it's not equally good at all of them.

His biggest problem is with the evaluation of research proposals for grants, and that has indeed been a problem for many years. Reviewing a paper, where you have to evaluate things that other people have done, can be hard enough. But evaluating what people hope to be able to do is much harder:

. . .Reviewers are asked to evaluate proposed methods, but, given that the authors themselves don't yet know if the methodology will work as described, how objective can they be? Unless the authors are totally incompetent and are proposing to use a method that is known not to work in the area they wish to use it, the reviewer cannot know what will happen.

As usual, there is no guarantee that the reviewer is more of an expert in the area than the authors. In fact, it's more often the case that they're not, so whose judgement should be trusted? There is just no way to tell a good researcher combined with incompetent peer review from an incompetent researcher and good peer review.

Reviewers are also asked to judge the significance of the proposed research. But wait—if peer review fails to consistently identify papers that are of significance when the results are in, what chance does it have of identifying significant contributions that haven't yet been made? Yeah, get out your dice. . .

And as he goes on to point out, the consequences of getting a grant proposal reviewed poorly are much worse than the ones from getting a paper's review messed up. These are both immediate (for the researcher involved) and systemic:

There is also a more insidious problem associated with peer review of grant applications. The evaluation of grant proposals is a reward-and-punishment system, but it doesn't systematically reward good proposals or good researchers, and it doesn't systematically reject bad proposals or punish poor researchers. Despite this, researchers are wont to treat it as if it was systematic and invest more time seeking the rewards than they do in performing active research, which is ostensibly where their talents lie.

Effectively, in trying to be objective and screen for the very best proposals, we waste a lot of time and fail to screen out bad proposals. This leads to a lot of cynicism and, although I am often accused of being cynical, I don't believe it is a healthy attitude in research.

I fortunately haven't ever had to deal with this process, having spent my scientific career in industry, but we have our own problems with figuring out which projects to advance and why. Anyone who's interested in peer review, though, should know about the issues that Lee is bringing up. Well worth a read.

Comments (22) + TrackBacks (0) | Category: The Scientific Literature | Who Discovers and Why

November 4, 2010

Where Drugs Come From: The Numbers

Posted by Derek

We can now answer the question: "Where do new drugs come from?". Well, we can answer it for the period from 1998 on, at any rate. A new paper in Nature Reviews Drug Discovery takes on all 252 drugs approved by the FDA from then through 2007, and traces each of them back to their origins. What's more, each drug is evaluated by how much unmet medical need it addressed and how scientifically innovative it was. Clearly, there's going to be room for some argument in any study of this sort, but I'm very glad to have it, nonetheless. Credit where credit's due: who's been discovering the most drugs, and who's been discovering the best ones?

First, the raw numbers. In the 1998-2007 period, the 252 drugs break down as follows. Note that some drugs have been split up, with partial credit being assigned to more than one category. Overall, we have:

58% from pharmaceutical companies.
18% from biotech companies.
16% from universities, transferred to biotech.
8% from universities, transferred to pharma.

That sounds about right to me. And finally, I have some hard numbers to point to when I next run into someone who tries to tell me that all drugs are found with NIH grants, and that drug companies hardly do any research. (I know that this sounds like the most ridiculous strawman, but believe me, there are people - who regard themselves as intelligent and informed - who believe this passionately, in nearly those exact words). But fear not, this isn't going to be a relentless pharma-is-great post, because it's certainly not a pharma-is-great paper. Read on. . .

Now to the qualitative rankings. The author used FDA priority reviews as a proxy for unmet medical need, but the scientific innovation rating was done basically by hand, evaluating both a drug's mechanism of action and how much its structure differed from what had come before. Just under half (123) of the drugs during this period were in for priority review, and of those, we have:

46% from pharmaceutical companies.
30% from biotech companies.
23% from universities (transferred to either biotech or pharma).

That shows the biotech- and university-derived drugs outperforming when you look at things this way, which again seems about right to me. Note that this means that the majority of biotech submissions are priority reviews, and the majority of pharma drugs aren't. And now to innovation - 118 of the drugs during this period were considered to have scientific novelty (46%), and of those:

44% were from pharmaceutical companies.
25% were from biotech companies, and
31% were from universities (transferred to either biotech or pharma).

The university-derived drugs clearly outperform in this category. What this also means is that 65% of the pharma-derived drugs get classed as "not innovative", and that's worth another post all its own. Now, not all the university-derived drugs showed up as novel, either - but when you look closer, it turns out that the majority of the novel stuff from universities gets taken up by biotech companies rather than by pharma.
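That "65%" follows from the raw figures quoted earlier in the post. Here's a quick back-of-the-envelope check (my own arithmetic from the stated shares, not numbers taken directly from the paper):

```python
# Rough check of the "65% not innovative" claim for pharma-derived drugs,
# using only the figures quoted in this post.
total_approvals = 252        # FDA approvals, 1998-2007
pharma_share = 0.58          # fraction of approvals from pharma companies
novel_drugs = 118            # drugs rated scientifically innovative
pharma_novel_share = 0.44    # pharma's fraction of the novel drugs

pharma_drugs = total_approvals * pharma_share            # ~146 drugs
pharma_novel = novel_drugs * pharma_novel_share          # ~52 drugs
not_innovative = 1 - pharma_novel / pharma_drugs

print(f"{not_innovative:.0%}")  # 64%, in line with the ~65% quoted above
```

The small gap from 65% comes from rounding in the published percentages; the point stands either way.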

So why does this happen? This paper doesn't put it in one word, but I will: money. It turns out that the novel therapies are disproportionately orphan drugs (which makes sense), and although there are a few orphan-drug blockbusters, most of them have lower sales. And indeed, the university-to-pharma drugs tend to have much higher sales than the university-to-biotech ones. The bigger drug companies are (as you'd expect) evaluating compounds on the basis of their commercial potential, which means what they can add to their existing portfolio. On the other hand, if you have no portfolio (or have only a small one) then any commercial prospect is worth a look. One hundred million dollars a year in revenue would be welcome news for a small company's first drug to market, whereas Pfizer wouldn't even notice it.

So (in my opinion) it's not that the big companies are averse to novel therapies. You can see them taking whacks at new mechanisms and unmet needs, but they tend to do it in the large-market indications - which I think may well be more likely to fail. That's due to two effects: if there are existing therapies in a therapeutic area, they probably represent the low-hanging fruit, biologically speaking, making later approaches harder (and giving them a higher bar to clear). And if there's no decent therapy at all in some big field, that probably means that none of the obvious approaches have worked at all, and that it's just a flat-out hard place to make progress. In the first category, I'm thinking of HDL-raising ideas in cardiovascular and PPAR alpha-gamma ligands for diabetes. In the second, there are CB1 antagonists for obesity and gamma-secretase inhibitors in Alzheimer's (and there are plenty more examples in each class). These would all have done new things in big markets, and they've all gone down in expensive flames. Small companies have certainly taken their cuts at these things, too, but they're disproportionately represented in smaller indications.

There's more interesting stuff in this paper, particularly on what regions of the world produce drugs and why. I'll blog about it again, but this is plenty to discuss for now. The take-home so far? The great majority of drugs come from industry, but the industry is not homogeneous. Different companies are looking for different things, and the smaller ones are, other things being equal, more likely to push the envelope. More to come. . .

Comments (35) + TrackBacks (0) | Category: Academia (vs. Industry) | Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why

October 7, 2010

More on Garage Biotech

Email This Entry

Posted by Derek

Nature has a good report and accompanying editorial on garage biotechnology, which I wrote about earlier this year.

. . .Would-be 'biohackers' around the world are setting up labs in their garages, closets and kitchens — from professional scientists keeping a side project at home to individuals who have never used a pipette before. They buy used lab equipment online, convert webcams into US$10 microscopes and incubate tubes of genetically engineered Escherichia coli in their armpits. (It's cheaper than shelling out $100 or more on a 37 °C incubator.) Some share protocols and ideas in open forums. Others prefer to keep their labs under wraps, concerned that authorities will take one look at the gear in their garages and label them as bioterrorists.

For now, most members of the do-it-yourself, or DIY, biology community are hobbyists, rigging up cheap equipment and tackling projects that — although not exactly pushing the boundaries of molecular biology — are creative proof of the hacker principle. . .

The article is correct when it says that a lot of what's been written about the subject is hype. But not all of it is. I continue to think that as equipment becomes cheaper and more capable, which is happening constantly, more and more areas of research will move into the "garage-capable" category. Biology is suited to this sort of thing, because there are such huge swaths of it that aren't well understood, and there are always more experiments to be set up than anyone can run.

And it's encouraging to see that the FBI isn't coming down hard on these people, but rather trying to stay in touch with them and learn about the field. Considering where and how some of the largest tech companies in the US started out, I would not want to discourage curious and motivated people from exploring new technologies on their own - just the opposite. Scientific research is most definitely not a members-only club; anyone who thinks that they have an interesting idea should come on down. So while I do worry about the occasional maniac misanthrope, I think I'm willing to take the chance. And besides, the only way we're going to be able to deal with the lunatics is through better technology of our own.

Comments (34) + TrackBacks (0) | Category: Biological News | Who Discovers and Why

September 30, 2010

The Hours You Put In

Email This Entry

Posted by Derek

Several people have brought this editorial (PDF) to my attention: "Where is the Passion?" It's from a professor at the Sidney Kimmel Center at Johns Hopkins, and its substance will be familiar to many people who've been in graduate school. Actually, the author's case can be summed up in a sentence: he walks the halls on nights and weekends; there aren't enough people in the labs. Maybe "kids these days!" would do the job even faster.

I'm not completely unsympathetic to this argument - but at the same time, I'm not completely unsympathetic to the people who've expressed a desire to punch the guy, either. The editorial goes on for quite a bit longer than it needs to in order to make its point, and I speak as someone who gets paid by the word for printed opinion pieces. It's written in what is probably a deliberately irritating style. But one of the lessons of the world is that annoying people whom you don't like are not necessarily wrong. What about this one?

One of the arguments here could be summed up as "Look, you people are trying to cure cancer here - don't you owe it to the patients (and the people who provided the money) to be up here working as hard as possible?" There's no way to argue with that, on its face - that's probably correct. But now we move on to the definition of "as hard as possible".

He's using hours worked as a proxy for scientific passion - an imperfect measure, to be sure. At the two extremes, there are people who are not in the lab who are thinking hard about their work, and there are people in the lab who are just hamster-wheeling and doing everything in the most brutal and stupid ways possible. But there is a correlation, especially in academia. (In many industrial settings, people are actively discouraged from doing too much lab work when they might be alone). If you're excited about your work, you're more likely to do more of it.

Unfortunately, it's hard to instill scientific excitement. And if anyone's going to do it at all, you'd think it would be the PIs of all these grad students. What surprises me is that more of them aren't falling back on the traditional grad-school substitute for passion, which is fear. The author does mention a few labs at his institute that have the all-the-time work ethic, and I'm willing to bet that good ol' anxiety and pressure have as much or more to do with their habits. And a little of that mixture is fine, actually, as long as you don't lay it on with a trowel.

So yes, I wish that there were more excited, passionate researchers around. But where I part company with this editorial is when it goes into get-off-my-lawn mode. The "You have to earn your way to a life outside the lab" attitude has always rubbed me the wrong way, and I've always thought that it probably demotivates ten people for every one that it manages to encourage. The author also spends too much time talking about the Good Old Days when people worked hard, with lousy equipment. In the dark! In the snow! And without all these newfangled kits and time-saving reagents! That makes me worry that he's confusing some issues. An idiot frantically digging a ditch with a spoon looks like a more passionate worker than someone who came through with a backhoe three hours ago, and is now doing something else.

Still, the point of all those time-saving kits is indeed to keep moving and do something else. Are people doing that? I'd rather judge the Sidney Kimmel Center by what comes out of it, rather than how late the lights burn at night. Is that the real "elephant in the room" that the editorial winds up invoking? That what the patients and donors would really be upset about is that not enough is coming out the other end of the chute? Now that's another problem entirely. . .

Update: Chemjobber has some questions.

Comments (138) + TrackBacks (0) | Category: Graduate School | Who Discovers and Why

September 22, 2010

Synthetic Chemistry: All Mined Out?

Email This Entry

Posted by Derek

In the wake of yesterday's revelation about the latest breakthrough in amide formation, one point that's come up is whether we're getting into the era of diminishing returns in finding new synthetic methods.

My opinion? We may well be - but we shouldn't have to be. It is true that we know how to do an awful lot of transformations. And I'd also subscribe to the view that we can, given no constraints of time, money, or heartbreak, synthesize basically any stable organic molecule that anyone can think up. In what we're pleased to call the real world, though, constraints of money and time (related by a similar equation to Einstein's mass-energy one) are always with us. (Heartbreak, well, that seems to be in constant supply).

So even though we can do so many things, everyone realizes that we need to be able to do them better. That applies even to amide formation. There are eleventy-dozen ways to form amides in the literature. But as some of the comments to yesterday's post show, sometimes you have to go pretty far down the list to get one that meets your needs. There is no set of conditions that is simultaneously easy, fast, cheap, nonracemizing, nontoxic, tolerant of all other functional groups, and generates a benign waste stream. Finding such a universal reaction is a fearsome goal, especially considering the number of alternatives that have already been tried.

This is why stoichiometric samarium metal is such a ridiculous idea. There are a lot of good ways to form amides. And there are a lot of lesser-known ways that might save you in tough situations. And there are lots of stupid, crappy ways. The world does not need another one of the latter. So what does it need?

Well, if you're going to stick with amide formation, you're going to have to find something closer to that ideal reaction, which won't be easy. Several other transformations are in that same category - lots of alternatives available, so something new had better be good. There are, though, plenty of other reactions that don't work so well, where improvements don't require you to approach so near perfection. A person's time might be better spent there than on trying to find the Perfect Amide Reaction, although the impact of finding the latter would probably be greater. Neither possibility excuses time spent on finding Another Lousy Amide Reaction.

And there are a lot of transformations that we can't do very well. Turn a phenol into an aromatic aldehyde in one step. Selectively epoxidize aromatic double bonds. Staple a secondary amine in where an aliphatic C-H used to be. Fluorinate at will. You can go beyond that to reactions for which you can't even think up a mechanism: go around a benzene ring, switching out carbon for nitrogen. Pyridine, pyrimidine, pyrazine. . .I have no clue how to do that, or if it's even possible. Change a given oxazole into its corresponding thiazole. Turn a methoxy back into a methyl group. And so on - we sure can't do those, and the list goes on.

Hard stuff! But there are plenty of non-science-fictional possibilities out there, too. An eye to applications beyond pure synthetic chemistry helps. Look, for example, at Barry Sharpless and the copper-catalyzed triazole formation (click chemistry). That's a nice little transformation, and there are people who probably would have just made a nice little Org Lett paper out of it if they'd discovered it themselves. But it's such a versatile way to stitch things together that it's finding uses all over the place, and the end is not in sight. The world could most definitely use more chemistry that can take off in such fashion, and surely it's out there to be found.

I realize that we had this discussion just back in August, and earlier in the summer. But it keeps coming up. Seeing someone form amides with a pile of elemental samarium brings it right back to mind.

Comments (42) + TrackBacks (0) | Category: Chemical News | Who Discovers and Why

September 16, 2010

Six Sigma in Drug Discovery? Part One - Are Chemists Too Individual?

Email This Entry

Posted by Derek

I had an interesting email about a 2009 paper in Drug Discovery Today that has some bearing on the "how much compound to submit" question, as well as several other areas. It's from a team at AstraZeneca, and covers their application of "Lean Six Sigma" to the drug discovery process. I didn't see it at the time, and the title probably would have made me skip over it even if I had.

I'll admit my biases up front: outside of its possible uses in sheer widget-production-line settings, I've tended to regard Six Sigma and its variants as a buzzword-driven cult. From what I've been able to see of it, it generates a huge number of meetings and exhortations from management, along with a blizzard of posters, slogans, and other detritus. On the other hand, it gives everyone responsible a feeling that they've Really Accomplished Something, which is what most of these managerial overhauls seem to deliver before - or in place of - anything concrete. There, I feel better already.

On the other hand, I am presumably a scientist, so I should be willing to be persuaded by evidence. And if sensible recommendations emerge, I probably shouldn't be so steamed up about the process used to arrive at them. So, what are the changes that the AZ team says that they made?

Well, first off is a realization that too much time was being spent early on in resynthesis. The group ended up recommending that every lead-optimization compound be submitted in at least a 30 to 35 mg batch. From my experience, that's definitely on the high side; a lot of people don't seem to produce that much. But according to the AZ people, it really does save you time in the long run.

A more controversial shift was in the way that chemistry teams work. Reflecting on the relationship between overall speed and the amount of work in progress, they came up with this:

Traditionally, chemists have worked alongside each other, each working on multiple target compounds independently from the other members in the team. Unless managed very carefully by the team leader, this model results in a large, and relatively invisible, amount of work in progress across a team of chemists. In order to reduce the lead time for each target, it was decided to introduce more cooperative team working, combined with actively restricting the work in progress. The key driver to achieve and sustain these two goals was the introduction of a visual planning system that enables control of work in progress and also facilitates work sharing across the team. Such a visual planning system also allows the team to keep track of ideas, arrival of starting materials, ongoing synthesis and compounds being purified. It also makes problems more readily recognizable when they do occur.

We have reflected on why chemistry teams have always been organized in such an individual-based way. We believe that a major factor lies in the education and training of chemists at universities, in particular at the doctoral and postdoctoral level, which is always focused on delivery of separate pieces of work by the students. This habit has then been maintained in the pharmaceutical industry even though team working, with chemists supporting each other in the delivery of compounds, would be beneficial and reduce synthesis lead times.

OK, that by itself is enough to run a big discussion here, so I think I'll split off the rest of the AZ ideas into another post or two. So, what do you think? Is the "You do your compounds and I'll do mine" style hurting productivity in drug research? Is the switch to something else desirable, or even possible? And if it is, has AstraZeneca really accomplished it, or do they just say that they have? (Nothing personal intended there - it's just that I've seen a lot of "Now we do everything differently!" presentations over the years. . .) After all, this paper is over a year old now, and presumably covers things that happened well before that. Is this how things really work at AZ? Let the discussion commence!

Comments (50) + TrackBacks (0) | Category: Drug Development | Life in the Drug Labs | Who Discovers and Why

September 7, 2010

Columns Outside The Doors

Email This Entry

Posted by Derek

Nature Reviews Drug Discovery has an article on behavior in large drug organizations, which they put together after interviewing a long list of current and former R&D heads. Many of the recommendations are non-startling (find ways to reward people who are willing to take calculated risks, encourage independent thinking, all those things that are easy to write down and hard to implement). One part near the end caught my eye, though:

Companies should examine what we term the 'columns outside the doors' phenomenon and the subtle impact that this form of recognition might have on entrepreneurial behaviour. Smith described this phenomenon, which occurs across the world: as start-up companies become successful, they are relocated from humble laboratories to grander buildings with columns outside their doors. Interestingly, such edifices often violate the observed inverse square relationship between communication among scientists in laboratories and the distance between these laboratories. We offer this insight more as a provocative thought than as a firm recommendation.

And what that reminded me of was a very similar observation by C. Northcote Parkinson, of Parkinson's Law fame:

The outer door, in bronze and glass, is placed centrally in a symmetrical facade. Polished shoes glide quietly over shining rubber to the glittering and silent elevator. The overpoweringly cultured receptionist will murmur with carmine lips into an ice-blue receiver. She will wave you into a chromium armchair, consoling you with a dazzling smile for any slight but inevitable delay. Looking up from a glossy magazine, you will observe how the wide corridors radiate toward departments A, B, and C. From behind closed doors will come the subdued noise of an ordered activity. A minute later and you are ankle deep in the director’s carpet, plodding sturdily toward his distant, tidy desk. Hypnotized by the chief’s unwavering stare, cowed by the Matisse hung upon his wall, you will feel that you have found real efficiency at last.

In point of fact you will have discovered nothing of the kind. It is now known that a perfection of planned layout is achieved only by institutions on the point of collapse. . .

It is by no means certain that an influential reader of this chapter could prolong the life of a dying institution merely by depriving it of its streamlined headquarters. What he can do, however, with more confidence, is to prevent any organization strangling itself at birth. Examples abound of new institutions coming into existence with a full establishment of deputy directors, consultants and executives; all these coming together in a building specially designed for their purpose. And experience proves that such an institution will die. . .

Readers may have a few examples in mind from the drug industry. (The freshly constructed labs at Sterling, for example, completed around the time that Kodak was wiping the place out, are well spoken of). So, those of you in temporary quarters, jammed into buildings that don't quite work, may not be as bad off as you might think.

Comments (25) + TrackBacks (0) | Category: Drug Industry History | Who Discovers and Why

September 1, 2010

Scientia Est Experientia

Email This Entry

Posted by Derek

Chad Orzel has a post up on the two halves of physics, and about how people tend to forget one of them: the experimentalists. I think he's right, and the problem is the glamorous coating that began to stick to theoretical physics in the early 20th century (and has never completely flaked away).

Several things led to that split: the startling predictions of relativity and quantum mechanics, borne out by experimentalists right down to the most unlikely-sounding results, for one. The Manhattan Project, a triumph of engineering that was seen, I'd say, by many in the general public as sheer theory somehow made real. The personal fame of people like Einstein, and the fame of later practitioners like Feynman and Hawking. All of this made experimental physicists seem either like 19th-century relics, or (more often) made them confused in the public's mind with theorists from the very beginning. (The only post-1900 physicist that I can think of who was both a great theorist and a great experimentalist was Enrico Fermi). Update - qualified that to take care of off-the-charts figures like Isaac Newton.

Chemistry, on the other hand, has always been an experimental science in the public mind. Say "chemist", and people think of someone in a lab coat, in a lab, surrounded by chemicals. "Theoretical chemistry" is not a phrase with any popular currency, as opposed to "theoretical physics". Even many chemists tend to think of someone who spends all their time on theory as being close to a physicist, or even a mathematician.

Some of the practitioners don't do much to clarify matters. Witness the great Lars Onsager, who really was a chemist (and won the 1968 Nobel for it). But his PhD dissertation, which had to be whipped up when Yale discovered he didn't have a doctorate, was (disconcertingly) on Mathieu functions, and Yale's math department said that they'd be glad to grant him the degree if the chemists had any problem with it. Very few people are competent to read all of Onsager's Collected Works.

I agree with Orzel, though, that experiment is the beating pulse of any scientific field. That's the worry that some people have had about physics in recent years, that it's strayed into areas where experiments cannot help. Chemistry will, I think, never have that problem. But we've got others.

Comments (18) + TrackBacks (0) | Category: Who Discovers and Why

August 19, 2010

Not The End. Not At All

Email This Entry

Posted by Derek

All right, given the way things have been going the last few years, it's easy to wonder if there's a place for medicinal chemistry at all - even if there's a place for drug discovery. There is. People are continuing to get sick, with diseases that no one can do much about, and the world would be a much better place if that weren't so. I also believe that such treatments are worth money, and that the people who devote their careers to finding them can earn a good living by doing so.

So why are fewer of us doing so? Because - and it needs no ghost come from the grave to tell us this - we're not finding as many of them as we need to, and it's costing us too much when we do. That's not sustainable, but drug discovery itself has to continue. We can't go on, we'll go on. But what we have to do is find new ways of going on.

I refuse to believe that those ways aren't out there somewhere. We do what we do so poorly, because we still understand so little - I can't accept that this is the best we're capable of. It won't take miracles, either. Think of the clinical failure rates, hovering around 90% in most therapeutic areas. If we only landed flat on our faces eight out of ten times in the clinic, we'd double the number of compounds that get through.
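The arithmetic behind that claim is simple enough to sketch (the 100-candidate pipeline below is my own illustrative number, not anything from a real dataset):

```python
def expected_approvals(candidates, failure_pct):
    # Apply a flat clinical failure rate to everything entering the clinic.
    return candidates * (100 - failure_pct) // 100

# 100 candidates at today's ~90% clinical failure rate -> 10 approvals.
print(expected_approvals(100, 90))
# Cut the failure rate to 80% ("flat on our faces eight out of ten times")
# and twice as many compounds get through, with no extra candidates needed.
print(expected_approvals(100, 80))
```

The point of the sketch: a ten-percentage-point improvement in failure rate doubles output, because the success rate goes from 10% to 20%.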

I think that we're in the worst stage of knowledge about disease and the human body. We have enough tools to get partway into the details, but not enough to see our way through to real understanding. Earlier ages were ignorant, and (for the most part) they knew it. (Lewis Thomas's The Youngest Science has a good section on medicine as his own father practiced it - he was completely honest about how little he could do for most of his patients and how much he depended on placebos, time, and hope). Now, thanks to advances in molecular and cell biology, we've begun to open a lot of locked boxes, only to find inside them. . .more locked boxes. (Sorry about all these links. For some reason literature is running away with me this morning). We get excited (justifiably!) at learning things that we never knew, uncovering systems that we never suspected, but we've been guilty (everyone) of sometimes thinking that the real, final answers must be in view. They aren't, not yet.

Pick any therapeutic area you want, and you can see this going on. Cancer: it starts out as dozens of dread diseases, unrelated. Then someone realizes that in each case, it's unregulated cell growth that's going on. The key! Well, no - because we have no idea of how unregulated cell growth occurs, nor how to shut it off. Closer inspection, years and years of closer inspection, yields an astonishing array of details. Growth factor signaling, bypassed cell-death switches and checkpoints, changes in mitotic pathways, on and on. Along the way, many of these look like The Answer, or at least one of The Answers. Think about how angiogenesis came on as a therapeutic idea - Judah Folkman really helped get across the idea that some tumors cause blood vessels to grow to them, which really was a startling thought at the time. The key! Well. . .it hasn't worked out that way, or not yet. Not all tumors do this, and not all of them totally depend on it even when they do, and the ones that do turn out to have a whole list of ways that they can do it, and then they can mutate, and then. . .

There, that's where we are right now. Right in the middle of the forest. We know enough to know that we're surrounded by trees, we know the names of many of them, we've learned a lot - but we haven't learned enough yet to come out the other side. But here's the part that gives me hope: we keep on being surprised. Huge, important things keep on being found, which to me means that there are more of them out there that we haven't found yet. RNA! There's one that's happened well in the middle of my own professional career. When I started in this business, no one had any clue about RNA interference, double-stranded RNAs, microRNAs, none of it. All of it was going on without anyone being aware, intricate and important stuff, and we never knew. How many more things like that are waiting to be uncovered?

Plenty, is my guess. We keep pulling back veils, but the number of veils is finite. We're still ignorant, but we're not going to remain ignorant. We will eventually know the truth, and it'll do what the truth has long been promised to do: make us free.

But we don't have to wait until we know everything. As I said above, just knowing a bit more than we do now has to help. A little more ability to understand toxicology, a better plan to attack protein-protein targets, more confidence in what nuclear receptors can do, another insight into bacterial virulence, viral entry, cell-cycle signaling, glucose transport, lipid handling, serotonin second messengers, bone remodeling, protein phosphorylation, immune response, GPCR mechanisms, transcription factors, cellular senescence, ion channels. . .I could go on. So could you. The list is long, really long, and any good news anywhere on it gives us something else to work on, and something new to try.

So this is a rough time in the drug industry. It really is. But these aren't death throes. They're growing pains. We just have to survive them, either way.

Comments (72) + TrackBacks (0) | Category: Drug Industry History | Who Discovers and Why

August 13, 2010

Alzheimer's Markers and Collaboration

Email This Entry

Posted by Derek

I'm of two minds on this New York Times article on Alzheimer's research. It details some recent progress on biomarkers for the disease, and that work does look to be useful. A lot of people have proposed diagnostics and markers for Alzheimer's and its progression over the years, but none of them have really panned out. If these do, that's something we haven't had before.

But my first problem is something we were talking about here the other day. Biomarkers are not necessarily going to help you in drug development, not unless they're very well validated indeed. We really do need them in Alzheimer's research, because the disease progression is so slow. And this effort is really the only way to find such things - a good-sized patient sample, followed over many years. But unfortunately, 800 people (divided out into different patient populations) may or may not be enough, statistically. We're now going to have to take the potential assays and markers that this work has brought up and see how well they work on larger populations - that's the only way that they'll be solid enough to commit a clinical trial to them. Both the companies developing drugs and the regulatory agencies will have to see convincing numbers.

That general biomarker problem is something we really can't do anything about; the only cures are time, effort, money, and statistical power. So it's not a problem peculiar to Alzheimer's (although that's a tough proving ground), or to this collaborative effort. But now we come to the collaborative effort part. . .overall, I think that these sorts of things are good. (This gets back to the discussions about open-source drug discovery we've been having here). Bigger problems need sheer manpower, and smaller ones can always benefit from other sets of eyes on them.

The way that this Alzheimer's work puts all the data out into the open actually helps with that latter effect. All sorts of people can dig through the data set, try out their hypotheses, and see what they get. But I think it's important to realize that this is where the benefit comes from. What I don't want is for people to come away thinking that the answer is that we need One Big Centralized Effort to solve these things.

My problem with the OBCE model, if I can give it an acronym, is that it tends to cut back on the number of ideas and hypotheses advanced. Big teams under one management structure don't tend to work out well when they're split up all over the place. There's managerial (and psychological) pressure, from all directions, to get everyone on the same idea, to really get in and push that one forward with all the resources. This is why I worry about all the consolidation in the drug industry: fewer different approaches get an airing when it's all under the roof of one big company.

So this Alzheimer's work is just the sort of collaboration I can admire: working on a big problem, sharing the data, and leaving things open so that everyone with an idea can have a crack at it. I just hope that people don't get the wrong idea.

Comments (3) + TrackBacks (0) | Category: Alzheimer's Disease | Clinical Trials | Press Coverage | Who Discovers and Why

August 9, 2010

Maybe We Should Make It More of a Game

Email This Entry

Posted by Derek

David Baker's lab at the University of Washington has been working on several approaches to protein structure problems. I mentioned Rosetta@home here, and now the team has published an interesting paper on another one of their efforts, FoldIt.

That one, instead of being a large-scale passive computation effort, is more of an active process - in fact, it's active enough that it's designed as a game:

We hypothesized that human spatial reasoning could improve both the sampling of conformational space and the determination of when to pursue suboptimal conformations if the stochastic elements of the search were replaced with human decision making while retaining the deterministic Rosetta algorithms as user tools. We developed a multiplayer online game, Foldit, with the goal of producing accurate protein structure models through gameplay. Improperly folded protein conformations are posted online as puzzles for a fixed amount of time, during which players interactively reshape them in the direction they believe will lead to the highest score (the negative of the Rosetta energy). The player’s current status is shown, along with a leader board of other players, and groups of players working together, competing in the same puzzle.

So how's it working out? Pretty well, actually. It turns out that human players are willing to do more extensive rearrangements to the protein chains in the quest for lower energies than computational algorithms are. They're also better at evaluating which positions to start from. Both of these remind me of the differences between human chess play and machine play, as I understand them, and probably for quite similar reasons. Baker's team is trying to adapt the automated software to use some of the human-style approaches, when feasible.
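The human-versus-stochastic contrast can be made concrete with a toy example. The Python sketch below is purely illustrative: a one-dimensional stand-in for an energy landscape (real conformational search is vastly higher-dimensional), with a Metropolis-style acceptance rule playing the part of the "stochastic elements of the search" that Foldit hands over to the player. All the function names and numbers here are invented for the illustration, not taken from Rosetta.

```python
import math
import random

def toy_energy(x):
    # A toy one-dimensional "energy landscape" with two minima; it stands in
    # for the Rosetta energy of a conformation. The deeper minimum sits near
    # x = -1, behind a barrier at x = 0.
    return (x ** 2 - 1) ** 2 + 0.3 * x

def metropolis_search(steps=100000, temperature=0.4, seed=0):
    # The stochastic search: propose random moves, and sometimes accept a
    # *worse* conformation so the walk can climb out of local minima. This
    # accept/reject judgment is what Foldit hands to the human player.
    rng = random.Random(seed)
    x = 2.0                         # start from a badly "folded" state
    energy = toy_energy(x)
    best_x, best_energy = x, energy
    for _ in range(steps):
        x_new = x + rng.gauss(0.0, 0.1)
        e_new = toy_energy(x_new)
        # Metropolis criterion: always accept downhill, sometimes uphill.
        if e_new < energy or rng.random() < math.exp((energy - e_new) / temperature):
            x, energy = x_new, e_new
            if energy < best_energy:
                best_x, best_energy = x, energy
    return best_x, best_energy

best_x, best_energy = metropolis_search()
score = -best_energy    # Foldit's score is the negative of the energy
```

The uphill-acceptance line is exactly where the paper says humans outperform the algorithm: deciding when a temporarily worse conformation is worth pursuing, rather than leaving it to a coin flip.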

There are several dozen participants who clearly seem to have done better in finding low-energy structures than the rest of the crowd. Interestingly, they're mostly not employed in the field, with "Business/Financial/Legal" making up the largest self-declared group in a wide range of fairly evenly distributed categories. Compared to the "everyone who's played" set, the biggest difference is that there are far fewer students in the high-end group, proportionally. That group of better problem solvers also tends to be slightly more female (although both groups are still mostly men), definitely older (that loss of students again), and less well-stocked with college graduates and PhDs. Make of that what you will.

Their conclusion is worth thinking about, too:

The solution of challenging structure prediction problems by Foldit players demonstrates the considerable potential of a hybrid human–computer optimization framework in the form of a massively multiplayer game. The approach should be readily extendable to related problems, such as protein design and other scientific domains where human three-dimensional structural problem solving can be used. Our results indicate that scientific advancement is possible if even a small fraction of the energy that goes into playing computer games can be channelled into scientific discovery.

That's crossed my mind, too. In my more pessimistic moments, I've imagined the human race gradually entertaining itself to death, or at least to stasis, as our options for doing so become more and more compelling. (Reading Infinite Jest a few years ago probably exacerbated such thinking). Perhaps this is one way out of that problem. I'm not sure that it's possible to make a game compelling enough when it's hooked up to some sort of useful gear train, but it's worth a try.

Comments (16) + TrackBacks (0) | Category: Biological News | In Silico | Who Discovers and Why

August 6, 2010

Organic Chemistry: A Lack of Challenges?

Posted by Derek

I had an interesting email in response to my post on returning from the SciFoo meeting. I have to say, there weren't too many chemists at that one - not that it's a representative slice of science, to be sure. (Theoretical physicists and computer science people were definitely over-represented, although they were fun to talk to).

But perhaps there's another reason? I'll let my correspondent take it from here:

I worry a lot about organic chemistry, about the state of the discipline. I worry about the relative lack of grand challenges, and that most academic work is highly incremental and, worse, almost entirely the result of screening rather than design. There is still so little predictive power (at least in academia) in drug or catalyst discovery. I have a theory that the reason we're so brutal with each other in paper and grant refereeing is because we're essentially dogs under the table fighting for scraps.

There are big exceptions, which make me excited to be a scientist. There's usually something in Nature Chemistry that has the wow factor, for example. They're just so rare. . .

He went on to point out that other fields have results that can wow a general audience more easily, which can make it harder for even excellent work in chemistry to get as high a profile. As for that point, there may be something to it. High-energy physics and cosmology would, you'd think, be abstract enough to drive away the crowds, but they touch on such near-theological questions that interest remains high. (Why do you think that the press persists in calling the Higgs boson the "God particle"?) And biology, for its part, always can call on the familiarity of everyone with living creatures, possible relevance to medical advances, and the sheer fame of DNA. All these fields have lower-profile areas, or ones that are harder to explain, but they always have the big marquee topics to bring in the crowds.

Chemistry's big period for that sort of thing was. . .well, quite a while ago. We're at one remove from both the Big Overarching Questions at the physics end and the Medical Breakthroughs at the biology end, so our big results tend to get noticed according to how they relate to something else. If (for example) chemists achieved some breakthrough in artificial photosynthesis, it would probably be seen by the public as either physics or biology, depending on the inorganic/organic proportions involved.

But what about the first point: are we really running out of big questions to answer in this field? It's easy to think so (and sometimes I do myself), but I'm not so sure. Off the top of my head, I can think of several gigantic advances that chemistry could help to deliver (and hasn't yet). Room-temperature organic superconductors. That artificial photosynthesis I mentioned, to turn the world's excess carbon dioxide into organic feedstocks. Industrial spider-silk production. Small molecules to slow the aging process. A cheap way to lay down diamond layers on surfaces. And I haven't even mentioned the whole nanotechnology field, which is going to have to depend on plenty of chemistry if it's ever to work at all.

Now, it's true that looking through a typical chemistry journal, you will not necessarily find much on any of these topics, or much to make your pulse race at all. But that's true of the journals in even the most exciting fields. Most stuff is incremental, even when it's worthwhile, and not all of it is even that. And it's also true that not all of the big chemistry challenges out there are going to need organic synthesis to solve them. But many will, and we should be encouraging the people who feel up to taking them on to do so. Not all of them do. . .

Comments (56) + TrackBacks (0) | Category: Chemical News | Who Discovers and Why

August 2, 2010


Posted by Derek

I'm back from the Sci Foo meeting out at Google's HQ, having taken the jolly red-eye flight from San Jose. And since I'll doubtless be increasingly incoherent as the day goes on, I thought I'd better go ahead and post now.

There was a wide (and strange) variety of people at this meeting - tilted towards comp sci and theoretical physics, I'd say, with a fair number of biologists. Chemists were thin on the ground in comparison. But this wasn't a chemistry meeting by any means. It was more of a chance to meet a lot of people who are each doing very interesting work in their fields, including some who are probably doing the absolute most interesting work in their fields.

And there's something that I noticed about these folks. People working at that level, almost all of them, have something in common: they're extremely happy to be doing what they do. Listening to Giovanni Amelino-Camelia and Lee Smolin talk about quantum gravity theories (and the data that are now coming in from gamma-ray bursters which could start sorting these things out), you could see that they both feel as if they're doing what they're here on Earth to do. "It's like Christmas", Amelino-Camelia told me, grinning, when I asked him about the GRB data. Pete Worden sounds the same way when he talks about wanting to explore caves on Mars, Yves Rossy when he talks about strapping on a jet-powered wing to his back, and so on. There's nothing they'd rather be doing.

I have some days like that, but I should try to have more. The conference was a good reminder to try to work at the limits of your capabilities, to take on the hardest problems that you can stand to face. It's worth it. You can see it in the faces of people who live that way. Melville was right - "Genius, round the world, stands hand in hand, and one shock of recognition runs the whole circle round."

Comments (18) + TrackBacks (0) | Category: Who Discovers and Why

July 8, 2010

Why Close One Research Site Over Another?

Posted by Derek

There's been so much traffic here today that it's actually been a bit difficult to get in to write another post. And unfortunately it's all due to the Merck announcement. Some sites that have a long and distinguished history of drug discovery are set to be closed up as if they were so many redundant discount store locations.

And that shows you that no one in the business is thinking about these things - how, hey, this site has really done a lot for us (or more than it should have, given its size), and maybe we should hold on to it. As past closings at other companies have shown, that's probably one of the last factors on the list, and most of the time it probably doesn't even come in at all.

What matters, I'd say, is what you'd think matters: the sheer accounting cost. How much does it cost to keep Facility X open? How much would it cost to close it? And how much would we save, compared to what we're giving up? It's that last part where the real arguing starts, because there are many people (I'm one, sometimes) that say that research cultures vary from place to place, and that some sites just seem to have a better history of discovery than others. They're not interchangeable.

But it's very hard to make that argument. This sort of thing doesn't show up in the financial statements, and it's hard to quantify. You also can't count on it, either, because some places will have a good run of many years, and then (for some reason) go flat. Despite what consultants will tell you, I don't think that anyone has figured out what exactly makes a research culture work. That makes fixing a broken one a tall order, and it also makes it very hard to raise one in the way that you want it. It's a combination of the individual people, their managers, the projects they get to work on, the experience that they have (or get) with success. . .all sorts of hard-to-deal-with variables.

It's not just in research institutes, either: why did Hungary produce its ferocious run of world-class scientists during the mid-20th century? Who wouldn't want to reproduce such a thing, or remake the old Bell Labs, or whatever your favorite success story might be? The fact that no one seems to be able to do this on demand argues strongly that no one knows how. And if no one knows how, no one's going to decide based on it, either. Sad.

Comments (52) + TrackBacks (0) | Category: Business and Markets | Who Discovers and Why

July 1, 2010

GSK's Biotechy World

Posted by Derek

The Wall Street Journal is out today with a big story on GlaxoSmithKline's current research structure. The diagnosis seems pretty accurate:

Glaxo's experiment is a response to one of the industry's most pressing problems: the failure of gigantic research staffs, formed through a series of mega mergers, to discover new drugs. The mergers helped companies amass potent sales-and-marketing arms, but saddled their R&D with innovation-stifling bureaucracy. . .

The company's current strategy is to break things down into even smaller teams (often with their own names and logos) and to try to apply small-company incentives to them. That goes for both the positive and negative incentives:

The scientists in Glaxo's new biotech-esque groups know the clock is ticking. Called discovery performance units, or DPUs, the groups are about halfway through the three-year budgets they were given in 2008. Glaxo has made it clear that if the team members don't produce, they could get laid off. . .(the company also) says it's trying to get closer to the financial rewards of biotech. In some cases, it is setting aside "a pool of money" for scientists involved in a certain project. . .each time their experimental drug clears a certain hurdle, they get part of the money. . .

Of course, as the article also makes clear, the company has been through supposed newer-and-better re-orgs before. And that included schemes to break the company's research into more independent units. Those "Centers of Excellence in Drug Discovery" were supposed to be the last word eight or ten years ago, but apparently that didn't quite work out. The current philosophy seems to be that the idea didn't go far enough.

True or not? History doesn't give a person much reason for optimism when a large company says that it's going to get more nimble and less bureaucratic. You can make a very good living printing up the posters and running the training seminars about that stuff, but actually getting it to work has been. . .well, has anyone gotten it to work? Andrew Witty, the company's CEO, says in the article that he doesn't see any contradiction in having "hugely successful entrepreneurial innovation" inside a big company, but real examples of that are thin on the ground - especially compared to the number of examples of such innovation being fought to the ground when it attempts to spring up.

That's not to say that this approach can't improve things at GSK. I think it's bound to be a good thing to turn people loose to make more of their own decisions, without feeling as if there's someone hovering over their shoulder all the time. But I don't know if it's going to be the revolution that they're hoping for (or the one that they might need).

Comments (62) + TrackBacks (0) | Category: Business and Markets | Drug Industry History | Who Discovers and Why

June 25, 2010

What To Do With The Not-Quite-Worthless

Posted by Derek

Yesterday morning I went on and on about the low quality of much of what gets published in the scientific literature. And indeed, the low end is very likely of no use to anyone, except (apparently) the people publishing it. But what to do with the rungs above that?

For organic chemistry, those are occupied by papers that report new compounds of little interest to anyone. But you never know - they might be worth someone else's time eventually. It's unlikely that any of these things will be the hinge on which a mighty question turns, but knowing that they've been made (and how), and knowing what their spectra and properties are could save someone time down the line when they're doing something more useful. These are real bricks in the huge construction of scientific knowledge, and while they're not worth much, it's more than zero. That's the value I assign to the hunks of mud that some people offer instead, or the things that look like real bricks but turn out to be made out of brick, yes, but about one millimeter thick and completely hollow.

So what to do with work that's mostly reference data for the future? It shouldn't have to appear in physical print, you'd think. How about the peer-reviewed journal part? Well, peer review is not magic. As it stands, that sort of information is the least-reviewed part of most papers. If someone tells you that they've made Compound X and Compound Y, and the synthesis isn't obviously crazy, you tend to take their word for it. It's a rare reviewer that gets all the way down to the NMR spectra in the supplementary material, that's for sure. And if one does, and the NMR spectra look reasonably believable, well, what else can you do? Even so, every working chemist has dealt with literature whose procedures Just Don't Work, and all those papers passed some sort of editorial review process at some point.

No, peer review is not going to do much to improve the quality of archival data. If someone really wants to fill up the low-level bins with junk, there's not much stopping them. You could sit down and draw out a bunch of stuff no one's ever made before, come up with plausible paper syntheses of all of it, use software to predict reasonable NMR spectra (which you might want to jitter around a bit to cover your tracks), and just flat-out fake the mass spec and elemental analyses. Presto, another paper that no one will ever read, until eventually someone has a reason to make similar compounds and curses your name in the distant future. The problem is, such papers will do you no real good, since they'll appear in the crappiest journals and pick up no citations from anyone.

Perhaps there should be a way to dump chemical data directly into some archives, the way X-ray data goes into the Protein Data Bank. That wouldn't count for much, but it would capture things for future use. Having it not count much would decrease the incentive for anyone to fill it full of fakery, too, since there would be even less point than usual. And before anyone objects to having a big pile of non-peer-reviewed chemical data like this, keep in mind that we already have one: it's called the patent literature, and it can be quite worthwhile. Although not always.
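To make the archive idea a little more concrete, here's a minimal Python sketch of what a directly deposited record might look like. Every field name and value below is invented for illustration - no such deposition standard exists, and the structure shown is just one plausible shape for it.

```python
import json

# A hypothetical deposited record: structure, how it was made, and the
# characterization data that might save someone time later. The compound
# (ethanol) and all field names are illustrative only.
record = {
    "structure_smiles": "CCO",            # machine-readable structure
    "synthesis": "NaBH4 reduction of acetaldehyde",  # procedure, or a reference to one
    "nmr_1h_ppm": [1.23, 3.69],           # characterization data
    "ms_mz_observed": 46.04,              # observed mass (illustrative value)
    "depositor": "some-lab-identifier",
}

REQUIRED_FIELDS = {"structure_smiles", "synthesis"}

def accept_deposit(rec):
    # Minimal gatekeeping: no peer review at all, just a check that the
    # minimum useful information is present before archiving.
    return REQUIRED_FIELDS.issubset(rec)

archived = json.dumps(record) if accept_deposit(record) else None
```

The design point is that the bar is deliberately low - the archive captures data for the future rather than certifying it, which is exactly why it would carry little career credit and little incentive for fakery.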

Comments (31) + TrackBacks (0) | Category: The Scientific Literature | Who Discovers and Why

June 24, 2010

All Those Worthless Papers

Posted by Derek

That's what this article at the Chronicle of Higher Education could be called. Instead it's headlined "We Must Stop the Avalanche of Low-Quality Research". Which still gets the point across. Here you have it:

While brilliant and progressive research continues apace here and there, the amount of redundant, inconsequential, and outright poor research has swelled in recent decades, filling countless pages in journals and monographs. Consider this tally from Science two decades ago: Only 45 percent of the articles published in the 4,500 top scientific journals were cited within the first five years after publication. In recent years, the figure seems to have dropped further. In a 2009 article in Online Information Review, Péter Jacsó found that 40.6 percent of the articles published in the top science and social-science journals (the figures do not include the humanities) were cited in the period 2002 to 2006.

As a result, instead of contributing to knowledge in various disciplines, the increasing number of low-cited publications only adds to the bulk of words and numbers to be reviewed. Even if read, many articles that are not cited by anyone would seem to contain little useful information. . .

If anything, this underestimates things. Right next to the never-cited papers are the grievously undercited ones, most of whose referrals come courtesy of later papers published by the same damn lab. One rung further out of the pit are a few mutual admiration societies, where a few people cite each other, but no one else cares very much. And then, finally, you reach a level that has some apparent scientific oxygen in it.

The authors of this article are mostly concerned about the effect this has on academia, since all these papers have to be reviewed by somebody. Meanwhile, libraries find themselves straining to subscribe to all the journals, and working scientists find the literature harder and harder to effectively cover. So why do all these papers get written? One hardly has to ask:

The surest guarantee of integrity, peer review, falls under a debilitating crush of findings, for peer review can handle only so much material without breaking down. More isn't better. At some point, quality gives way to quantity.

Academic publication has passed that point in most, if not all, disciplines—in some fields by a long shot. For example, Physica A publishes some 3,000 pages each year. Why? Senior physics professors have well-financed labs with five to 10 Ph.D.-student researchers. Since the latter increasingly need more publications to compete for academic jobs, the number of published pages keeps climbing. . .

We can also lay off some blame onto the scientific publishers, who have responded to market conditions by starting new journals as quickly as they can manage to launch them. And while there have been good quality journals launched in the past few years, there have been a bunch of losers, too - and never forget, each good new journal soaks up more of the worthwhile papers, lifting the ever-expanding pool of mediocre stuff (and worse) up behind it by capillary action. You have to fill those pages somehow!

If this problem is driven largely by academia, that's where the solution will have to come from, too. The authors suggest several fixes: (1) limit job applications and tenure reviews to the top five or six papers that a person has to offer. (2) Prorate publication records by the quality of the journals that the papers appeared in. (3) Adopt length restrictions in printed journals, with the rest of the information to be had digitally.

I don't think that those are bad ideas at all - but the problem is, they're already more or less in effect. People should already know which journals are the better ones, and look askance at a publication record full of barking, arf-ing papers from the dog pound. Already, the best papers on a person's list count the most. And as for the size of printed journals, well. . .there are some journals that I read all the time whose printed versions I haven't seen in years.

No, these ideas are worthy, but they don't get to the real problem. It's not like all the crappy papers are coming from younger faculty who are bucking for tenure, you know. Plenty more are emitted by well-entrenched groups who just generate things that no one ever really wants to read. I think we've made it too possible for people to have whole scientific careers of complete mediocrity. I mean, what do you do, as a chemist, when you see another paper where someone found a reagent to dehydrate a primary amide to a nitrile? Did you read it? Of course not. Will you ever come back to it and use it? Not too likely, considering that there are eight hundred and sixty reagents that will already do that for you. We get complaints all the time about me-too drugs, but the me-too reaction problem is a real beast.

Now, I realize that by using the word "mediocrity" I'm in danger of confusing the issue. The abilities of scientists are distributed across a wide range - I doubt if it's a true normal distribution, but there are certainly people who are better and worse at this job. But I'm complaining on the absolute scale, rather than the relative scale. I know that there's always going to be a middle mass of scientific papers, from a middle mass of scientists: I just wish that the whole literature was of higher quality overall. A chunk of what now goes into the mid-tier journals should really be filling up the bottom-tier ones, and most of the stuff that goes into those shouldn't be getting done in the first place.

I suppose what bothers me is the number of people who aren't working up to their potential (although I don't always have the best position to argue that from myself!). Too many academic groups seem to me to work on problems that are beneath them. I know that limits in money and facilities keep some people from working on interesting things, but that's rare, compared to the number who'd just plain rather do something more predictable. And write predictable papers about it. Which no one reads.

Comments (41) + TrackBacks (0) | Category: The Scientific Literature | Who Discovers and Why

June 1, 2010

The Truth Shall Make Ye. . .Unhappy?

Posted by Derek

Here's an article on a topic that's come up around here before: psychological and cognitive barriers to discovering a new drug. These include confirmation bias, poor risk assessment, an over-reliance on recent experience, etc. The tricky part is that some of these cognitive mistakes might actually be reasonable adaptations to the problems of drug research itself:

The history of science and medicine is full of wrong ideas that prevailed for many years, despite mounting evidence to the contrary: phlogiston, the four humours, spontaneous generation of life and inheritance of acquired traits. These are examples of ‘confirmation bias’, which means that ‘we tend to subconsciously decide what to do before figuring out why we want to do it’ and seek evidence that tends to confirm rather than refute our initial judgment. . . In medicine, such ‘bad science’ can cost many lives; therefore, major institutions and professions have procedures and rules – notably peer review – that seek to protect against the pernicious effects of excessive self-confidence (setting aside, here, the issue of blatant fraud).

In our direct experience, discovery scientists admit that false optimism helps keep them functioning despite the recognized reality that most of their projects fail. This seems an essential trait of scientific heroes of the past yet, paradoxically, might count as a cognitive error in a business setting:

Now there's one that I hadn't considered, although I have thought a lot over the years about the differences between the business side of the industry and the discovery side. I'm not sure that "false optimism" is what keeps me going, though - I try to realize that most projects fail, but I try to make sure that they didn't fail because of something that I did (or didn't do) myself. The authors, though, quote from another study of the same phenomenon, which raises an interesting question:

Given the high cost of mistakes, it might appear obvious that a rational organization should want to base its decisions on unbiased odds, rather than on predictions painted in shades of rose. However. . .optimistic self-delusion is a diagnostic indication of mental health and well-being. . .The benefits of unrealistic optimism in increasing persistence in the face of difficulty have been documented. . . The observation that realism can be pathological and self-defeating raises troubling questions for the management of information and risk in organizations. Surely, no one would want to be governed entirely by wishful fantasies, but is there a point at which truth becomes destructive and doubt self-fulfilling?’

And that brings up a phrase that I use often, that it's easy to sit in the back of a conference room and tell people that their ideas aren't going to work. And you're right well over 90% of the time if you do that, but to what end? I'm going to have to think about this idea of "destructive truth" a bit more, but I wanted to put it out there for comments. I'll return to the whole cognitive bias problem as well, because there's more to it than just this. . .

Comments (28) + TrackBacks (0) | Category: Who Discovers and Why

May 28, 2010

Scientific Discovery: Getting Older (And Less Lonely)

Posted by Derek

The NBER (National Bureau of Economic Research) has been looking at the patterns of scientific publication and grant awards in the US, and has noticed some interesting trends. According to Inside Higher Ed, the study found (first off) that scientific publications are increasing at about 5.5% a year, and the report suggests that this means any individual who keeps reading at the same rate is covering a correspondingly shrinking fraction of the current literature.

I'm not so sure about that. While there are indeed more papers every year, the marginal utility of each new paper isn't necessarily very high - if I can switch into econ-speak myself. That's especially true if increased numbers of articles are due to new journals that end up (directly or indirectly) pulling things into the literature that wouldn't have even been published otherwise, simply because journals need to fill their pages. That said, the volume of interesting science done (and to be read about) each year is still increasing - I certainly can't deny that - but it would be a mistake to assume that "Scientific Journal Publications" are some sort of homogeneous good that can be measured as such.
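For a sense of scale, that 5.5% figure compounds quickly. A back-of-the-envelope calculation (Python, using nothing but the growth rate quoted above):

```python
import math

# At 5.5% annual growth, how long until the yearly output of papers doubles?
growth_rate = 0.055
doubling_time = math.log(2) / math.log(1 + growth_rate)   # roughly 13 years

# And if you read a fixed number of papers per year, the fraction of each
# year's output you cover shrinks by the same compounding factor:
coverage_after_10_years = 1 / (1 + growth_rate) ** 10     # a bit under 60%
```

So the literature doubles roughly every thirteen years, and a reader with a fixed paper budget covers only a little more than half as much of it a decade later - whatever one thinks about the marginal utility of the extra papers.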

Two other trends that were spotted make more sense to me: one is that the average number of co-authors is rising steadily. You wonder if that's just due to those physics papers that have six hundred people on them, but it seems to be the case across all disciplines. There are fewer and fewer solo scientific publications, which confirms my own experience looking across the chemistry literature.

Another trend is that fewer highly-cited big-news papers are coming from the younger end of the age distribution. The report says that "Peak productivity has increased by about 8 years, with the effect coming entirely from a collapse in productivity at young ages." The average ages for discoveries that later went on to win Nobels has been going up, as has the average age at which a scientist appears on their first patent. And that's worth thinking about - is it that our educational setup in the sciences sends people out into the fray at later and later ages? Or that the disciplines themselves have gotten more complicated, requiring a longer period before a substantial contribution can be made?

I think that a big factor is that younger scientists probably feel insecure working on high-risk high-reward projects. In academia, they're fighting for grant money and tenure, and I think that many people in that situation are careful about balancing "exciting and groundbreaking" against "likely to produce solid, publishable results". And industrial scientists tend to need more experience before they can make a big discovery as well, since the more applied fields have a larger body of specific knowledge built up.

The report contrasts these trends against the long-held image of the brave young researcher pushing toward a big discovery. I'd argue that the Nobel itself suffers from this problem, with its strict three-names-only rule. It's my impression that the committees that decide the prize have been having a harder and harder time of it over the years trying to find a way to stick to that. It has (inevitably) led to a number of deserving people getting left out - as well as a number of deserving discoveries that couldn't be narrowed down well enough. (Organic chemistry has the metal-catalyzed couplings as an example).

Finding ways to recognize large (often interdisciplinary) teams would be one step. Another change that might need to be made could include easing up a bit on the younger grant recipients, realizing that it's going to be increasingly difficult for them to hit things out of the park at that point in their careers. Could that also allow some of the better ones to work in tougher areas, with less fear of the consequences of failure?

Comments (22) + TrackBacks (0) | Category: Who Discovers and Why

May 26, 2010

India's Research Culture

Posted by Derek

R. A. Mashelkar of India's National Chemical Laboratory has a provocative opinion piece in Science on the research culture of his country. And it brings up a point that I don't think anyone could deny: that the attitudes of a society can affect (for better or worse) its ability to participate in scientific research:

Nobel Laureate Richard Feynman believed that creative pursuit in science requires irreverence. Sadly, this spirit is missing from Indian science today. As other nations pursue more innovative approaches to solving problems, India must free itself from a traditional attitude that condemns irreverence, so that it too can address local and global challenges and nurture future leaders in science. But how can the spirit of adventurism come to Indian science?

The situation has deep roots in Indian culture and tradition. The ancient Sanskrit saying "baba vakyam pramanam" means "the words of the elders are the ultimate truth," thus condemning the type of irreverence inspired by the persistent questioning that is necessary for science. The Indian educational system, which is textbook-centered rather than student-centered, discourages inquisitive attitudes at an early age. Rigid unimaginative curricula and examinations based on single correct answers further cement intolerance for creativity. And the bureaucracy inherited from the time of British rule over-rides meritocracy.

He points out that India's greatest scientific names (and there are some heavy hitters) got there in spite of such pressures, not because of them. It's not like this issue hasn't been aired out in India before; I've had Indian colleagues say much the same things to me. And these attitudes can be found in many countries, of course - you can find them here in the US. Mediocre researchers the world over keep their heads down, avoid projects that make their bosses (or themselves) nervous, and keep within the bounds of the literature.

The key, though, is to make sure that people who want to try risky ideas are able to do it. If they're inhibited by pressure from their bosses or their peers, the productivity of a whole country's science can suffer. Not everyone is capable of going out on the edge (or willing to), but it's crucial that the people who can and will are able to do so. That's where we've excelled in the US, where we have an entire infrastructure (the venture capital system) for funding things that are probably not going to work. It's not like we're perfect at this process, but we're better than many others.

But India appears to be moving in the right direction - Mashelkar goes into some details on the way that scientific education is changing. The next step will be to give risk-tolerant investors ways to back the good ideas that emerge. That's a tough one, and a lot of countries have been unable to quite get there. Sometimes the needed investors aren't there, or aren't quite well-capitalized (or willing) enough, or there aren't enough good ideas floating around, or there are no good ways to get the ideas and the money together. Personally, I think India's going to get there, and that it'll be a good thing for the country, and for the rest of the world.

Comments (22) + TrackBacks (0) | Category: Business and Markets | Who Discovers and Why

April 12, 2010

Prediction Markets, Idea Sharing, and So On

Email This Entry

Posted by Derek

Everyone who works for a large organization has to wonder about the amount of expertise in it that never gets used. Someone else in the company may have had to solve the exact same problem you're working on, and you may well never know, because there's no way to realize it or track down a person who could help. So there are all sorts of schemes that have been tried to make these connections, but I'm not sure if any of them actually work.

The ones I've seen are e-mail lists (for querying members of the rest of the department), attempts at occasional general-problem-solving meetings, various collaborative software packages, intra-company Wikipedia-type databases, and internal prediction markets. That covers a wide range, probably because these tools are being used against a pretty heterogeneous set of problems.

There's the specific-answer sort of query, such as "Has anyone taken this intermediate that we're all using and done X to it?". A broader form of this one is along the lines of "Does anyone know how to reduce an A group in the presence of a B?". Then there are historical questions, such as "Whatever happened to these Z kinds of compounds? Why did the team that was using them back in 1990 stop working on them?"

These, at least, probably only need to go to a certain list of people. Tougher are the problems where insights might come from anywhere in the company, and this is where the advertising copy for the software packages starts to wax lyrical. Then you have the wisdom-of-crowds approach, where you're not looking for a specific answer to a specific problem, but are interested in the opinions of a wide range of people on some question, hoping to find out more about it than you'd realize on your own. That's where the prediction market stuff comes in.

And I'm interested in the latter idea, although I can see some problems with it. For one thing, I'm pretty sure that you'd want to have anonymity as an option. If the Big Honcho proposes an idea, how many people will vote it down under their own names? (Although any Big Honcho should realize that some of the most valuable feedback they can get is when their own name and position aren't yet attached to a proposal). Then you have the whole participation problem - people have to feel that it's somehow worth their time to use these things. Depending on free-floating altruism is, in my experience, not going to work out very well (and I'm not so sure how it worked out for Blanche DuBois in the end, either).

And with any of these systems, you have to be sure that you're asking the right questions, in the right format, to the right people. A prediction-market question inside a company on the lines of "When do you think we're going to file the NDA for Compound X?" doesn't seem all that useful, because there are only a few people really in a position to know (and they're not supposed to talk about it). But it would be interesting to put up the best screening hits for a nascent program, or the three or four most advanced compounds for a later-stage one, and ask the whole company which ones they think are best. You'd want to track the results, though, to see if your crowd has any particular wisdom or not.
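One simple way to do that tracking, for what it's worth, is a Brier score: compare the crowd's forecast probability for each compound against what actually happened. Here's a minimal sketch; the compound names and probabilities are invented for illustration, not taken from any real program.

```python
# Hypothetical sketch: scoring an internal crowd's forecasts after the fact.
# A Brier score is just the squared error between a forecast probability
# and the eventual 0/1 outcome; lower is better.

def brier(prob, outcome):
    """Squared error between a forecast probability and a 0/1 outcome."""
    return (prob - outcome) ** 2

# Crowd's average probability that each hit would pan out, vs. what happened.
# (Invented numbers for illustration.)
forecasts = {"hit_A": 0.70, "hit_B": 0.20, "hit_C": 0.55}
outcomes = {"hit_A": 1, "hit_B": 0, "hit_C": 0}

scores = {name: brier(p, outcomes[name]) for name, p in forecasts.items()}
avg = sum(scores.values()) / len(scores)
print(f"average Brier score: {avg:.3f}")  # lower is better; 0.25 = coin-flip forecasting
```

A crowd that consistently beats the 0.25 you'd get from always guessing 50% is telling you something; one that doesn't is just noise, and you've learned that, too.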

I think that the line you're trying to walk with such systems is the one between solidifying groupthink and getting past it. To that end, I'd recommend (in many cases) that people not be able to see how the voting is going while it's in progress, for fear that some participants would just jump on one bandwagon or another to save time. But I have to think that if, say, Pfizer had asked more people about the prospects for Exubera (their catastrophic inhaled-insulin product), that it might have given them the ghost of a clue that there was a chance for failure. (Or maybe not!)

Of course, now that I think about it, Pfizer has one of the more widely publicized internal idea-and-prediction-sharing efforts in the industry. (Lilly is also known for talking about this sort of thing). And I'd be interested in people who have actually experienced these, or those in other shops. Have you ever gotten any use out of these things? Or is it just something that sounds good on paper?

Comments (19) + TrackBacks (0) | Category: Business and Markets | Lowe's Laws of the Lab | Who Discovers and Why

March 12, 2010

Garage Biotech

Email This Entry

Posted by Derek

Freeman Dyson has written about his belief that molecular biology is becoming a field where even basement tinkerers can accomplish things. Whether we're ready for it or not, biohacking is on its way. The number of tools available (and the amount of surplus equipment that can be bought) have him imagining a "garage biotech" future, with all the potential, for good and for harm, that that entails.

Well, have a look at this garage, which is said to be somewhere in Silicon Valley. I don't have any reason to believe the photos are faked; you could certainly put your hands on this kind of equipment very easily in the Bay area. The rocky state of the biotech industry just makes things that much more available. From what I can see, that's a reasonably well-equipped lab. If they're doing cell culture, there needs to be some sort of incubator around, and presumably a -80 degree freezer, but we don't see the whole garage, do we? I have some questions about how they do their air handling and climate control (although that part's a bit easier in a California garage than it would be in a Boston one). There's also the issue of labware and disposables. An operation like this does tend to run through a goodly amount of plates, bottles, pipet tips and so on, but I suppose those are piled up on the surplus market as well.

But what are these folks doing? The blog author who visited the site says that they're "screening for anti-cancer compounds". And yes, it looks as if they could be doing that, but the limiting reagent here would be the compounds. Cells reproduce themselves - especially tumor lines - but finding compounds to screen, that must be hard when you're working where the Honda used to be parked. And the next question is, why? As anyone who's worked in oncology research knows, activity in a cultured cell line really doesn't mean all that much. It's a necessary first step, but only that. (And how many different cell lines could these people be running?)

The next question is, what do they do with an active compound when they find one? The next logical move is activity in an animal model, usually a xenograft. That's another necessary-but-nowhere-near-sufficient step, but I'm pretty sure that these folks don't have an animal facility in the basement, certainly not one capable of handling immunocompromised rodents. So put me down as impressed, but puzzled. The cancer-screening story doesn't make sense to me, but is it then a cover for something else? What?

If this post finds its way to the people involved, and they feel like expanding on what they're trying to accomplish, I'll do a follow-up. Until then, it's a mystery, and probably not the only one of its kind out there. For now, I'll let Dyson ask the questions that need to be asked, from that NYRB article linked above:

If domestication of biotechnology is the wave of the future, five important questions need to be answered. First, can it be stopped? Second, ought it to be stopped? Third, if stopping it is either impossible or undesirable, what are the appropriate limits that our society must impose on it? Fourth, how should the limits be decided? Fifth, how should the limits be enforced, nationally and internationally? I do not attempt to answer these questions here. I leave it to our children and grandchildren to supply the answers.

Comments (42) + TrackBacks (0) | Category: Biological News | Drug Assays | General Scientific News | Regulatory Affairs | Who Discovers and Why

February 18, 2010

Biology By the Numbers

Email This Entry

Posted by Derek

I've been meaning to write about this paper in PNAS for a while. The authors (from Caltech and the Weizmann Institute) are calling for a more quantitative take on biological questions. They say that modern techniques are starting to yield meaningful numbers, and that we're getting to the point where this perspective can be useful. A web site, BioNumbers, has been set up to provide ready access to data of this sort, and it's well worth some time just for sheer curiosity's sake.

But there's more than that at work here. To pick an example from the paper, let's say that you take a single E. coli bacterium and put it into a tube of culture medium, with only glucose as a carbon source. Now, think about what happens when this cell starts to grow and divide, but think like a chemist. What's the limiting reagent here? What's the rate-limiting step? Using the estimates for the size of a bacterium, its dry mass, a standard growth rate, and so on, you can arrive at a rough figure of about two billion sugar molecules needed per cell division.

Of course, bacteria aren't made up of glucose molecules. How much of this carbon gets used up just converting it to amino acids and thence to proteins (the biggest item on the ledger by far, it turns out), to lipids, nucleic acids, and so on? What, in other words, is the energetic cost of building a bacterium? The estimate is about four billion ATPs. Compare that to those two billion sugar molecules, and consider that you can get up to 30 ATPs per sugar under aerobic conditions, and you can see that there's a ten-to-twentyfold mismatch here.
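The bookkeeping above fits in a few lines, using the round numbers from the paper; everything here is an order-of-magnitude estimate, not a precise figure.

```python
# Back-of-the-envelope energy bookkeeping for one E. coli division,
# using the round numbers quoted in the text.
glucose_per_division = 2e9      # sugar molecules needed per cell division
atp_for_biosynthesis = 4e9      # estimated ATPs to build the biomass
atp_per_glucose_aerobic = 30    # upper bound, aerobic respiration

# If every glucose molecule were burned for energy, the potential ATP yield:
potential_atp = glucose_per_division * atp_per_glucose_aerobic

# Ratio of that potential supply to the biosynthetic demand:
mismatch = potential_atp / atp_for_biosynthesis
print(f"potential ATP: {potential_atp:.1e}")
print(f"supply vs. biosynthesis: ~{mismatch:.0f}-fold mismatch")  # ~15-fold
```

That fifteen-fold-or-so gap between what the sugar could yield and what building the cell actually costs is the puzzle the next paragraph takes up.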

Where's all the extra energy going? The best guess is that a lot of it is used up in keeping the cell membrane going (and keeping its various concentration potentials as unbalanced as they need to be). What's interesting is that a back-of-the-envelope calculation can quickly tell you that there's likely to be some other large energy requirement out there that you may not have considered. And here's another question that follows: if the cell is growing with only glucose as a carbon source, how many glucose transporters does it need? How much of the cell membrane has to be taken up by them?

Well, at the standard generation time in such media of about forty minutes, roughly 10 to the tenth carbon atoms need to be brought in. Glucose transporters work at a top speed of about 100 molecules per second. Compare the actual surface area of the bacterial cell with the estimated size of the transporter complex. (That's about 14 square nanometers, if you're wondering, and thinking of it in those terms gives you the real flavor of this whole approach). At six carbons per glucose, then, it turns out that roughly 4% of the cell surface must be taken up with glucose transporters.
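The same estimate can be run as a sketch. One caveat: the cell surface area below (~6 square microns for E. coli) is my own added assumption, and the exact percentage depends heavily on that figure; the paper's own inputs give the ~4% quoted above, while these round numbers land around 2%, the same few-percent ballpark either way.

```python
# Order-of-magnitude estimate: what fraction of the membrane do the
# glucose transporters occupy? Numbers from the text except where noted.
carbons_per_division = 1e10    # carbon atoms to import per generation
generation_time_s = 40 * 60    # ~40 minute doubling time
carbons_per_glucose = 6
transporter_rate = 100         # glucose molecules/s per transporter, top speed
transporter_area_nm2 = 14      # footprint of one transporter complex
cell_surface_nm2 = 6e6         # ~6 square microns (assumed value, not from the text)

glucose_flux = carbons_per_division / carbons_per_glucose / generation_time_s
n_transporters = glucose_flux / transporter_rate
fraction = n_transporters * transporter_area_nm2 / cell_surface_nm2
print(f"transporters needed: ~{n_transporters:.0f}")
print(f"membrane fraction: {fraction:.1%}")
```

The point isn't the second decimal place; it's that a one-minute calculation tells you the transporter load is a few percent of the whole membrane, which is a real constraint.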

That's quite a bit, actually. But is it the maximum? Could a bacterium run with a 10% load, or would another rate-limiting step (at the ribosome, perhaps?) make itself felt? I have to say, I find this manner of thinking oddly refreshing. The growing popularity of synthetic biology and systems biology would seem to be a natural fit for this kind of thing.

It's all quite reminiscent of the famous 2002 paper (PDF) "Can A Biologist Fix a Radio", which called (in a deliberately provocative manner) for just such thinking. (The description of a group of post-docs figuring out how a radio works in that paper is not to be missed - it's funny and painful/embarrassing in almost equal measure). As the author puts it, responding to some objections:

One of these arguments postulates that the cell is too complex to use engineering approaches. I disagree with this argument for two reasons. First, the radio analogy suggests that an approach that is inefficient in analyzing a simple system is unlikely to be more useful if the system is more complex. Second, the complexity is a term that is inversely related to the degree of understanding. Indeed, the insides of even my simple radio would overwhelm an average biologist (this notion has been proven experimentally), but would be an open book to an engineer. The engineers seem to be undeterred by the complexity of the problems they face and solve them by systematically applying formal approaches that take advantage of the ever-expanding computer power. As a result, such complex systems as an aircraft can be designed and tested completely in silico, and computer-simulated characters in movies and video games can be made so eerily life-like. Perhaps, if the effort spent on formalizing description of biological processes would be close to that spent on designing video games, the cells would appear less complex and more accessible to therapeutic intervention.

But I'll let the PNAS authors have the last word here:

It is fair to wonder whether this emphasis on quantification really brings anything new and compelling to the analysis of biological phenomena. We are persuaded that the answer to this question is yes and that this numerical spin on biological analysis carries with it a number of interesting consequences. First, a quantitative emphasis makes it possible to decipher the dominant forces in play in a given biological process (e.g., demand for energy or demand for carbon skeletons). Second, order of magnitude BioEstimates merged with BioNumbers help reveal limits on biological processes (minimal generation time or human-appropriated global net primary productivity) or lack thereof (available solar energy impinging on Earth versus humanity's demands). Finally, numbers can be enlightening by sharpening the questions we ask about a given biological problem. Many biological experiments report their data in quantitative form and in some cases, as long as the models are verbal rather than quantitative, the theory will lag behind the experiments. For example, if considering the input-output relation in a gene-regulatory network or a signal-transduction network, it is one thing to say that the output goes up or down, it is quite another to say by how much.

Comments (47) + TrackBacks (0) | Category: Biological News | Who Discovers and Why

December 11, 2009

Munos On Big Companies and Small Ones

Email This Entry

Posted by Derek

So that roughly linear production of new drugs by Pfizer, as shown in yesterday's chart, is not an anomaly. As the Bernard Munos article I've been talking about says:

Surprisingly, nothing that companies have done in the past 60 years has affected their rates of new-drug production: whether large or small, focused on small molecules or biologics, operating in the twenty-first century or in the 1950s, companies have produced NMEs at steady rates, usually well below one per year. This characteristic raises questions about the sustainability of the industry's R&D model, as costs per NME have soared into billions of dollars.

What he's found, actually, is that NME generation at drug companies seems to follow a Poisson distribution, which makes sense. This behavior is found for systems (like nuclear decay in a radioactive sample) where there are a large number of possible events, but where individual ones are rare (and not dependent on the others). A Poisson process also implies that there's some sort of underlying average rate, and that the process is stochastic - that is, not deterministic, but rather with a lot of underlying randomness. And that fits drug development pretty damned well, in my experience.
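To see what a Poisson process with a sub-one-per-year rate looks like over a company's lifetime, here's a toy simulation; the 0.6-per-year rate is purely illustrative, chosen only to match "usually well below one per year."

```python
import math
import random

# Toy simulation of Munos's point: a company with a fixed underlying
# average rate below one NME per year produces lumpy yearly output that
# looks roughly linear when accumulated over decades.
random.seed(0)

def poisson_draw(rate):
    """One Poisson-distributed count, via Knuth's algorithm (fine for small rates)."""
    limit, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def simulate_nmes(rate_per_year=0.6, years=50):
    """Yearly NME counts for one simulated company."""
    return [poisson_draw(rate_per_year) for _ in range(years)]

yearly = simulate_nmes()
print("total NMEs over 50 years:", sum(yearly))
print("years with zero NMEs:", yearly.count(0))
print("best single year:", max(yearly))
```

Run it a few times with different seeds and you get long droughts punctuated by occasional two-NME years, with the cumulative total climbing along a near-straight line, which is just what the Pfizer chart shows.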

But that's just the sort of thing, as I've pointed out, that the business-trained side of the industry doesn't necessarily want to hear about. Modern management techniques are supposed to quantify and tame all that risky stuff, and give you a clear, rational path forward. Yeah, boy. The underlying business model of the drug industry, though, as with any fundamentally research-based industry, is much more like writing screenplays on spec or prospecting for gold. You can increase your chances of success, mostly by avoiding things that have been shown to actively decrease them, and you have to continually keep an eye out for new information that might help you out. But you most definitely need all the help you can get.

As that Pfizer chart helps make clear, Munos is particularly not a fan of the merge-your-way-to-success idea:

Another surprising finding is that companies that do essentially the same thing can have rates of NME output that differ widely. This suggests there are substantial differences in the ability of different companies to foster innovation. In this respect, the fact that the companies that have relied heavily on M&A tend to lag behind those that have not suggests that M&A are not an effective way to promote an innovation culture or remedy a deficit of innovation.

In fact, since the industry as a whole isn't producing noticeably more in the way of new drugs, he suggests that one possibility is that nothing we've done over the last 50 years has helped much. There's another explanation, though, that I'd like to throw out, and whether you think it's a more cheerful one is up to you: perhaps the rate of drug discovery would actually have declined otherwise, and we've managed to keep it steady? I can argue this one semi-plausibly both ways: you could say, very believably, that the progress in finding and understanding disease targets and mechanisms has been an underlying driver that should have kept drug discovery moving along. On the other hand, our understanding of toxicology and our increased emphasis on drug safety have kept a lot of things from coming to the market that certainly would have been approved thirty years ago. Is it just that these two tendencies have fought each other to a draw, leaving us with the straight lines Munos is seeing?

Another important point the paper brings up is that the output of new drugs correlates with the number of companies, better than with pretty much anything else. This fits my own opinions well (therefore I think highly of it): I've long held that the pharmaceutical business benefits from as many different approaches to problems as can be brought to bear. Since we most certainly haven't optimized our research and development processes, there are a lot of different ways to do things, and a lot of different ideas that might work. Twenty different competing companies are much more likely to explore this space than one company that's twenty times the size. Much of my loathing for the bigger-bigger-bigger business model comes from this conviction.

In fact, the Munos paper notes that the share of NMEs from smaller companies has been growing, partly because the ratio of big companies to smaller ones has changed (what with all the mergers on the big end and all the startups on the small end). He advances several other possible reasons for this:

It is too early to tell whether the trends of the past 10 years are artefacts or evidence of a more fundamental transformation of the drug innovation dynamics that have prevailed since 1950. Hypotheses to explain these trends, which could be tested in the future, include: first, that the NME output of small companies has increased as they have become more enmeshed in innovation networks; second, that large companies are making more detailed investigations into fundamental science, which stretch research and regulatory timelines; and third, that the heightened safety concerns of regulators affect large and small companies differently, perhaps because a substantial number of small firms are developing orphan drugs and/or drugs that are likely to gain priority review from the FDA owing to unmet medical needs.

He makes the point that each individual small company has a lower chance of delivering a drug, but as a group, they do a better job for the money than the equivalent large ones. In other words, economies of scale really don't seem to apply to the R&D part of the industry very well, despite what you might hear from people engaged in buying out other research organizations.

In other posts, I'll look at his detailed analysis of what mergers do, his take on the (escalating) costs of research, and other topics. This paper manages to hit a great number of topics that I cover here; I highly recommend it.

Comments (41) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why

November 3, 2009

That Didn't Take Very Long

Email This Entry

Posted by Derek

Back in late September I wrote about a controversial paper in the Proceedings of the National Academy of Sciences. It attracted comment for its way-out-there hypothesis: that caterpillars and other larvae arose through a spectacular interspecies gene transfer rather than through conventional evolutionary processes. And it may have been the last paper to make it into the journal by the now-eliminated "Track III" route, which allowed members to essentially cherry-pick their own reviewers. This paper may well have hastened the disappearance of that system, actually - it created quite an uproar.

At the time, I wrote that the paper's hypothesis seemed very likely to be wrong, but at least the author had proposed some means to test it. Now in the latest PNAS come a letter and a full article on the subject. Both mention the testability of the original paper, and go on to point out that such tests have already been done. The paper is written in a tone of exasperation:

Williamson suggested that "many corollaries of my hypothesis are testable." We agree and note that most of the tests have already been carried out, the results of which are readily available in the recent literature and online databases. Here, we set aside (i) the complete absence of evidence offered by Williamson in support of his hypothesis, (ii) his apparent determination to ignore the enormous errors in current understanding of inheritance, gene expression, cell fate specification, morphogenesis, and other phenomena that are implied by his hypothesis, and (iii) the abundant empirical evidence for the evolution and loss of larval forms by natural selection. Instead, we focus on Williamson's molecular genetic predictions concerning genome size and content in insects, velvet worms, and several marine taxa, and we point out the readily available data that show those predictions to be easily rejected.

And you know, they really should set aside those first three points. Entertaining as it is to read this sort of thing, the real way to demolish a paper like Williamson's is to rip it up scientifically, rather than hurl insults at it (however well-justified they might be). There seems to be plenty of room to work in. For example, Williamson predicts that a class of parasitic barnacle will be found to not be barnacles at all, and to have an abnormally large genome, with material from three different sorts of organisms. Actually, though, these organisms have smaller genomes than usual, and from their genes they appear to be perfectly reasonable relatives of other barnacles.

And so on. Williamson predicts that the genomes of insects with caterpillar-like larval stages will tend to be larger than those without, but the data indicate, if anything, a trend in the opposite direction. His predictions for specific insects don't pan out, nor do his predictions about the genome size of velvet worms and many other cases. If I read the paper right, not one of Williamson's many predictions actually goes his way. In some cases, he appears to cite genome size data that line up with his hypothesis, but miss citing similar organisms that contradict it.

So that would appear to be that. Indeed, as the authors of the latest PNAS paper mention, one might have thought so years ago, since these very authors have shot down some of Williamson's work before. That's the real problem here. I have a lot of sympathy for people who are willing to be spectacularly wrong, but that starts to evaporate when they don't realize that they've been spectacularly wrong. Williamson appears to have had a fair hearing for his ideas, and as far as I can tell, they've come up well short. And while we need brave renegades, cranks are already in long supply.

Comments (7) + TrackBacks (0) | Category: The Scientific Literature | Who Discovers and Why

October 30, 2009

Fifty Years of Scientific History For You

Email This Entry

Posted by Derek

Here's a most interesting graph from the latest issue of Nature Reviews Drug Discovery. It's from an article on trying to discern trends from broad-scale literature analysis, and it's worth a separate blog post of its own (coming shortly). But after yesterday's discussion of whether there are too many graduates in science and engineering, this looked useful.
Note, for example, the ramp up in NIH funding in the late 1950s/ early 1960s (a very large change in percentage terms), which was followed by a similar surge in doctorates granted. The late-1990s funding increases seem to be having a similar effect near the end of the chart.

Note also the well-publicized drug drought - but the historical perspective is interesting. We've clearly fallen off the 1970-2000 trend line of increasing drug approvals, but we seem to be stabilizing at roughly a 1980s level. The argument is whether that's where we should be or not. We have all these new tools, but all these new worries. Lots of new targets, but fewer good ones like the old days. Many new tools, but plenty of difficult-to-interpret data generated from them. And so on. But 1985 is apparently about where the balance of all these things is putting us.

Comments (34) + TrackBacks (0) | Category: Business and Markets | Drug Industry History | Who Discovers and Why

October 29, 2009

The Best Ones Aren't Over Here Any More?

Email This Entry

Posted by Derek

Here's one to get your attention: there's been a lot of arguing (on this blog and others) about the continual talk of shortages of scientists and engineers. That's a little hard to take for the many people who've been laid off from this industry over the last two or three years and who are often having trouble finding a new position.

A study from Rutgers and Georgetown now says, though, that there is no such shortage. Here's the PDF, so you can check it out for yourself. The intro:

A decline in both the quantity and quality of students pursuing careers in science, technology, engineering, and mathematics (STEM) is widely noted in policy reports, the popular press, and by policymakers. Fears of increasing global competition compound the perception that there has been a drop in the supply of high-quality students moving up through the STEM pipeline in the United States. Yet, is there evidence of a long-term decline in the proportion of American students with the relevant training and qualifications to pursue STEM jobs?

In a previous paper, we found that universities in the United States actually graduate many more STEM students than are hired each year, and produce large numbers of top-performing science and math students. In this paper, we explore three major questions: (1) What is the "flow" or attrition rate of STEM students along the high school to career pathway? (2) How does this flow and this attrition rate change from earlier cohorts to current cohorts? (3) What are the changes in quality of STEM students who persist through the STEM pathway?

What they're finding is (again) that there's no shortage of graduates - in fact, quite the opposite, unfortunately for wages and employment. One worrisome thing, though, is that at some point in the mid-to-late 1990s the top-performing students at both the high school and college level began to jump ship from the science/engineering fields. There are several possible explanations, but the one that comes to mind is that students are looking ahead a bit and don't like the prospects that they see, and/or are lured by other fields that seem more attractive.

More on this later - for now, here's some commentary over at Science which shows that the arguing has already begun.

Comments (36) + TrackBacks (0) | Category: Business and Markets | Who Discovers and Why

September 24, 2009

The Grant Application Treadmill

Email This Entry

Posted by Derek

There's a (justifiably) angry paper out in PLoS Biology discussing the nasty situation too many academic researchers find themselves in: spending all their time writing grant applications rather than doing research. The paper's written from a UK perspective, but the problems it describes are universal:

To expect a young scientist to recruit and train students and postdocs as well as producing and publishing new and original work within two years (in order to fuel the next grant application) is preposterous. It is neither right nor sensible to ask scientists to become astrologists and predict precisely the path their research will follow—and then to judge them on how persuasively they can put over this fiction. It takes far too long to write a grant because the requirements are so complex and demanding. Applications have become so detailed and so technical that trying to select the best proposals has become a dark art.

And a related problem is how this system tends to get rid of people who can't stand it, leaving the sorts of people who can:

The peculiar demands of our granting system have favoured an upper class of skilled scientists who know how to raise money for a big group [3]. They have mastered a glass bead game that rewards not only quality and honesty, but also salesmanship and networking. A large group is the secret because applications are currently judged in a way that makes it almost immaterial how many of that group fail, so long as two or three do well. Data from these successful underlings can be cleverly packaged to produce a flow of papers—essential to generate an overlapping portfolio of grants to avoid gaps in funding.

Thus, large groups can appear effective even when they are neither efficient nor innovative. Also, large groups breed a surplus of PhD students and postdocs that flood the market; many boost the careers of their supervisors while their own plans to continue in research are doomed from the outset. . .

The author is no freshly-minted assistant professor - Peter Lawrence (FRS) has been at Cambridge for forty years, but only recently relocated to the Department of Zoology and experienced the grantsmanship game first-hand. He has a number of recommendations to try to fix the process: shorter and simpler application forms, an actual weighting against large research groups, longer funding periods, limits to the number of papers that can be added to a grant application, and more. Anyone interested in the topic should read the whole paper, and will probably be pounding on the desk in agreement very shortly.

The short version? We think we're asking for scientists, but we're really asking for fund-raisers and masters of paperwork. Surely it doesn't have to be this way.

Comments (21) + TrackBacks (0) | Category: Academia (vs. Industry) | Who Discovers and Why

September 15, 2009

Industrial Research: More Grounded in Reality, or Not?

Posted by Derek

My post the other day on why-do-it academic research has prompted quite a bit of comment, including this excerpt from an e-mail:

I would also note that mediocrity is hardly limited to academia. I cannot tell you the number of truly dumb things that I continue to see happening in industry, motivated by the need to be doing something - anything - that can be quantified in a report. The idea that industry is where reality takes command is depressingly false, and I would guess that the same thing that distinguishes the best from the rest in academia also applies in the "real world."

Well, my correspondent is unfortunately on target with that one. Industry is supposed to be where reality takes command, but too often it can be where wishful thinking gets funded with investors' cash. I'm coming up on my 20th anniversary of doing industrial drug discovery. I've seen a lot of good ideas and a lot of hard work done to develop them - but I've also seen decisions that were so stupid that they would absolutely frizz your hair. And I'm not talking stupid-in-hindsight, which is a roomy category we all have helped to fill up. No, these were head-in-hands performances while they were going on.

I can't go into great detail on these, as readers will appreciate, but I can extract some recurring themes. From what I've seen, the worst decisions tend to come from thinking like this:

"We can't give up on this project now. Look at all the time and money we've put into it!" This is the sunk-cost fallacy, and it's a powerful temptation. Looking at how hard you've worked on something is, sadly, nearly irrelevant to deciding whether you should go on working on it. The key question is, what's it look like right now, compared to what else you could be doing?

"Look, I know this isn't the best molecule we've ever recommended to the clinic. But it's late in the year, and we need to make our goals." I think that everyone who's been in this business for a few years will recognize this one. It's a confusion of ends. Those numerical targets are set in an attempt to try to keep things moving, and increase the chance of delivering real drugs. That's the goal. But they quickly become ends in themselves, and there's where the trouble starts. People start making the numbers rather than making drugs.

"OK, this series of compounds has its problems. But how can you walk away from single-digit nanomolar activity?" This is another pervasive one. Too many discovery projects see their first job (not unreasonably) as getting a potent compound, and when they find one, it can be hard to get rid of it - even if it has all kinds of other liabilities. It takes a lot of nerve to get up in front of a project review meeting and say "Here's the series that lights up the in vitro assay like nothing else. And we're going to stop working on it, because it's wasting our time".

"Everyone else in the industry is getting on board with this. We've got to act now or be left behind." Sometimes these fears are real, and justified. But it's easy to get spooked in this business. Everyone else can start looking smarter than you are, particularly since you see your own discovery efforts from the inside, and can only see other ones through their presentations and patents. Everyone looks smart and competent after the story has been cleaned up for a paper or a poster. And while you do have to keep checking to make sure that you really are keeping up with the times, odds are that if you're smart enough to realize that you should be doing that, you're in reasonably good shape. The real losers, on the other hand, are convinced that they're doing great.

I'm not sure how many of these problems can be fixed, ours or the ones of academia, because both areas are stocked with humans. But that doesn't mean we can't do better than we're doing, and it certainly doesn't release us from an obligation to try.

Comments (27) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Development | Who Discovers and Why

June 2, 2009

A Deuterium Deal

Posted by Derek

Well, there's someone who certainly believes in the deuterated-drug idea! GlaxoSmithKline has announced today that they've signed a deal with Concert Pharmaceuticals to develop these. There's a $35 million payment upfront, which I'm sure will be welcome in this climate, and various milestone and royalty arrangements from there on out. I know that the press story says that it's a "potential billion dollar deal", but you have to make a useless number of assumptions to arrive at that figure. Let's just say that the amount will be somewhere between that billion-dollar figure and. . .well, the $35 million that Glaxo's just put up.

Where things will eventually land inside that rather wide range is impossible to say. No one's taken such a compound all the way through development, and every one of them is going to be different. (Deuterium might be a good idea, but it ain't magic.) It looks like the first compound up for evaluation will be an HIV protease inhibitor, CTP-518, which is a deuterated version of someone's existing compound - Concert has filed patent applications on deuterated versions of both darunavir (WO2009055006) and atazanavir (WO2008156632). The hope is that CTP-518 will have a sufficiently improved metabolic profile to eliminate the need to add ritonavir into the drug cocktail.

The company is also providing deuterated versions of three of GSK's own pipeline compounds for evaluation, which is interesting, since that's the sort of thing that Glaxo could do itself. In fact, that's one of the key points to the whole deuterated-compound idea: the window of opportunity. Deuteration isn't difficult chemistry, and the applications for it in improving PK and tox profiles are pretty obvious (see below). It's a good bet that drug company patent applications will henceforth include claims (and exemplified compounds) to make sure that deuterated versions of drug candidates can't be poached away by someone else. This strategy has a limited shelf life, but it's long enough to be potentially very profitable indeed.

One more note about that word "obvious". Now that people are raising all kinds of money and interest with the idea, sure, it looks obvious. And I'm sure that it's a thought that many people have had before - and then said "Nah, that's too funny-sounding. Might not work. And besides, you might not be able to patent it. And besides, if it were that good an idea, someone else would have already done it. There must be a good reason why no one's done it, you know". Getting up the nerve to try these things, that's the hard part. Roger Tung and Concert (and the other players in this field) deserve congratulations for not being afraid of the obvious.

Comments (25) + TrackBacks (0) | Category: Business and Markets | Drug Development | Infectious Diseases | Pharmacokinetics | Who Discovers and Why

May 15, 2009

Competing (And Competing Unethically?)

Posted by Derek

Sean Cutler, a biologist at UC-Riverside, is the corresponding author of a paper in a recent issue of Science. That’s always a good thing, of course, and people are willing to go to a lot of trouble to have something like that on their list of publications. But Cutler’s worried that too many scientists, especially academic ones, are willing to do a bit too much for that kind of reward. He tells John Tierney at the New York Times that he approached this project differently:

“Instead of competing with my competitors, I invited them to contribute data to my paper so that no one got scooped. I figured out who might have data relating to my work (and who could get scooped) using public resources and then sent them an email. Now that I have done this, I am thinking: Why the hell isn’t everyone doing this? Why do we waste taxpayer money on ego battles between rival scientists? Usually in science you get first place or you get nothing, but that is a really inefficient model when you think about it, especially in terms of the consequences for people’s careers and training, which the public pays for. . .

. . .Obviously there is a balance between self and community interests, but as it stands there are very few metrics of scientific “niceness” and few ways to reward community-minded scientists (some grants consider “broader impact,” but that is not the same thing). What is even worse, is there are even fewer mechanisms for punishing selfish (sometimes horribly so) scientists. If it were their own money or private money they were spending on their research — fine, they can be as selfish as they want and hold others up. But 99 times out of 100, it’s not their money - it’s the public’s money, and it drives me absolutely crazy that there is no meaningful oversight of behavior.

That brought in a flood of comments, and Tierney followed up a couple of days later. Addressing the general issue of scientific competition, which is where many of the comments took issue, Cutler added:

“I am in full favor of competition. My message is: Compete ethically. Sadly, there is a lot of unethical competition that goes on in science. This year alone, I have heard of cases that are the scientific equivalent of insider trading, where reviewers of important papers exploit their access to privileged data to gain unfair advantages in the “race” to the next big discovery. I have heard of researchers being ignored when they request published materials from scientists.

Not sending materials described in papers or exploiting privileged information is a clear violation of journal policies, but unethical behavior of this kind is common in science and is usually perpetrated with a proud smile in the name of “competition. . .”

Well, he’s right that this sort of thing goes on all the time in academia. I don’t know how many tales I’ve heard of pilfered grant application ideas, shady conduct when refereeing papers, and so on. To tell you the truth, though, you don’t see so much of that in industry, at least not in the discovery labs. It’s not that we’re just better human beings over here, mind you – it’s that the system doesn’t allow people to profit so much by that particular sort of conduct. Patent law is one big reason for that, as are the sheer number of lawyers that corporations can bring to bear on someone if they feel that they’ve been wronged. There’s more money involved, in every way, so the consequences of being caught are potentially ruinous.

Update: does this mean I've never worked with sleazeballs? Not at all! Credit-stealing and the like does happen in industrial research labs; they're staffed with humans. But direct theft of someone else's work - that's rare, because being inside an organization is the equivalent of being inside the same academic research group, and it's harder to get away with blatant theft. Academic lab vs. academic lab, though, is more the equivalent of "company vs. company", and (at least in the research stage of things) we have far fewer opportunities for chicanery in industry at that level.

Anyway, unethical conduct in industrial research, when it happens, tends to occur closer to the sources of the money – over in the marketing department, say, or perhaps regulatory affairs. In academia, grants are the source of money, with high-profile publications closely tied to them. The sharp operators naturally tend to concentrate there, like ants around honey.

Cutler’s proposed solution is to go right to that source:

My call to scientists, journals and granting agencies is this: What I’d like to see implemented are rewards for ethical behavior and consequences for unethical behavior. If you knew you might not get a grant funded because you had a track record of unethical practices, then you’d start behaving. It is not much more complicated than that. The journal Science has a “reviewer agreement” that bars the unsavory behavior I described above. After my discussion of the matter with Bruce Alberts, editor in chief of Science, it is clear to me that Science considers the matter very important, but that the journal currently lacks a written policy on the consequences for ethical violations of the reviewer agreement. Without clearly advertised consequences, why behave?

My take is that two issues are being mixed here, which is the same difficulty that led to Tierney having to address this story twice. The first issue is unethical behavior, and I’m with Cutler on that one. There’s too much of that stuff around, and the reason it doth prosper is that the risk/benefit ratio is out of whack. If there were stiffer (and more sure) consequences for such things, people would act on their underhanded impulses less frequently. And for the kinds of people who do these things, the only factors that really matter are money and prestige, so hit ‘em there, where they can feel it.

But the second issue is competition versus cooperation, and that’s another story. Prof. Cutler’s points about wasting grant money don’t seem to me to necessarily have anything to do with unethical behavior. It’s true that holding back cell lines and the like is slimy, and does impede progress (and waste public money). But without going much further, you could talk about waste when you have multiple research groups working on the same problem, even when they’re all behaving well.

That’s what went on here, if I understand the situation. Cutler basically went out to several other groups who were pursuing the same thing (abscisic acid signaling) through different approaches, and said “Hey folks, why don’t we get together and form one great big research team, rather than beat each other up?” I certainly don’t think that he expected these other labs to do something sleazy, nor was he trying to save them from temptation.

And the problem there is (as many of Tierney’s commenters said) that competition is, overall, good for scientific progress, and that it doesn’t have to involve unethical conduct. (More on this in a follow-up post; this one’s long enough already!) That’s why Cutler had to go back and clarify things, by saying “Compete, but compete ethically”. The difficulty with talking about all this at the same time is that the groups he ended up collaborating with were (presumably) doing just that. They’re two separate issues. Both topics are very much worth discussing, but not tangled together.

Comments (18) + TrackBacks (0) | Category: Academia (vs. Industry) | The Dark Side | Who Discovers and Why

May 13, 2009

Takeda Evaluating Scientists on "Quality"?

Posted by Derek

This may be just the Japanese equivalent of HR-speak, but I would like to know what it means:

Takeda Pharmaceutical Co., Asia’s biggest drugmaker, will change its research and development policy by providing compensation to scientists based on the quality of their work.

“We focused too much on the quantity and speed of research and development, which didn’t necessarily bring results,” company President Yasuchika Hasegawa told reporters at a briefing in Tokyo. “I want to change everyone’s mindset.”

The problem is, evaluating the quality of individual scientific performance is notoriously difficult. Or was the problem that Takeda was too focused on the numbers, and has decided to back off on that? (I think I could get behind that initiative.) Perhaps someone at Millennium could comment on whether they've heard anything about this?

Comments (24) + TrackBacks (0) | Category: Who Discovers and Why

May 1, 2009

Genentech: Let's Hope He's Right

Posted by Derek

Science has a short interview with Richard Scheller, who will be running R&D now at Genentech. A highlight:

Q: How do you plan to maintain the famous Genentech culture?

R.S.: By making sure that scientists continue to have time to work on their own projects that aren't translational, that aren't governed in any specific way, and that scientists have time to think and imagine and invent, not just do routine things.

Comments (25) + TrackBacks (0) | Category: Business and Markets | Who Discovers and Why

April 15, 2009

Roche Starts to Manage Things

Posted by Derek

So is Roche already flexing its muscles now that the acquisition of Genentech is complete? Reports this morning say that Genentech's CEO, Arthur Levinson, is moving aside for Roche's Pascal Soriot, and that several other top executives are leaving as well.

This does not seem like the way to reassure the Genentech folks that Roche is going to leave them in peace, to put it gently. And the sorts of comments that are out there in the press reports can't be helping, either. As that Bloomberg story has it:

The changes begin the company’s transformation to a team-oriented culture from one that supports individual scientific enterprise, said Stephen Burrill, a venture capitalist who invests in biotechnology companies.

He says that like individual scientific enterprise is a bad thing. (Update: out of context, perhaps? See the comments section.)

And if that's indeed what made Genentech what it is, then you'd think we need more of it, because (remember) this is a very successful company. I'm always wary of people talking about "team-oriented culture", too. That sounds too much like HR-speak for comfort. And while drug discovery necessarily has to be done by large teams of people, it's the individuals who come up with the ideas. And it's the individuals who push their ideas forward, sometimes in the face of opposition from other individuals who think that they're completely wrong.

That's how new things get tried, and how we sort out what works and what doesn't. Too often, a lot of talk about "team culture" can be the sign of an organization that doesn't value initiative as much as it should. You don't want a bunch of people shouting at each other all the time and refusing to work together, true - but you don't want a situation where no one can do anything without everyone joining hands. A lot of really good ideas don't seem like good ideas to everyone at the time.

So I can't say that I'm happy to read today's news. We'll see what it really means. If Roche themselves start talking about changing Genentech's culture, then all bets are off.

Update: "It will never work because if we owned all of Genentech we would kill it"

Comments (16) + TrackBacks (0) | Category: Who Discovers and Why

February 23, 2009

Genentech's Culture: At Risk or Not?

Posted by Derek

This article from the San Jose Mercury News has gotten a lot of attention for its take on the Roche-Genentech struggle. The reporter, Steve Johnson, is asking if all the concerns about Genentech's fate are overdone.

It's true that the precious-unique-culture stuff can be overemphasized. Roche has indeed been insisting that they want to preserve Genentech's entrepreneurial spirit (although, to be honest, they'd say that no matter what they were really thinking - what are they going to do, say that they really just want all the Avastin revenue and whatever else is high up in the pipeline?) And, as the article correctly points out, there have been any number of good-sized biotech outfits taken over by Big Pharma over the years.

But what worries me a bit is what's happened to some of those biotechs. It really is rare, from what I can see, for a company's culture to stay the same after something like this happens. It's a bit like those singers who make it big from obscurity; you read these articles saying that they're just the same small-town person that they always were. Right - that would be the least likely outcome of them all.

The thing is, the atmosphere of the acquiring company is going to seep in, no matter what. The new projects are going to be approved using the processes of the larger company, aren't they? They'll be expected to fit into a new, larger picture, and to find their place. And the compounds that advance will advance against the larger company's criteria, not the ones in place under the old regime.

Those are just the direct effects on research. What might be a larger difference is a psychological one. As a stand-alone company, even one the size of Genentech, you live by your own wits, but that changes. As part of a larger company, you know that there are other projects out there, other divisions, and that some of these will be expected to pick up the slack now and then. It's a big company, after all. It'll keep going, even if you don't deliver this year. Right? That's actually one of the trickier parts about running a company with a lot of sites and research areas - the inevitable frictions when one group or another feels (sometimes correctly) that they're being leaned on more than those lazy bums over in XYZ, who haven't delivered a clinical candidate since (fill in the year).

At more than one of my previous jobs, I've heard a lot about a "sense of urgency", and how desirable that is. (That's mostly true, although too much of it can perhaps cause you to do something stupid under time pressure). Overall, it really does help to know that you really do have to deliver, that there's no net down there, no one waiting to cushion the blow. It doesn't make things fun, not necessarily, but it does make them more productive. Remember Samuel Johnson's remark about the minister-turned-forger William Dodd: "Depend upon it, Sir, when a man knows he is to be hanged in a fortnight, it concentrates his mind wonderfully."

Unfortunately, I think the key line in the Mercury News piece is this one:

Besides, Genentech scientists don't have a lot of other employment options these days, according to Rodman & Renshaw analyst Christopher James. "There would be more of a concern in a market where there were a lot of opportunities for people to leave," he said.

There's the rub, all right. . .

Comments (20) + TrackBacks (0) | Category: Business and Markets | Who Discovers and Why

February 12, 2009

Autism and Vaccines: Boiling Over Yet Again

Posted by Derek

As you may well have heard by now, Ben Goldacre over at Bad Science has been involved in a wonderful altercation with both the anti-vaccination people there and with one of London’s big talk radio stations, LBC. And yes, this is happening just as Andrew Wakefield, one of the originators of the whole MMR vaccine flap, is being accused of falsifying data to make his case.

The full story can be found on Goldacre’s blog; see the link above for a starting point. The short version: LBC allowed Jeni Barnett, an outspoken opponent of vaccination, to vent her views for some 45 minutes in a prominent time slot. As Goldacre points out, she seems to have covered every possible anti-vaccine trope, despite the fact that some of them were mutually contradictory and many of them made little sense to start with. The British media – many parts of it, anyway – has not covered itself in glory on the whole vaccine-risk story, and this latest outburst was too much for Goldacre to take.

He posted the entire audio of the LBC show on his website, and that brought on threats of legal action from the radio station. And that move, as anyone who’s hung around the internet can tell you, made sure that the audio was immediately scattered around the world, with commentary, transcripts, and plenty of bad publicity. (You can find plenty of links to all of it here; I’m late to this particular party myself).

Goldacre makes an important point, one that’s been made before but has to be kept in mind when you’re listening to the news coverage of any disputed issue. He describes Jeni Barnett as:

“. . . explaining endlessly that all she wanted to do was ‘start a debate’ (because in the media everything is 50:50, and the truth lies exactly half way between the two most extreme views)”

He's right; you run into that sort of thing all the time – readers who’ve had occasion to deal with Intelligent Design people and other creationists will recognize it immediately. “Teach the controversy”, “Let's hear both sides of the debate”, and all that. It’s another example of the disconnect between the way science works (or should work) and the political and social arenas. There are some big differences in the way disputes are resolved.

One of them is that, to a certain degree, questions do not remain open in scientific debate in the same way they do in politics. Fistfights are currently erupting over whether Keynes had a point about deficit spending in a recession (and if he did, how much is appropriate and in what way). Huge, ever-inflamed arguments take place over welfare, regulatory policy, defense spending, and other perennials. There are more than two sides to these kinds of issues. But come over here to the scientific world, where gravity really does diminish as the square of the distance between two objects; bacteria really do cause infections; sodium really does react with water; and yes, living organisms do evolve and change over time. Proclaiming that you disagree with these things just because you don’t like them, just think that they’re wrong, or don’t happen to believe them will get you nowhere in scientific debate. (That’s as opposed to political or religious debates, where those are all-too-common starting points.)

But, at the same time, every question in science is potentially open. Look at all those facts I listed above – you can find ways around all of them. Gravity stops behaving in a perfect inverse-square way close to large masses. Not all bacteria cause infections, of course, and not all infections are caused by bacteria - and some bacteria that might kill one person could cause no problems for someone else. Sodium doesn’t do anything spectacular at all when it’s in the plus-one oxidation state, and even the metal probably doesn’t do much when exposed to water at, say, three degrees Kelvin. And organisms evolve at startlingly different rates and through a variety of mechanisms.

These two simultaneous principles – that questions really do get answered, but that the answers are always open to question – are what puzzle a lot of people about science. And they don’t fit well with the way that many people are used to arguing about issues. They can dwell on the first point and whack the scientific community over the head for having closed minds and unchallenged dogmas, or dwell on the second and claim that hey, they're all unproven theories, and here are some more theories to put on the table while we're at it.

But if you’re going to challenge some science that we think we understand, you’re going to have to bring the data. The bigger the topic, the better the evidence you’re going to need. You can do it – all kinds of cherished theories have gone down – but it’s not easy. If you’re going to claim that evolution doesn’t happen, or that we’re thinking about it all wrong, you’d better have some really impressive evidence (and coming up with an alternative with the same kind of explanatory power would help, too). If you’re going to claim that vaccines do more harm than good, or that they’re the cause of a specific terrible condition, you’d better have the numbers to back it up, not a mish-mosh of talking points.

Einstein’s work, for example, has stood up against all comers, taking on all kinds of extraordinarily painstaking experimental tests and passing every single one of them. If you’re going to beat relativity, you’re going to have to show up with absolutely epic skills. And that brings up a last point. When Einstein explained Mercury’s orbit (and more besides), he didn’t come in proclaiming that Newton was an idiot and that he’d gotten it all wrong. Isaac Newton, though an exceptionally weird human being, was very far indeed from being an idiot. No, relativity shows how under “normal” circumstances, Newton’s gravitational laws work wonderfully. Then it shows under what conditions they go off track, and predicts when that will happen and exactly to what degree. If you’re going to proclaim any new way of looking at the scientific evidence, you’re going to have to show how your breakthrough allows for something new to be seen, and you’re going to have to call your shots and be ready for the experimentalists to have a crack at you.

I find all this wonderfully exciting, and I've devoted my career to it. But it doesn't necessarily make for a quick TV or radio segment that will bring in a big audience, stir up a lot of noise and chatter, and (most importantly) raise the advertising rates. For that, you want politics, religion, or some tasty mixture thereof. . .

Comments (28) + TrackBacks (0) | Category: Autism | Current Events | Press Coverage | Who Discovers and Why

January 30, 2009

10,000 Hours To Drug Discovery?

Posted by Derek

I see a fair number of people reading Malcolm Gladwell’s Outliers while I’m commuting. I haven’t read it myself yet, but it seems that a key feature of his case is the “10,000 hour rule”, the idea that many people who are extremely good at a given task have spent at least that long perfecting their skills. (This derives from the work of Anders Ericsson and Herb Simon over the last thirty or forty years).

One of the more perceptive reviews of Gladwell’s book I’ve read is by Michael Nielsen. He brings up the problems that many scientists have had with some of Gladwell’s past work, and according to him, if you weren’t happy with Blink or The Tipping Point, you may well not be happy with this one. (Then again, if you didn’t like those, you probably won’t read this one!) Naturally, as a scientist, when you read about something like the ten-thousand-hour rule, the first thing you ask yourself is “Hmm. I wonder if that’s true?” But as Nielsen points out:

There are, of course, many provisos to the 10,000 hour rule. As just one example, to acquire mastery in an area, it’s not enough to just practice for 10,000 hours; the person practicing must constantly strive to get better. Someone who practices without pushing themselves will plateau, no matter how many hours they practice. I suspect many scientists fall afoul of this proviso, putting in enormous hours, but mostly doing administrative or drudge work which doesn’t extend their abilities.

But that said, Nielsen goes on to talk about some scientists who have done great work well before their 10,000 hour mark. These people were working at discontinuities, sudden discoveries that didn’t necessarily build on the past, so they didn’t have as much of a tradition or art to master. A thorough grasp of 19th-century physics didn’t help people much when it came time for quantum mechanics. It wouldn’t be surprising, Nielsen says, if a disproportionate number of great discoveries in science fell into this category.

Now, which category does drug discovery fall into? We have a fair amount of art to be learned and experience to be gained, true. But there’s another factor that confounds things: sheer luck. I think that the fundamental issues of drug design are still so poorly understood that no amount of skill can compensate for them. I’m thinking of difficulties like designing compounds that have good oral absorption or blood-brain barrier penetration – sure, there are guidelines, and there are things that you learn to avoid, but once past those it’s a crap shoot. And then there’s toxicity – you learn pretty quickly not to put known landmine groups into your molecules, but after that, you just have to cross your fingers and hope for the best.

These things also mean that there’s a good amount of work to be done that doesn’t extend a person’s abilities, as the quote above has it. The worst of it is being outsourced these days, the well-known “methyl ethyl butyl futile” stuff, but there’s still a lot of pickaxe work that has to be done in any drug project. It would be a fine thing if ten thousand hours of hard work and practice allowed someone to come in and make nontoxic molecules, but they often have to be discovered by trial and error, and more of the latter.

That said, I take Nielsen’s point about putting in good hours rather than empty ones. As much as possible, I think that we should try to do things that we haven’t done before, learn new skills, and move into untried areas. Try not to get butyl-futiled if you can possibly avoid it; it’s not going to do you much good, personally, to set up another six or eight EDC couplings. There are times that that’s exactly what needs to be done, but don’t set them up just because you can’t think of anything else. This gets back to the point I’ve made about making yourself valuable; anyone can set up amide reactions, unfortunately. Maybe some of the time we spend learning our trade is spent learning how to avoid falling into all the tar pits and time-wasting sinkholes we have.

Comments (23) + TrackBacks (0) | Category: Who Discovers and Why

January 28, 2009

Science and Its Values

Email This Entry

Posted by Derek

Dennis Overbye had an essay in the science section of the New York Times yesterday, entitled "Elevating Science, Elevating Democracy". That gets across the spirit of it pretty well; it's one of those soaring-rhetoric pieces. It starts off with a gush of at-last-we-have-Obama, but what op-ed in the Times doesn't these days? We're going to be sweeping that stuff into piles and pulling it down out of the trees for months. (Before sending me an e-mail, keep in mind that I'd have a similar reaction no matter whose name was involved; I'm just not a person with high expectations from politicians).

But once he gets past the genuflections, I don't disagree with Overbye's main points. He says that science has a reputation of being totally results-oriented and value-neutral, but wants to point out that there are values involved:

"Those values, among others, are honesty, doubt, respect for evidence, openness, accountability and tolerance and indeed hunger for opposing points of view. These are the unabashedly pragmatic working principles that guide the buzzing, testing, poking, probing, argumentative, gossiping, gadgety, joking, dreaming and tendentious cloud of activity — the writer and biologist Lewis Thomas once likened it to an anthill — that is slowly and thoroughly penetrating every nook and cranny of the world."

We forget what a relatively recent and unusual thing it is, science. In most societies, over most of human history, there hasn't been much time or overhead for such a pursuit. And even when there has, most of the time the idea that you could interrogate Nature and get intelligible, reproducible answers would have seemed insane. Natural phenomena were thought to be either beyond human understanding, under the capricious control of the Gods, or impossible to put to any use. In retrospect, it seems to have taken so painfully long to get to the idea of controlled one-variable-at-a-time experimentation. Even the ancient Greeks, extraordinary in many respects, had a tendency to regard such things as beneath them.

So let's shed the politics and celebrate the qualities that Overbye's highlighting. Run good, strong, experiments. Run them right, think hard about the results, and don't be afraid of what they're telling you. That's what got us to where we are now, and what will take us on from here.

Update: a comment from Cosmic Variance.

Comments (11) + TrackBacks (0) | Category: General Scientific News | Who Discovers and Why

January 9, 2009

The Perils of Poor Equipment

Email This Entry

Posted by Derek

The late Peter Medawar once wrote about resources and funding in research, and pointed out something that he thought did a lot more harm than good: various romantic anecdotes of people making do with ancient equipment, of great discoveries made with castoffs and antiques. While he didn’t deny that these were possible, and admitted that you had to do the best with what you had, he held that (1) this sort of thing was getting harder every year as science advanced, and (2) while it was possible to do good work under these conditions, it surely wasn’t desirable.

His most interesting point was that lack of equipment ends up affecting the way that you think about your research. It’s not like people with insufficient resources sit around all day thinking of experiments that they can’t run and can’t analyze. If you know, in the back of your mind and in your heart, that there’s no way to do certain experiments, then you won’t even think about them. Your brain learns to censor out such things. This limits your ability to work out the consequences of your hypotheses, and could cause you to miss something important.

Imagine, say, that you’re working on some idea that requires you to find very small amounts of different compounds in a final mixture. A good LC/MS machine would seem to be the solution for that, but what if you don’t have access to one? You can spend a lot of time thinking about a workaround, which is mental effort that could (ideally) be better applied elsewhere. And if you had the LC/MS at your disposal, you might be led to start thinking about the fragmentation behavior of your compounds or the like, which could lead you to some new ideas or insights – ones that you wouldn’t have if you’d had to immediately cross off the whole area.

If you’re in a resource-limited situation, then, you’ll probably try to carefully pick out problems that can actually be well addressed with what you have. That’s a good strategy, but it’s not always a possible one. Huge areas of research can be marked off-limits by the lack of key pieces of equipment, and by the time you’ve worked out what’s possible, there may not be anything interesting or important left inside your fence. Medawar’s point was that being stuck inside such a perimeter would not only hurt the way that you did your work, but could eventually do damage to the way that you thought.

It occurs to me that this is similar to George Orwell's claim in "Politics and the English Language" that long exposure to cheap, misleading political rhetoric could damage a person's ability to think clearly. "But if thought corrupts language, language can also corrupt thought". There may be other connections between Orwell's points and scientific thinking. . .definitely a subject for a future post.

In fairness, I should mention that the flip side of this situation isn’t necessarily the best situation, either. Having everything you need at your disposal can make some researchers very productive – and can make others lazy. Everyone has stories of beautifully appointed labs that never seem to turn out anything interesting. There’s danger in that direction, too, but it’s of a different kind. . .

Comments (35) + TrackBacks (0) | Category: Academia (vs. Industry) | Life in the Drug Labs | Who Discovers and Why

January 5, 2009

New Year - I Hope!

Email This Entry

Posted by Derek

In past years, around this time I’ve often done a look back at the previous year in the drug industry. I hope that no one will be disappointed if I scuttle that tradition, because honestly, I have no desire whatsoever to relive what drug research went through in 2008. It may have been the toughest year for industry scientists in the modern era – everyone I know struggles to find a comparison.

I’d rather spend my energies on 2009. Let’s just stipulate that 2008 was, on balance, horrendous: what does that tell us? How did we end up in this position, and how can we avoid more of the same? There’s a lot of arguing room in those questions, but I think that we can agree that the proximate cause is that we’re not coming up with enough good drugs. 2008, for all its ugliness, was a handful of good products away from being a decent year. Why were we short that handful?

You have to go back some years to answer a question like that, given the industry’s lead time. The projects that were begun in the mid-to-late 1990s are clearly not coming through in the way that everyone had hoped. Is it that our attrition rate has gone up, or have we just not taken enough things to the clinic, or some of each?
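The two possibilities in that last question are hard to tell apart from the outside, which a toy calculation makes clear (all numbers here are illustrative assumptions, not industry figures):

```python
# A deliberately crude pipeline model: expected approvals are just the
# number of candidates entering the clinic times an overall clinical
# success rate. Both inputs are made-up round numbers for illustration.

def expected_approvals(candidates, success_rate):
    """Expected number of approved drugs from a cohort of clinical candidates."""
    return candidates * success_rate

# Assumed baseline: 40 candidates, 10% overall success.
baseline = expected_approvals(40, 0.10)          # 4.0 expected approvals

# The same shortfall can come from either side:
fewer_candidates = expected_approvals(20, 0.10)  # 2.0 - half the throughput
worse_attrition  = expected_approvals(40, 0.05)  # 2.0 - double the attrition

print(baseline, fewer_candidates, worse_attrition)
```

Halving the number of candidates and doubling the attrition rate produce identical approval counts, which is why the question can’t be settled just by counting what came out the far end of the pipeline.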

Let’s think about that first problem, which certainly seems to be real enough. Is it that the easy targets have all been worked over, leaving us with only the tough ones? I don’t think that’s the whole explanation, although that’s certainly part of it. Still, even some of the big drugs from years past wouldn’t have made it through our current structures. So are the hurdles set too high during development – that is, do we know too much about potential problems, without having learned a corresponding amount about how to fix them? That’s got to be a big factor, which leads to a New Year’s resolution: try to spend as much time fixing problems as finding them. That’s a hard one to live up to, but it’s a goal to work toward.

And if we’re going to talk about that latter number, we’re going to have to cut through the often artificial “projects advanced” figures that circulate inside companies. Anyone who’s been around this business has seen some long shots (and some outright losers) officially pushed forward just to make some year-end target. Now, long shots are fine. To a good approximation, everything we do is a long shot. And everything has to go to the clinic eventually (or die) – but we have to make sure that we’re not just checking boxes. So that’s another resolution: spend less time kidding ourselves.

Of course, there’s a flip side to the number of compounds going to the clinic. Could it be that we’re being too cautious, because we have too many potential worries (those high hurdles mentioned above)? Should we be taking more things forward? Well, that’s an expensive proposition, the way things are set up now. So here’s another hard-to-live-up-to resolution: find ways to go to the clinic without betting our shirts every time. That’s been a big focus the last few years (biomarkers, etc.), but we need every idea and technique we can think of (microdosing? Simulations, even?). The cost of getting answers in humans is getting too high for us to try out as many ideas as we need to.

And here's a less macro-scale resolution, which I plan to start putting into practice immediately: don't let fear run your research. Try some things that you aren't sure about. Take some chances. Put down some bets. I've got several that I've let sit in the should-I-do-this limbo for too long, and I'm going to do something about that. Join me?

Comments (12) + TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Industry History | Who Discovers and Why

November 18, 2008

Cheese Dip and Hydrochloric Acid

Email This Entry

Posted by Derek

One of the more wide-ranging entries on my “Lowe’s Laws of the Lab” list is this: The secret of success in synthetic chemistry is knowing what you can afford not to worry about.

That’s because you have to have a cutoff somewhere. There are so many potential things that can affect an experiment, and if you have to sweat every one of them out every time, you’re never going to get anything done. So you need to understand enough to know which parts are crucial and which parts aren’t. I think the beginnings of this law came from my days as a teaching assistant, watching undergraduates carefully weigh out a fivefold excess of reagent. Hmm. Did it matter if they were throwing in 4.75 equivalents or 5.25? Well, no, probably not. So why measure it out drop by drop?
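A quick back-of-the-envelope calculation (with assumed, illustrative numbers) shows why the exact excess doesn’t matter once you’re at five equivalents:

```python
# Toy stoichiometry check with made-up numbers: how much reagent do you
# actually weigh out at 4.75 vs. 5.25 equivalents? (mmol x g/mol = mg)

def reagent_mass(limiting_mmol, equivalents, mw):
    """Mass (mg) of reagent for a given excess over the limiting reagent."""
    return limiting_mmol * equivalents * mw

mw = 150.0    # assumed molecular weight of the reagent, g/mol
scale = 1.0   # 1 mmol of the limiting reagent

low = reagent_mass(scale, 4.75, mw)    # 712.5 mg
high = reagent_mass(scale, 5.25, mw)   # 787.5 mg

# A 75 mg spread on a deliberate ~5x excess - the reaction can't tell
# the difference, so weighing it out drop by drop buys you nothing.
print(low, high)
```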

Tom Goodwin, the professor responsible for teaching me in my first organic chemistry course, once advanced his own solution to this problem. Growing weary of the seemingly endless stream of lab students asking him “Dr. Goodwin, I added X by mistake instead of Y. . .will that make a difference?”, he proposed creating “Goodwin’s Book of Tolerances.” I think he envisioned this as a thick volume like one of those old unabridged dictionaries, something that would live on its own special stand down the hall. “That way,” he told me, “when some student comes up and says ‘Dr. Goodwin, I added cheese dip instead of HCl – will that make a difference?’, I can walk over, flip to page thousand-and-whatever, and say ‘No. Cheese dip is fine.’”

According to him, a solid majority of these questions ended with the ritual phrase “Will that make a difference?” And that’s just what a working chemist needs to know: what will, and what won’t. The challenge comes when you’re not sure what the key features of your system are, which is the case in a lot of medicinal chemistry. Then you have to feel your way along, and be prepared to do some things (and make some compounds) that in retrospect will look ridiculous. (As I’ve said before, though, if you’re not willing to look like a fool, you’re probably never going to discover anything interesting at all).

Another challenge is when the parts of the system you thought were secure start to turn on you. We see that all the time in drug discovery projects – that methyl group is just what you need, until you make some change at the other end of the molecule. Suddenly it’s suboptimal – and you really should run some checks on these things as you go, rather than assuming that all your structure-activity relationships make sense. Most of them don’t, at some point. An extreme example of having a feature that should have been solid turn into a variable would be that business I wrote about the other week, where active substances turned out to be leaching out of plastic labware.

But if you spend all your time wondering if your vials are messing up your reactions, you'll freeze up completely. Everything could cause your reaction to go wrong, and your idea to keel over. Realize it, be ready for it - but find a way not to worry about it until you have to.

Comments (35) + TrackBacks (0) | Category: Lowe's Laws of the Lab | Who Discovers and Why

September 16, 2008

Neil Bartlett, 1932-2008

Email This Entry

Posted by Derek

I’ve neglected to note the death of Neil Bartlett, famous for showing that the noble gases would in fact form chemical bonds. This work was a real triumph, since the great majority of scientific opinion at the time was that such compounds were impossible. Bartlett, though, formed a rather startling compound while working on the platinum fluorides, which he realized was actually a salt of dioxygen. The idea that oxygen would be oxidized to a cation in an isolable salt was weird enough at the time, and Bartlett realized that if this could happen, then the same system should be able to oxidize xenon.

And so it did. It’s difficult to convey how much nerve it takes to do experiments like this. I don’t mean the dangers of working with such reactive fluorine compounds, although that’s certainly not to be ignored. (Bartlett spent much of his career working in this area, and only a skilled experimentalist could do that and remain in one piece). No, it’s actually very hard to get out there on the edge of what’s known and do things as crazy as making salts of oxygen and fluorides of noble gases. Consider that if you’d lined up a hundred high-ranking chemists to vet these experiments beforehand, most of them would have pursed their lips and said “Are you sure that you’re not just wasting your time on this stuff?” It takes nerve, and not everyone has it – but Bartlett did, and he had the brains and the skills to go along with it. You need all three.

There’s a good appreciation of him in Nature, which points out – to my mind, absolutely correctly – that he should have won the Nobel Prize for this work. In fact, I thought he had for a long time, and only a few years ago realized that I had that wrong. (I may have been reinforced in my opinion by a statement in Primo Levi’s The Periodic Table). I think that if you polled chemists as a group, you’d find that a majority would be under the same impression – and if that’s not a sign of the highest-level work, having everyone surprised that you never got a Nobel, then I don’t know what is.

Comments (5) + TrackBacks (0) | Category: Inorganic Chemistry | Who Discovers and Why

September 11, 2008

US and UK Biotech: Growth and Form

Email This Entry

Posted by Derek

There’s an interesting editorial in Nature Biotechnology on a role-playing exercise that took place recently in London. The UK government (in the form of the Bioscience Futures Forum) asked a University of London simulations group to work out what would happen to two identical companies in England and in the US. These would be university spin-offs with promising oncology compounds that had already shown oral activity in tumor models. (Here's the site for the whole effort - I have to say, it looks like an awful lot of effort for a two-day simulation).

What happened? Well, things diverged. The US version of the simulated company was able to raise more money, had better access to collaborations with larger companies, and better chances of going public by the end of the simulation. That gave them a broader platform to deal with setbacks in the original compound program. Meanwhile, the UK company faced this:

. . . the biotech finance marketplace in the United Kingdom is weak. AIM has little liquidity and virtually no follow-on market. Preemption rights allow existing shareholders to block potentially diluting but opportunistic fundraising rounds, such as private investments in public equity. And there is little access to debt capital for biotech firms.

The game also suggests that UK management and investors have mindsets adapted to constrained financial circumstances. They design businesses to fit the financial environment rather than seeking the environment that their business needs. They discount early valuations because of the inflexible later-stage financial circumstances. Their low expectations become self-fulfilling prophecies. In contrast, US management looks to build a sustainable business from the outset, and investors get higher returns as a consequence.

What I found interesting about the editorial, though, wasn’t these conclusions per se – after all, as the piece goes on to say, they aren’t really a surprise. (That makes you wonder even more about the time and money that went into this, but that's another issue). No, the surprise was the recommendation at the end: while the government agency that ran this study is suggesting tax changes, entrepreneur training, various investment initiatives, and so on, the Nature Biotechnology writers ask whether it might not be simpler just to send promising UK ideas to America. Do the science in Great Britain, they say, and spin off your discovery in the US, where they know how to fund these things. You'll benefit patients faster, for sure.

They’re probably right about that, although it’s not something that the UK government is going to endorse. (After all, that means that the resulting jobs will be created in the US, too). But that illustrates something I’ve said here before, about how far ahead the VC and start-up infrastructure is here in America. There’s no other place in the world that does a better job of funding wild ideas and giving them a chance to succeed in the market. The startup culture here is a vital part of the economy and a great benefit for the world, and we should make sure to keep it as healthy as we can.

Comments (13) + TrackBacks (0) | Category: Business and Markets | Who Discovers and Why

August 22, 2008

Open Source Science?

Email This Entry

Posted by Derek

The Boston Globe has a piece on the open-source science movement. Many readers here will have come across the idea before, but it’s interesting to see it make a large newspaper. (Admittedly, the Globe is more likely to cover this sort of thing than most metropolitan dailies, given the concentration of research jobs around here).

The idea, as in open-source software development, is that everything is out in a common area for everyone to see and work on. (Here's one of the biggest examples). Ideas can come from all over, and progress can come more quickly as many different approaches get proposed, debated, and tried out. I like the idea, in theory. Of course, since I work in industry, it’s a nonstarter. I have absolutely no idea of how you’d reconcile that model with profitable intellectual property rights, and I haven’t seen any scheme yet that makes me want to abandon profit-making IP as the driver of commercial science. Of course, there's always the prize model, which is worth taking seriously. . .

Even for academic science, open source work runs right into the traditional ideas of priority and credit, and the article doesn’t resolve this dilemma. (As far as I can tell, the open-source science advocates haven’t completely resolved it, either). There’s always the lingering (or not-so-lingering) worry about someone scooping your results, and for academia there’s always that little question of grant applications. There have been enough accusations over the years in various fields of people lifting ideas during grant proposal reviews or journal refereeing to make you wonder how well a broader open-source system would work out, given the small but significant number of unscrupulous people out there.

On the other hand, maybe if things were more open in general, there would be less incentive to lift ideas, since the opportunities to do so wouldn’t be so rare. And if someone’s name is associated from the beginning with a given idea, on some open forum, it could make questions of priority easier to resolve. A subsidiary problem, though, is that there are people who are better at generating ideas than executing them – some of these folks, once unchained, could end up with their fingerprints on all sorts of things that they’ve never gotten around to enabling. Of course, that might be a feature rather than a bug: people who generate lots of ideas are, after all, worth having around. And over time, there might well be less of a stigma than there is now for someone else to follow up on these things.

The thing is, science has already been a form of open-source work for hundreds of years now. It’s just that the information has been shared at a later stage, through presentations and publications, rather than being put out there right after it’s been thought up or while it’s being generated. That’s why I always shiver a bit when I read about how long Isaac Newton waited before writing up any of his results – if Edmond Halley hadn’t pressed him to do it, he might never have gotten around to it at all, which would have been a terrible tragedy.

And it’s why stories like those told of physicist Lars Onsager strike me as somehow wrong. Onsager was famous for only publishing his absolute best work – which was pretty damned good – and putting the rest into his copious file cabinets (example here). (A related trait was that he was also apparently incapable of lecturing at any comprehensible level about his work). Supposedly, younger colleagues would come by once in a while and tell him about some interesting thing that they’d worked out, and ask him if he thought it was correct. Onsager would pause, dig through his files, pull out some old unpublished work that the new person had unknowingly duplicated, and say “Yes, that’s correct”. It seems to me that you don’t want to do that, withholding potentially useful results for the sake of what is, in the end, a form of vanity.

And although I'm not exactly Lars Onsager, this is as good a time as any to mention that my summer student, who’s finishing up in the lab this week, has been able to generate a lot of interesting data, and that I’m going to be trying to write it up this fall for publication. Readers may be interested to know that this work is based on more ideas I’ve had in the vein of the “Vial Thirty-Three” project detailed here, so with any luck, people will eventually be able to see some of what I’ve been so excited about all this time. And that’s about as open-source as this industrial scientist can get!

Comments (9) + TrackBacks (0) | Category: Birth of an Idea | The Scientific Literature | Who Discovers and Why

July 1, 2008

The Gates Foundation: Dissatisfied With Results?

Email This Entry

Posted by Derek

Well, since last week around here we were talking about how (and how not to) fund research, I should mention that Bill Gates is currently having some of the same discussions. He’s doing it with real money, though, and plenty of it.

The Bill and Melinda Gates Foundation definitely has that – the question has been how best to spend it. They started out by handing out money to the top academic research organizations in the field, just to prime the pump. Then a few years ago, the focus turned to a set of “Grand Challenges”, fourteen of the biggest public health problems, and the foundation began distributing grant money to fight them. But according to this article, from a fellow who’s writing a book on the topic, Gates hasn’t necessarily been pleased with the results so far:

”. . .Gates expected breakthroughs as he handed out 43 such grants in 2005. He had practically engineered a new stage in the evolution of scientific progress, assembling the best minds in science, equipped with technology of unprecedented power, and working toward starkly-defined objectives on a schedule.

But the breakthroughs are stubbornly failing to appear. More recently, a worried Gates has hedged his bets, not only against his own Grand Challenge projects but against how science has been conducted in health research for much of the last century.”

My first impulse on hearing this news is not, unfortunately, an honorable one. To illustrate: I remember a research program I worked on at the Wonder Drug Factory, one that started with a series of odd little five-membered-ring molecules. Everyone who looked them over had lots of ideas about what should be done with them, and lots of ideas about how to make them. The problem was, the latter set of ideas almost invariably failed to work.

This was a terribly frustrating situation for the chemists on the project, because we kept presenting our progress to various roomfuls of people, and the same questions kept coming up, albeit in increasingly irritated tones. “Why don’t you just. . .” We tried that. “Well, it seems like you could just. . .” It seemed like that to us, too, six months ago. “Haven’t you been able to. . .” No, that doesn’t work, either. I know it looks like it should. But it doesn’t. Progress was slow, and new people kept joining the effort to try to get things moving. They’d come in, rolling up their sleeves and heading for the fume hood, muttering “Geez, do I have to do everything myself?”, and a few weeks later you’d find them frowning at ugly NMR spectra next to flasks of brown gunk, shaking their heads and talking to themselves.

I’d gone through the same stage myself, earlier, so my feelings about the troubles of the later entrants to our wonderful project devolved to schadenfreude, which, as mentioned, is not the most honorable of emotions. I have to resist the same tendency when reading about the Gates Foundation – sitting back and saying “Hah! Told you this stuff was hard! Didn’t believe it, did you?” isn’t much help to anyone, satisfying though it might be on one level. I’m cutting Bill Gates more slack than I did Andy Grove of Intel, though, since Gates seems to have taken a longer look at the medical research field before deciding that there’s something wrong with it. I note, though, that we now have well-financed representatives of both the hardware and software industries wondering why their well-honed techniques don’t seem to produce breakthroughs when applied to health care.

Now the Gates people are trying a new tactic. The “Explorations” program, announced a few months ago, is deliberately trying to fund people outside the main track of research in its main areas of focus (infectious disease) in an effort to bring in some new thinking. I’ll let Tadataka Yamada of the Gates Foundation sum it up, from the NEJM earlier this year:

”New ideas should not have to battle so hard for oxygen. Unfortunately, they must often do so. Even if we recognize the need to embrace new thinking — because one never knows when a totally radical idea can help us tackle a problem from a completely different angle — it takes humility to let go of old concepts and familiar methods. We have seemed to lack such humility in the field of global health, where the projects related to diseases, such as HIV, malaria, and tuberculosis, that get the most funding tend to reflect consensus views, avoid controversy, and have a high probability of success, if "success" is defined as the production of a meaningful but limited increase in knowledge. As a result, we gamble that a relatively small number of ideas will solve the world's greatest global health challenges. That's not a bet we can afford to continue making for much longer.”

What’s interesting about this is that the old-fashioned funding that Yamada is talking about is well exemplified by the previous Gates Foundation grants. After last week’s discussion here about “deliverables” in grant awards, it’s interesting to look back at the reaction to the 2003-2005 round of “Grand Challenges” funding:

”Researchers applying for grants had to spell out specific milestones, and they will not receive full funding unless they meet them. "We had lots of pushback from the scientific community, saying you can't have milestones," says Klausner. "We kept saying try it, try it, try it." Applicants also had to develop a "global access plan" that explained how poor countries could afford whatever they developed.

Nobel laureate David Baltimore, who won a $13.9 million award to engineer adult stem cells that produce HIV antibodies not found naturally, was one of the scientists who pushed back. "At first, I thought it was overly bureaucratic and unnecessary," said Baltimore, president of the California Institute of Technology in Pasadena. "But as a discipline, to make sure we knew what we were talking about, it turned out to be interesting. In no other grant do you so precisely lay out what you expect to happen."

I have to think, then, that in no other grant are the chances of any breakthrough result so slim. It would be interesting to know what the Gates people think, behind closed doors, of the return they’ve gotten on the first round of grant money, but perhaps the advent of the Explorations program is already comment enough. (One round of Explorations funding has already taken place, but a second round is coming up this fall. You can start your application process here).

The next question is, naturally, how well the Explorations program might work – but that’s a big enough topic for a post of its own. . .

Comments (28) + TrackBacks (0) | Category: General Scientific News | Who Discovers and Why

June 26, 2008

Funding in the EU: The Simple Way

Email This Entry

Posted by Derek

Today I have the second part of the guest commentary from Zurich's Dr. Theo Wallimann on research funding in Europe. In it, he advances his proposal for a new way of funding young scientists:

The definite proof that European Research Programs (such as the FP-6 and FP-7 Framework Programs) are not the sort that basic scientists regard as most useful is the fact that one has to indicate and list so-called "Deliverables". These are research results or products that one wants to, or is expected to, achieve in the given time period: for example, being able to express a protein at high levels in bacteria (Deliverable 1), to purify it to high purity (Deliverable 2), to characterize it by biochemical and biophysical methods (Deliverable 3), and then to try to crystallize this protein in order to produce X-ray-compatible protein crystals (Deliverable 4).

One then has to provide yearly reports and let the reviewers know whether the goals set were achieved and met in time and whether one could "deliver" as predicted. If one meets one's own prognosis, one is considered a very good scientist who is able to meet one’s Deliverables. In other words, being able to deliver exactly what was predicted is considered good science, at least by the bureaucrats.

But anyone working in the fields of protein crystallization and X-ray structure solution, for example, knows very well that protein crystallization is still an art, one that often needs a stroke of luck to produce good crystals for X-ray studies. This can literally take years, guided first by brute-force screening approaches and, if those do not work, by intuition and perseverance. All of a sudden, out of the blue, one may be able to grow crystals once, but sometimes they never come again, even if you repeat the experiment under the very same conditions. In those cases you may find that something subtle has changed: the battery of the distilled-water apparatus was changed and the water quality was thus somewhat different, and so on.

I know of an incident in which a long, flexible protein was supposed to be crystallized, but many doctoral students and post-docs could not manage to get crystals. After a year or so, a new post-doc came to the lab and started the project from scratch. He realized, however, that his predecessors had left many crystallization trials in multi-well plates in the cold room. They must have sat there for years; some were murky, even greenish, and bacteria or algae must have grown in them. The new post-doc could have thrown these murky old plates away, but he was smart and clever enough to take the time to look at them.

Lo and behold, he saw crystals in some of them. He opened the micro-chambers with crystals in them, but saved the mother liquor and the buffer drop in which the crystals had grown. Be honest now: how many of you would have done such a thing? But this turned out to be absolutely crucial, for the bacteria or algae that grew in the protein solution drop and mother liquor produced a protease enzyme, which cut the long protein strand somewhere at its most flexible site. The rest of the protein then crystallized. Although this was a somewhat truncated form of the protein in question, the structure of this core could be solved, and years later it was the basis for solving the whole protein structure. Why was it so important to save the supernatant and mother liquor? The post-doc cultivated the bacteria or algae (I don't remember exactly which) and purified from them the very protease that was cutting the protein at the specific site that allowed it to crystallize.

With this tool (the peculiar protease) in hand, he could reproduce what he had seen at first, and was rewarded with protein crystals that he otherwise would not have seen at all. I think this is a very nice example of (a) serendipity, but also of (b) a smart experimenter who reacted very cleverly and used foresight in formulating a hypothesis that he could then prove to be true.

Would you ever state in an EU funding proposal that you plan to grow protein crystals by letting a protein solution stand around in a messy cold room, in hopes that the right bacteria would grow and nibble off the protein's flexible loop so that the rest of the protein would crystallize? You would be considered totally crazy, I predict. But this episode took place in the 1980s, not back in Marie Curie's or Pasteur's time, and similar events can and will happen today.

But such considerations are not a concern of the EU functionaries. They want to see the crystals, especially if you told them you would deliver them in a year or two. This is science on deliverables, as one may call it. But it has nothing to do with daily work in a laboratory. Therefore, as some have pointed out, the administrators should be educated scientists themselves, ones who have worked for a few years in a real laboratory environment. I think this would improve things quite a lot.

My proposition for EU research funding would therefore be: give young PhD investigators (after their post-doctoral training and after meeting various quality standards) no-strings-attached research support for 5 years. In this way they can demonstrate their talent and independence by doing what they like to do, as best they can. If after this time their work stands out, support is then generously extended for another 3-5 years.

After that, the tenure decision has to be made, and those not fulfilling the criteria (to be determined) will leave academia. This would give young people an excellent start-up chance – perhaps then there would be fewer people accumulating in academia who were promised promotions that might be delayed and postponed. (In many of these cases, all of a sudden these researchers are then considered "too old" and fall out of the system completely).

This generous scheme is of course risky, for some money will not be spent in the best way it could have been. But on the other hand it will allow the really talented young researchers to thrive and take off for their Nobel Prize ambitions. So, let’s simplify the granting bureaucracy by being much more generous, while trusting in people’s ability to self-organize to meet their challenges and perform. In the end it is not bookkeeping that will count, but the really great and innovative research results that bring humanity a step further along. Why shouldn’t we be prepared to take this risk? I am afraid, though, that my scheme would leave thousands and thousands of desktop offenders unemployed. . .

Comments (10) + TrackBacks (0) | Category: Who Discovers and Why

June 25, 2008

(No) Anarchy in the EU: A Report From Inside

Email This Entry

Posted by Derek

My post a few days ago on research in the EU, quoting a letter to Nature from Dr. Theo Wallimann in Zurich, started off a long comment thread. And now I've heard from Dr. Wallimann himself, who has a wealth of personal experience with research funding, with the EU, and with large consortia of academic groups.

He's sent along a very interesting commentary, which I'm going to post in two parts. Today is on what EU research funding is like, and tomorrow it'll be on what it could (or should) be. So here's Dr. Wallimann with a report from the field:

I by no means intended to say that important findings can only be made by lone-wolf scientists. What I wanted to say is that (and I am talking here about basic science, not industrial applied science) if small groups are left to work independently, with passion, on those things that interest them from the depths of their spirit and heart, then the chances of making an unexpected finding are statistically much higher than when a granting agency tells you what topics should be worked on in order to qualify for funding.

Once an important finding in basic science has been made, it is relatively easy to find partners and to build up from the bottom a collaborative interdisciplinary team, even up to Manhattan Project-like applications. The latter step is mostly a matter of finances, for one knows what has to be done, since the basic findings and groundwork have been established by the basic scientists.

It is a fact that the EU agencies (and probably most research funding agencies) want to see such interdisciplinary research networks even before any novel findings have been made. They tend to focus on relevant societal problems, like cancer, obesity, climate change, etc. And this is bloody ridiculous, for it encompasses only (or mostly) those scientists who just happen to work in these areas, and who may happen to be excellent or mediocre. But it excludes other groups, mostly younger ones, who may not work directly on such a topic, but whose findings may turn out to be most important for it in the future.

What I would like to stress fervently is that true science is not predictable. If you already know what you want to find out, it is no longer truly innovative science: this is exactly what Albert Einstein meant (and explicitly said), and what Albert Szent-Gyorgyi, the archetype of a "Free Radical", said as well. The latter Nobelist (for vitamin C, and known also for his work on muscle contraction) never received substantial research money from the NIH, for he refused to write a 50-page grant proposal exactly delineating and spelling out what he wanted to do over the next 3-5 years. He said, "How can I say what I am going to do in the laboratory in 3-5 years, if I don't even know today what I shall do there tomorrow?"

I have been working in an "enforced" consortium of an EU program with a total of 26 laboratories Europe-wide. The sheer size of the consortium, with all of its members focusing on different aspects of the same global question, seems to have been the most convincing argument for the EU administrators. The program was substantially funded, and we all indeed profited from this financial support, although the administration, bookkeeping, and report-writing efforts were horrendous. However, as it turned out, when the members met, got acquainted, and divided into sub-groups (so-called “work packages”), one realized rather quickly that one was sitting at a table with competitors who worked on the very same problems as oneself. An example would be wanting to grow crystals of an important enzyme to solve its X-ray structure and, from there, to design inhibitors or activators for pharmacological intervention.

So my question now is: how are you going to communicate in such a group? Which of your secrets, ones that would give an advantage to the competitors, are you going to spell out? Which hints does your neighbor disclose to you? And so on. This led to some rather awkward situations in which people sort of circled around the real questions and problems, and everyone tried to talk only about results that had just been accepted for publication and would be in press very soon. So here is the situation: we were forced to officially "collaborate" by the EU program in order to get at the EU research honey-pot. But once we had the money, we would have preferred to work independently again and not share bench data with competitors.

By contrast, if the EU were to foster independent smaller groups, then whenever one of them made an important finding, they themselves could go out and look for ideal collaboration partners on the spot, without any granting agency telling them what to do and whom to consider. This gives a project a real kick-off, since such partners can be specifically selected for mutual compatibility and collaboration. Certainly, they would have to be as passionate about the new finding as the original group, and would call in other colleagues to complete a strong team. Finally, such self-organization leads to true potentiation, but desktop planners definitely cannot enforce it, I am convinced.

I also participated in yet another consortium program, one that was overshadowed by its own so-called steering committee. They felt responsible for the success of the program, so they began to interfere strongly and prescribe to us what to do, out of anxiety that something unpredictable might happen. This simply shut down any possible creative outcome for the program.

As mentioned above, if a basic science program is successful in finding something really novel and important, only then can a "Manhattan Project"-like application of the basic research lead to an applied mega-project.

Many of the commenters here seem to have a misconception about the difference between basic science and a Manhattan Project. I hope that this helps to clarify some of these issues, and I wish that you could come to work in a basic research laboratory for at least 10 years. You could then easily grasp what I mean to say here, I think. Thanks for your consideration and patience.

Comments (13) + TrackBacks (0) | Category: Who Discovers and Why

June 19, 2008

Anarchy in the EU

Email This Entry

Posted by Derek

There’s been a lot of arguing – has been for many years – about research funding over in the EU. This is above and beyond the usual “not enough” protests, which are the way with funding of pretty much everything, pretty much everywhere, pretty much all the time.

A word on that: in my last 25 years of hearing academic researchers talk about grant money, never once have I heard that the situation is good. It’s always bad, worse, getting worse, tight, terrible, year in and year out. That’s not to say that sometimes those adjectives haven’t been accurate, but it’s hard to imagine that they’ve applied without letup. Some years ago, I realized that asking a professor about research grants is exactly like asking a farmer about rain. I did grow up on the Mississippi Delta, which actually comes in handy once in a while.

But the latest EU discussion is only partially about the amount of money involved; it’s also about how it’s to be used. There was an editorial in Nature not long ago from a fellow who wanted to make sure that it was spent wisely. “Wisely”, in his view, was to make sure that it goes to “problems society recognizes as central”, and the way to do this, naturally, was to have large research collaborations and consortia. These would presumably be put together by committees, commissions, and various far-seeing agencies staffed by the sorts of experts who spring up whenever the money starts to sprinkle down. I can just hear the Third Organization Meeting of the Steering Committee starting up right now, the chairman reminding everyone that they have a very full schedule today, please take your seats for our first speaker on "The Challenges and Opportunities of Interdisciplinary Research Management in a Multipolar World". . .

I grit my teeth when I think about this sort of thing; it's enough to make a man wish he'd gone to truck-driving school instead. So I particularly enjoyed a letter that the journal printed in response, from Theo Wallimann at the ETH in Zurich. He points out that nearly every single significant discovery in the history of science has come outside the framework of such top-down research consortia. Single researchers or small groups pursuing their own ideas have been the source of the good stuff, and half the time these breakthroughs haven’t even been what people were looking for in the first place. Says Wallimann about the big multicenter operations:

“. . . These mostly involve laboratories that have already established their name and fame, and are now often comfortably operating on well-worn tracks or working opportunistically on headline-grabbing problems or fashionable topics.

Science and innovation are chaotic, stochastic processes that cannot be governed and controlled by desk-bound planners and politicians, whatever their intentions. Good scientists are by definition anarchists,”

I can only cheer him on, because I couldn’t do a better job of summing up what I believe about science myself. In tribute, I’m going to go out to my lab and try something anarchic: an experiment that’s very interesting, but has very little chance of succeeding. If the EU really wants to tell its scientists what to do, they would be better off mandating six months of the same.

Comments (47) + TrackBacks (0) | Category: Who Discovers and Why

May 27, 2008

An Eye For the Numbers

Email This Entry

Posted by Derek

My wife and I were talking over dinner the other night. She’d seen an interview with the owner of a personal data protection service, who made the pitch for his company by saying something about how, out of (say) a million customers, only one hundred had ever reported any attempts on their credit information or the like. And my wife, who spent many years in the lab, waited for what seemed to her the obvious follow-up question: how many people out of a million who didn’t subscribe to this guy’s service report such problems?

But (to her frustration) that question was never asked. We speculated about the reasons for that, partly out of interest and partly as a learning experience for our two children, who were at the table with us. We first explained to them that both of us, since we’d done a lot of scientific experiments, always wanted to see some control-group data before we made up our minds about anything – and in fact, in many cases it was impossible to make up one’s mind without it.
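Her question reduces to a base-rate comparison, which a few lines of arithmetic make concrete. (All numbers below are invented for illustration; the interview gave no figure for non-subscribers, which was exactly the problem.)

```python
# Toy illustration of why control-group data matter: "100 incidents per
# million customers" sounds reassuring, but it means nothing until you
# know the rate among people who never bought the service.

def incidents_per_million(incidents, population):
    """Normalize a raw incident count to a rate per million people."""
    return incidents / population * 1_000_000

# The pitch: 100 reported incidents among 1,000,000 subscribers.
subscriber_rate = incidents_per_million(100, 1_000_000)

# Hypothetical control group: suppose 80 incidents per 1,000,000
# non-subscribers. With a LOWER control rate, the service looks
# useless or worse, despite the impressive-sounding pitch.
control_rate = incidents_per_million(80, 1_000_000)

print(f"subscribers: {subscriber_rate:.0f}/M, control: {control_rate:.0f}/M")
print("service helps" if subscriber_rate < control_rate else "no evidence of benefit")
```

The point isn't the particular numbers, which are made up; it's that the comparison is undefined until both rates are on the table.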

After a brief excursion to talk about the likely backgrounds and competencies of news readers on TV, we then went on to say that looking for a control set isn’t what you could call a universal habit of mind, although it's a useful one to have. You don’t have to have scientific training to think that way (although it sure helps), but anyone with a good eye for business and finance asks similar questions. And as we told the kids, both of us had also seen (on the flip side) particularly lousy scientists who kept charging ahead without good controls. Still, the overlap with a science and engineering background is pretty good.

What I’ve wondered since that night is how many people, watching that same show, had the same question. That would be a reasonable way to determine how many of them have the first qualification for analyzing the data that come their way. And I’m just not sure what the percentage would be, for several reasons. For one thing, I’ve been working in the lab for years now, so such thinking is second nature to me. And for another, I’ve been surrounded for an equal number of years by colleagues and friends who tend to have science backgrounds themselves, so my data set is hardly representative of the population at large.

So I’d be interested in what the readership thinks, not that the readership around here is any representative slice of the general population, either. But in your experience, how prevalent do you think that analytical frame of mind is? The attitude I’m talking about is the one that when confronted with some odd item in the news, says “Hmm, I wonder if that's true? Have I got enough information to decide?" It's an essential part of being a scientist, but if you're not. . .?

Comments (32) + TrackBacks (0) | Category: General Scientific News | Who Discovers and Why

May 1, 2008

O Pioneers!

Email This Entry

Posted by Derek

Drug Discovery Today has the first part of an article on the history of the molecular modeling field, this one covering about 1960 to 1990. It’s a for-the-record document, since as time goes on it’ll be increasingly hard to unscramble all the early approaches and players. I think this is true for almost any technology; the early years are tangled indeed.

As you would imagine, the work from the 1960s and 1970s has an otherworldly feel to it, considering the hardware that was available. And that brings up another thing common to the early years of new technologies: when you look back on them from their later years, you wonder how these people could possibly have even tried to do these things.

I mean, you read about, say, Richard Cramer establishing the computer-aided drug design program at Smith, Kline and French in nineteen-flipping-seventy-one, and on one level you feel like congratulating his group for their farsightedness. But mainly you just feel like saying “Oh, you poor people. I am so sorry.” Because from today's perspective, there is just no way that anyone could have done any meaningful molecular modeling for drug design in 1971. I mean, we have enough trouble doing it for a lot of projects in 2008.

Think about it: big ol’ IBM mainframe, with those tape drives that for many years were visual shorthand for Computer System but now look closer to steam engines and water wheels. Punch cards: riffling stacks of them, and whole mechanical devices with arrays of rods to make and troubleshoot stiff pieces of paper with holes in them. And the software – written in what, FORTRAN? If they were lucky. And written in a time when people were just starting to say, well, yes, I suppose that you could, in fact, represent attractive and repulsive molecular forces in terms that could be used by a computer program. . .hmm, let’s see about hydrogen bonds, then. . .

It gives a person the shudders. But that must be inevitable – you get the same feeling when you see an early TV set and wonder how anyone could have derived entertainment from a fuzzy four-inch-wide grey screen. Or see the earliest automobiles, which look to have been quite a bit more trouble than a horse. How do people persevere?

Well, for one thing, by knowing that they’re the first. Even if technology isn’t what you might dream of it being some day, you’re still the one out on the cutting edge, with what could be the best in the world as it is. They also do it by not being able to know just what the limits to their capabilities are, not having the benefit of decades of hindsight. The molecular modelers of the early 1970s did not, I’m sure, see themselves as tentatively exploring something that would probably be of no use for years to come. They must have thought that there was something good just waiting right there to be done with the technology they had (which was, as just mentioned, the best ever seen). They may well have been wrong about that, but who was to know until it was tried?

And all of this – the realizations that there’s something new in the world, that there are new things that can be done with it, and (later) that there’s more to it (both its possibilities and difficulties) than was first apparent – all of this comes on gradually. If it were to hit you all at once, you’d be paralyzed with indecision. But the gap in the trees turns into a trail, and then into a dirt path before you feel the gravel under your feet, speeding up before you realize that you’re driving down a huge highway that branches off to destinations you didn’t even know existed.

People are seeing their way through to some of those narrow footpaths right now, no doubt. With any luck, in another thirty years people will look back and pity them for what they didn’t and couldn’t know. But the people doing it today don’t feel worthy of pity at all – some of them probably feel as if they’re the luckiest people alive. . .

Comments (8) + TrackBacks (0) | Category: Drug Industry History | In Silico | Who Discovers and Why

January 8, 2008

Rainbows and Fishing Expeditions

Email This Entry

Posted by Derek

I came across a neat article in Nature from a group working on a new technique in neuroscience imaging. They expressed an array of four differently colored fluorescent proteins in developing neurons in vivo, and placed them so that recombination events would scramble the relative expression of the multiple transgenes as the cell population expands. That leads to what they’re calling a “brainbow”: a striking array of about a hundred different shades of fluorescent neurons, tangled into what looks like a close-up of a Seurat painting.

The good part is that the entire neuron fluoresces, not just a particular structure inside it. Being able to see all those axons opens up the possibility of tracking how the cells interact in the developing brain – where synapses form and when. That should keep everyone in this research group occupied for a good long while.

What I particularly enjoyed, though, was the attitude of the lab head, Jeff Lichtman of Harvard. He states that he doesn’t really know exactly what they’re looking for, but that this technique will allow them to just sit back and see what there is to see. That’s a scientific mode with a long history, basically good old Francis Bacon-style induction, but we don’t actually get a chance to do it as much as you’d think.

That varies by the area under investigation. In general, the more complex and poorly understood the object of study, the more appropriate it is to sit back and take notes, rather than go in trying to prove some particular hypothesis. (Neuroscience, then, is a natural!) In a chemistry setting, though, I wouldn’t recommend setting up five thousand sulfonamide formations just to see what happens, because we already have a pretty good idea of what’ll happen. But if you’re working on new metal-catalyzed reactions, a big screen of every variety of metal complex you can find might not be such a bad idea, if you’ve got the time and material. There’s a lot that we don’t know about those things, and you could come across an interesting lead.

Some people get uncomfortable with “fishing expedition” work like this, though. In the med-chem labs, I’ve seen some fishy glances directed at people who just made a bunch of compounds in a series because no one else had made them and they just wanted to see what would happen. While I agree that you don’t want to run a whole project like that, I think that the suspicion is often misplaced, considering how many projects start from high-throughput screening. We don’t, a priori, usually have any good idea of what molecules should bind to a new drug target. Going in with an advanced hypothesis-driven approach often isn’t as productive as just saying “OK, let’s run everything we’ve got past the thing, see what sticks, and take it from there”.

But the feeling seems to be that a drug project (and its team members) should somehow outgrow the random approach as more knowledge comes in. Ideally, that would be the case. I’m not convinced, though, that enough med-chem projects generate enough detailed knowledge about what will work and what won’t to be able to do that. (There’s no percentage in beating against structural trends that you have evidence for, but trying out things that no one’s tried yet is another story). It’s true that a project has to narrow down in order to deliver a lead compound to the clinic, but getting to the narrowing-down stage doesn’t have to be (and usually isn’t) a very orderly process.

Comments (8) + TrackBacks (0) | Category: Biological News | Drug Development | The Central Nervous System | Who Discovers and Why

December 21, 2007

Winterize Your Ideas

Email This Entry

Posted by Derek

It’s time, across most of the drug industry, for people to prepare their labs for a few days off. Some companies officially close between Christmas and New Year’s. At the others, you’ll find about 20% occupancy, and those people will likely as not be taking advantage of the time to shovel stuff out of their offices. Not much drug discovery lab work gets done in the last week of December, I can tell you.

I’ve written before about how I used to leave my lab space in what I thought was good shape, only to come back after the break and find that I’d labeled flasks with helpful legends such as “Large Batch” or “2nd Run”. And every January, there I’d be, looking at some tan-colored stuff and thinking “Hmm. Second run of what, exactly?” I could usually work it out, but a couple of times over the years I’ve had to run NMR or mass spectra just to figure out what I was getting at.

So, make sure your stuff is labeled with something more intelligent, is my advice. And even more importantly, make notes to remember lines of research, and plans of what to do. It’s easy to lose the thread after being off for a while. This isn’t always bad – one of the good things about a break is that you lose the threads of a few things that are well lost. But it’s a good idea to write down what’s in progress, what you plan to do about it, and what you’re going to try to do next.

I’m convinced that a lot of good ideas get lost. They're not followed up on, they're forgotten, or they're buried under later duties. I've been trying to keep that from happening, which is one reason I was asking about literature and note-organizing software a while ago (more on that in January). One of my tasks today is making sure that all the current thoughts I have are battened down for the season. As usual, it'll probably turn out that some of the things I'm doing now would be well replaced by some of the things I've just been thinking about.

Comments (9) + TrackBacks (0) | Category: Life in the Drug Labs | Who Discovers and Why

December 3, 2007

Exciting Nonsense Wins Another One

Email This Entry

Posted by Derek

The lessons of the recent pyridinium follies are old ones. We’re going to have to relearn them again and again – doomed, if you like. That’s because as scientists we’re pulled toward two opposite sorts of error when it comes to new ideas, and because in science, everything comes down to new ideas. We’ll have these problems with us always.

The first error, for which the recent retracted papers are the latest posters on top of a thick, stapled stack, is to become too infatuated with one’s own ideas. It’s a very easy emotion to yield to. To use an unexpectedly R-rated metaphor, it’s the intellectual equivalent of sexual excitement. Under either influence, potentially dangerous decisions and courses of action can begin to seem reasonable and natural, in contrast to how they might appear in less agitated states of mind. Objections, even quite real and forceful ones, are swept aside as trivial, fit to deal with later after the important business at hand has been concluded.

The problem is, the best scientific ideas induce this state of mind, and in proportion to their scope. I’ve been hit by a few of these, at my own level, and it’s difficult enough. Think about what goes on up in the heights! Can you imagine what it must have been like for James Clerk Maxwell to tie all of electromagnetism up into a perfectly wrapped gift box with three bows on it? Or for Watson and Crick, looking at their DNA model when they were the only two who’d seen it? That intense joy of discovery, of being right, causes people to behave in strange ways. But it’s one of the driving engines of science and always will be.

By the standards of the great discoveries, these latest cases are trivial – as is most work by most scientists, and all of mine, I hasten to add. But the same principles apply. You look at these things and think “Why didn’t they look into known pyridinium chemistry more? Spend some extra time in the library? Some of those salts are surely crystalline – why didn’t they get an X-ray structure as soon as possible?” All perfectly good questions, from outside, and in retrospect. But any of us could end up brushing aside similarly good questions about our own work, and we shouldn’t forget it.

Now for the other error. The excitement of a new idea has a flip side: the depressed (and depressing) feeling that it must have been done before. Surely this can’t be as good as it seems, otherwise it would be known, right? Most new ideas die. Actually, punishingly near all the new ideas in science die, and most of them die quickly. This spectacle horrifies and numbs many scientists, especially if they have sensitive or fearful natures, and causes them to keep their heads down. No breakthrough, no cry.

If you stay in this mindset long enough, the problem takes care of itself: you’ll train yourself to no longer have many new ideas at all, and you need not face the prospect of watching what happens to them. Unusual, potentially interesting things may happen to your experiments, but you won’t be fooled: into the red waste can they’ll go, along with all the other stuff that didn’t give you what you wanted. Nobel prizes have been poured into red waste cans.

Transportation metaphors are safer than copulatory ones. Discovery, then, is a road with ditches on both sides of it, and the hard part is steering between them. Too much optimism and you go whooping off after junk – or worse, catching it and publishing it after writing your name all over it. Too much pessimism, though, and you never accomplish anything at all. I’ve got mud from both sides of the road on my lab coat – how about you?

Comments (12) + TrackBacks (0) | Category: The Scientific Literature | Who Discovers and Why

November 15, 2007

And Speaking of Discovering Things. . .

Posted by Derek

After extolling the joys of finding things out in the post directly below, I couldn't resist linking to this story for those who haven't seen it. Now, this guy is really out there on the edge, and I wish him well with his theory (available here on the arXiv for the mathematically inclined). What I especially like is that he's ready to make some testable predictions.

You know, when Feynman met Dirac, the first thing he mentioned to him was how wonderful it must have been to discover the equation that bears his name. If Garrett Lisi's theory can predict particles out of thin air the way Dirac called the positron, he'll be remembered the same way. Good luck to him, and to those like him.

Comments (7) + TrackBacks (0) | Category: General Scientific News | Who Discovers and Why

Maybe Not Improved, But Definitely New

Posted by Derek

My lab and I have plans to start experimenting with several compound classes that we’ve never handled before. In fact, for some of these, no one’s handled them before. Some of these are not only novel as in patentable, for which fairly small changes can suffice, but novel as in what-the-heck-is-that. I couldn’t be happier.

Honestly, I have no idea of what I’d do with a job where I knew what was going to happen next. Years of science have ruined me for a lot of other occupations. I was putting some of these up on the board the other day, and mentioning what I’d like to try. “Do you know if you can do that?” someone asked, and I answered that no, I didn’t, and as far as I could tell, no one else did, either. I can draw out a bunch of reasonable-looking reactions, but the structures themselves may well have other ideas.

The first time I realized that I was in new territory, although to a much lesser degree, was back in my first year of graduate school. My first few reactions generated things that were already known in the group, naturally, and then I made some model systems that were already known in the literature. But pretty soon I remember making a compound that I realized just flat-out wasn’t in Chemical Abstracts, because no one had ever had the need to make it before. (As far as I know, no one’s had any need to make the stuff again, either – if someone has, I hope they got more use out of it than I did!) But there it was, in a flask: something that had never existed before.

My list of such compounds is now rather lengthy. In the drug industry, naturally, we spend just about all our time making compounds that haven’t existed before. (If they’ve been exemplified somewhere, you can forget about a patent on the chemical matter itself). Our livelihoods depend on cranking out thousands upon thousands of compounds that no one else has made. I haven’t seen the figures, but I’d guess that a large fraction of the new small organic molecules that get registered every year in Chemical Abstracts are from pharma. Those patents with the three-hundred-page experimental sections do start to add up.

This latest stuff, though, goes a few steps beyond that, to whole compound classes that no one’s touched yet. I may well find that there’s a whole set of very solid reasons why these things haven’t appeared in the literature – perhaps these reasonable reactions of mine have been tried in recent years, but found only to produce more of that gooey dark stuff in the bottom of the flask. We shall see. I’ve certainly made my share of that material.

But I doubt that all of them are in that category. So with any luck, soon I’ll be making something no one’s ever made, and finding things out about it that no one’s ever discovered. And as I said, I couldn’t be happier about that.

Comments (9) + TrackBacks (0) | Category: Life in the Drug Labs | Who Discovers and Why

November 9, 2007

One Year

Posted by Derek

I was reminded yesterday that today is the one-year anniversary of the day that we found out that the Wonder Drug Factory was being closed down. I remember that presentation rather well. I was one of the more optimistic ones, thinking until the last that we had about a 50/50 chance of the ax, but by the time the meeting began everyone had heard what was really coming.

Unpleasant, that was, and it did extend a cloud over the following holiday season. The job-searching period that followed wasn't anything I'm looking to relive, either, although my severance pay kept it from being anywhere near as bad as it could have been. And in the end, things worked out well. I thought they would, but as my wife pointed out to me at the time, I generally think that things will work out well, so that isn't as good an indicator as it might otherwise be.

But the whole thing was a useful reminder: no one's sitting back in a comfortable chair in this industry. You're riding a wild animal, instead. Working at a smaller company makes it easier to remember that, as many people here around the Boston/Cambridge area know, but there's no drug company so large or so profitable that it can make any guarantees to anyone. Patents expire, companies get taken over, drugs drop out of clinical trials or get pulled off the market.

But on the flip side, discoveries get made. Things make it through trials even though no one thought they might. New ideas get tried out, and given how little we know, just about anything has a chance of improving our lot in research. That's the thing about science: we don't have to be stuck where we are; we can invent doors and walk out of them into something new.

Comments (4) + TrackBacks (0) | Category: Drug Industry History | Who Discovers and Why

October 16, 2007

Three Things You Need

Posted by Derek

Scientists who’ve spent a lot of time in research labs will have noticed that self-confidence seems to pay big dividends. If you think back to the people you’ve worked with who got the most things to work (especially the difficult things), you’ll likely also recall them as people who set up experiments relatively fearlessly, with expectations of success. Meanwhile, the tentative, gosh-I’m-just-not-sure folks generally compiled lesser records.

You can draw several conclusions from these observations, but not all of them are correct. For example, a first-order explanation would be to assume that experiments can sense the degree of confidence with which they’re approached and can adjust their outcomes accordingly. And sometimes I’m tempted to believe it. I’ve seen a few ring systems that seem to have been able to sense weakness or fear, that’s for sure, and other molecular frameworks that appear to have been possessed by malign spirits which were just waiting for the right moments to pounce.

But besides being nuts, this explanation is complicated by the small (but statistically significant) number of confident fools who think everything they touch will work, no matter how ridiculous. These people tend to wash out of the field, for obvious reasons, but there's a constant trickle of them coming in, so you're never without a few of them. If self-assurance were all you needed, though, they'd be a lot more successful than they are.

No, I think that confidence is necessary, but not sufficient. Brains and skilled hands are big factors, too – but they aren’t sufficient by themselves, either. You need all three. Most people have them in varying quantities, of course, but you can learn a lot by looking at the extreme cases.

For example, I’ve seen some meticulous experimenters, not fools, who were undone in the lab by their lack of the confidence leg of the tripod. Their tentative natures led them to set up endless tiny series of test reactions, careful inch-at-a-time extensions into the unknown. This sort of style will yield results, although not as quickly as onlookers would like, but will probably never yield anything large or startling. Still, you can hold down a job with this combination, which is more than I can see for the next category.

Those are the confident fools mentioned earlier, who lack the brains part of the triad. They get involved in no-hope reactions (and whole no-hope lines of research) because they lack the intelligence to see the fix they’ve gotten themselves into. The whole time they essay, with reasonable technical competence and all kinds of high hopes, experiments which are doomed. As I said above, these people don’t necessarily have such long careers, but in the worst cases they can pull others of similar bent in their wake (while their more perspicacious co-workers leave, if possible, when they catch on to what’s happening).

Then there are the folks who lack the skilled hands. “Lab heads,” I can hear a chorus of voices say. “These are the people who become lab heads and PhD advisors.” There’s a lot of truth to that. Plenty of people can have good, bold ideas, but be incapable of physically carrying them out at the bench. Even controlling for age and lack of experience, there are plenty of Nobel-caliber people you wouldn’t want near your lab bench. Some of them are out of practice, but many of them were just as destructive when they were younger, too. Surrounded with good technicians, though, they can do great things. Many just face facts and confine themselves to the blackboard and the computer screen.

But if you have reasonable amounts of all three qualities, you're set up to do well. Confidence is perhaps the limiting reagent in most natures, which is why it stands out so much when it's combined with the others. A scientist with a lot of nerve is more likely to discover something big, and more likely to recognize it when it comes, than someone who undervalues their own abilities. They're more prone to setting up weird and difficult experiments, knowing that the chances of success aren't high, but that sometimes these things actually come through. That's probably the source of the correlation I led off this post with: it's not that confidence makes these ideas work. Rather, if you don't have it you probably don't try many such things in the first place.

Comments (19) + TrackBacks (0) | Category: Who Discovers and Why

October 2, 2007

Why Now, And Not Before?

Posted by Derek

Talking as I was the other day about flow chemistry makes me think again of a topic that I find interesting, perhaps because it’s so difficult to refute anything: the counterfactual history of science. It’s a bit perverse of me, because one of the things I like about the hard sciences is how arguments can actually be settled (well, at least until new data come along that upset everything equally).

But here goes: if flow chemistry does catch on to become a widely accepted technique – and it may well deserve to – then what will have taken it so long? None of the equipment being used, as far as I can see, would have kept this all from happening twenty-five years ago or more. Some pumps, some tubing, a valve or two, and you’re off, at least for the simplest cases. Some guy at Pfizer published a home-made rig that many people could assemble from parts sitting around their labs. So why didn’t they?

Easier to answer is why flow chemistry didn’t become the default mode of organic synthesis. The requirement for pumps and pressure fittings made it unlikely to be taken up back in the days before HPLC systems were so common. Something could have been rigged up even a hundred years ago, but it would have been quite an undertaking, and unlikely to have caught on compared to the ease of running a batch of stuff in a flask.

But since the 1970s, the necessary equipment has been sitting around all over the place, so we get back to the question of why it's finally such a hot topic here in 2007. (And a hot topic it surely is: the other day, Novartis announced that they're handing MIT (just down the road from me) a whole bucket of money to work out technology for process-sized flow reactors).

My guess is that some of it has been the feeling, among anyone who had such ideas years ago, that surely someone must have tried this stuff out at some point. That's an inhibitory effect on all sorts of inventions, the feeling that there must be a reason why no one's done it before. That's not a thought to be dismissed - I mean, sometimes there is a good reason - but it's not a thought that should make the decision for you.

There’s also the possibility that some of the people who might have thought about the idea didn’t see it to its best advantage. The ability to have high temperatures and pressures in a comparatively small part of the apparatus is a real help, but if you’re thinking mostly of room-temperature stuff you might not appreciate that. Ditto for the idea of solid-supported reagents (which, in its general non-flow form, is another idea that took a lot longer to get going than you might have thought).

And there's always the fear of looking ridiculous. Never underestimate that one. Microwave reactions, remember, got the same reception at first, and that must have gone double for the first people who home-brewed the apparatus: "You're running your coupling reaction in a what?" I can imagine the rolling eyes if some grad student had had the flow chemistry idea back in the 1980s and started sticking together discarded HPLC equipment and hot plates to run their reactions in. . .

Comments (7) + TrackBacks (0) | Category: Who Discovers and Why

July 16, 2007

European Drugs, American Drugs

Posted by Derek

I don't know how many people here in the US have noticed, but the European Community is getting worried about how well its member countries are doing in drug research. Their Pharmaceutical Forum group has met twice so far, trying to recommend changes in drug pricing, rewards for innovation, information transfer to patients, and other areas.

I'll let one of the co-chairmen, Guenter Verheugen, explain the problem:

". . .The time has passed that Europe was the pharmacy of the world. True, our industry still has an inherent strength. But we are losing competitive ground to the United States and, increasingly, to China, India, Singapore and others. There are many worrying signals. Let me mention just two:

First, the widening gap in pharmaceutical research: Over the last 15 years investment in pharmaceutical R&D has been growing in the US significantly and consistently faster than in Europe.

Second, the development of key medicines: In the past, Europe was leading in developing the most successful breakthrough pharmaceuticals. This trend has reversed. In 2004, two thirds of the 30 top selling medicines in the world were developed in the USA."

All of the things the group is looking at seem worthwhile. But I wonder how many of them will do anything to actually change that trend? Phrases like "fair reward for innovation" and "alternative pricing and reimbursement mechanisms" point to one that might. These seem to be carefully worded calls to let the drug companies make a bit more money, in the hopes that they might find it worthwhile to make some more drugs.

That's bound to help. It's true that the United States market is where the money is made in this business, and it can't be a coincidence that this is where a lot of the innovation is coming from. But you can always develop a drug in Europe and sell it in the US, right? No, I think that there are other factors at work, cultural ones that no high-level multinational task force is going to pin down.

Perhaps I think this way because I used to work for a European company, and now work in Cambridge (home of a zillion startups). But I've long thought that there's a different attitude to research and development in this country, a greater willingness to try odd ideas and to put money behind them. I'm not saying that you don't find innovation in Europe, because you certainly can. But I think that innovators have, on the average, an easier time getting funded and being taken seriously over here. It's not a huge difference, but it's a steady one, and it's been compounding over time.

Comments (21) + TrackBacks (0) | Category: Business and Markets | Drug Prices | Who Discovers and Why

May 20, 2007

Little, Big

Posted by Derek

I became entangled in a discussion - well, OK, argument - in another blog's comment section a few years ago, and the person I was having it out with said something that stopped my fingers cold right on the keyboard. I'm not sure any more how this came up - probably something Platonist about whether physical laws were discovered, or invented - but the comment was made that well, of course these things had to be invented, because "there's only people, and what they do".

I felt as if I had encountered alien life. It would be hard to find a statement further away from what I believe, and I'm pretty sure that being a scientist has something to do with that. I mentioned in that Nature blog interview that when I was a boy I used to spend a lot of time with the microscope and telescope: well, through both of those instruments you can observe a lot of things that have nothing to do with humans at all. It's a useful perspective, and how my sparring partner could have missed experiencing it, I just can't imagine.

I mention this because I've had both instruments out recently. The past few days I've been showing a lot of microscopic life forms to my kids (and using the same microscope I used 35 years ago to do it - it's an old, rock-solid Bausch & Lomb). The rotifers and Vorticella look exactly the way they did when I was ten years old; they've been at it all this time with no help from me. (And does that stream across the road have a lot of Synura in it, or what?)

And at night, I've been taking advantage of some clear skies and lack of moonlight over the last week or ten days, hunting through the shoals of spring galaxies in Virgo and Coma Berenices. They look the same way they did when I was ten, too; nothing less likely to be disturbed by human activity comes to my mind just now.

All this is one of the things I like about science. So much of what we see and study is indifferent to human concerns. In the chemistry lab, I can (and will) do what I can to get a reaction to work, but in the end, the molecules are going to do what they do and they're not going to consult me. That goes at least double for later in the game when compounds are tried out in mice and rats. The moving finger (moving paw?) writes at that point, and there's nothing you can do about it if the experiment was the correct one, done correctly.

All this takes you down a peg, which isn't a bad thing. People get rather inflated views of themselves, their ideas, and the importance of both. M100 and the rotifers don't care. (And to show this whole post can be served up in light verse, let me recommend "Canopus"!)

Comments (16) + TrackBacks (0) | Category: Who Discovers and Why

April 17, 2007

The Doctorate and Its Discontents

Posted by Derek

The doctorate-or-not discussion is roaring along in the comments to the last post, and they're well worth reading. I have a few more thoughts on the subject myself, but I'm going to turn off comments to this post and ask people to continue to add to the previous ones.

One thing that seems clear to a lot of people is that too many chemists get PhD degrees. I'm not talking about the effect of this on the job market (more on that in a bit) so much as its effect on what a PhD is supposed to represent. So, here's my take on what a PhD scientist is supposed to be, and what it actually is in the real world. I'm going to be speaking from an industrial perspective here, rather than an academic one, although many of the points are the same.

Ideally, someone with a doctorate in chemistry is supposed to be able to do competent independent research, with enough discipline, motivation, and creativity to see such projects through. In an industrial applied-research setting, a PhD may initiate fewer projects strictly from their own ideas, but they should (1) always be on the lookout for the chance to do so, (2) be willing and able to when the opportunity arises, and (3) add substantial value even to those projects that they themselves didn't start.

That value is both creative and managerial - they're supposed to provide ideas and insights, and they're supposed to be able to use and build on those of others. They should be able to converse productively with their colleagues from other disciplines, which means both understanding what they're talking about and being able to communicate their own issues to them. Many of these qualities are shared with higher-performing associate researchers, who will typically have a more limited scope of action but can (and should) be creative in their own areas. Every research program is full of problems, and every scientist involved should take on the ones appropriate to their abilities.

So much for the ideal. In reality, many PhD degrees are (as a comment to the previous post said) a reward for perseverance. If you hang around most chemistry departments long enough as a graduate student, you will eventually be given a PhD and moved out the door. I've seen this happen in front of my eyes, and I've seen (and worked with) some of the end results of the system. The quality of the people that emerge is highly variable, consistent with the variation in the quality of the departments and the professors. Unfortunately, it's also consistent with the quality of the students. But it shouldn't be. The range of that variable shouldn't be as wide as it is.

There are huge numbers of chemistry PhDs who really don't meet the qualifications of the degree. Everyone with any experience in the field knows this, from personal observation. You will, I think, find proportionally more of these people coming out of the lower-quality departments, but a degree from a big-name one is still far from a guarantee. The lesser PhD candidates should have been encouraged to go forth and get a Master's, or simply to go forth and do something else with their lives. They aren't, though. They're turned loose on the job market, where many of them gradually and painfully find that they've been swindled.

Over time, the lowest end of the PhD cohort tends to wash out of the field entirely. There are, to be sure, many holders of doctoral degrees in chemistry who go into other areas because of their own interests and abilities. But there are also many jobs that make an outside observer wonder why someone with a PhD is doing them, and that's where many people end up who shouldn't have a doctorate in the first place. Others, somewhat more competent, hold on to positions because they're able to do enough to survive in them, if no more. While there are plenty of bad or irrelevant reasons for people not to be promoted over the years, some cases aren't so hard to figure out.

Those, then, are my thoughts on the doctoral degree. What can be done about this situation, if anything, will be the subject of a future post. I have another set of opinions on the Master's degree and its holders, which I'll unburden myself of a bit later on. Comments, as mentioned, should go into the discussion here.

Comments (0) + TrackBacks (0) | Category: Academia (vs. Industry) | Graduate School | Life in the Drug Labs | Who Discovers and Why

April 13, 2007

Deep Breaths

Posted by Derek

I've been out of the research labs for over two months now, and you know what I miss the most? No, not the safety meetings (hah!) or the smell of the solvents - what I miss is getting fresh data on experiments. Waiting for results on something crucial is hard to take, but it's also exciting, and there's nothing I've found outside of science that compares.

I've sat at my desk holding a warm printout from an LC/MS, or with a newly arrived e-mail from the biologists, and I swear, I've closed my eyes for a moment before I've looked at them. That's the last moment of not knowing; after that you're living in the new world that the experiment made. I don't know what I'd do with a job that didn't have that feeling in it, and honestly, that's one reason I'm still looking.

It occurs at all sorts of levels - checking the NMR to see if your reaction worked or not, waiting for the PK results to see if your idea raised the blood levels, holding your breath when the compound goes into two-week tox testing. And beyond that things get really terrifying, when human data start coming in from the clinic.

Ask Vertex. I wrote here about their antiviral compound (telaprevir, VX-950) for hepatitis. It's a huge market that really needs a better drug, and a lot of people have taken swings at it. Well, on Saturday night in Barcelona, the company is presenting their latest clinical data, and investors are checking their heart rates. The drug's success would be the biggest event in the history of the company (and a huge advance in hepatitis therapy), and failure (the antiviral norm, unfortunately) would be very, very hard to take.

The company's top clinicians already know the answer, of course, because a person's got to have time to make slides. They've had the experience I was talking about, on a scale that few people have ever felt. You click a button, turn a page, and the future writes itself out there in front of you. . .

Comments (10) + TrackBacks (0) | Category: Clinical Trials | Infectious Diseases | Who Discovers and Why

April 2, 2007

Failure: Not Your Friend, But Definitely Your Companion

Posted by Derek

Here's something that you don't see discussed very often, but it's worth some thought: what kind of personality do you need to have to do drug discovery research? Clearly, any conclusions are going to carry over well to other fields, but drug work has some peculiarities that can't be ignored.

The most obvious one is that the huge, horrible, overwhelming majority of projects never lead to a marketed drug. Many readers will have seen the sobering statistics of 85 to 95% failure rates in the clinic, but (bad as that is) it doesn't get across the number of times that projects get nowhere near the clinic at all. Take it from the top: the majority of targets that are screened for chemical matter don't turn up anything useful (it's not even close). The majority of the ones that do still die on their way to clinical trials. And then a solid 90% of those don't make it to market.
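The way these stage-by-stage losses compound is easy to underestimate, so here's a quick sketch. The stage probabilities below are illustrative assumptions chosen only to show the arithmetic, not figures from this post; real numbers vary widely by target, therapeutic area, and company.

```python
# Hypothetical stage-success rates (assumptions for illustration only):
# what fraction of projects survive each hurdle described above.
stages = [
    ("screen yields useful chemical matter", 0.4),
    ("survives to clinical trials",          0.3),
    ("makes it through the clinic",          0.1),  # the ~90% clinical failure rate
]

overall = 1.0
for name, p in stages:
    overall *= p
    print(f"{name}: {p:.0%} of survivors (cumulative {overall:.1%})")

# Even with these generous guesses, only ~1.2% of projects
# that start screening end up as a marketed drug.
```

Multiplying even moderately grim per-stage odds gives a strikingly small overall number, which is the point of the paragraph above: almost nobody's project reaches the pharmacy shelf.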

So, if you define yourself as a success by whether or not you've put something on a pharmacy shelf, you've set a very high bar, one that many people in basic research don't reach. It's different for people further down the line, where the field has already narrowed. But if you're working on early med-chem, for example, you're likely to go years between realistic shots at a drug you can claim part of the credit for.

That'll vary by your company's culture, too. Some companies bang out projects like a sawmill spitting out boards - or try to, anyway - while others carefully take their time for years and years. There's no certain advantage to either method, as far as I can see (else the companies doing the best one would have taken over by now and driven other modes out of existence). But you'll certainly have more shots on goal at the first type of company, which might keep your spirits up. Of course, the fact that you're largely going to be getting more chances to fail in the clinic might just depress them again, so you have to take that into account.

It'll also vary by therapeutic area. Central nervous system projects are going to run slower than oncology ones, by and large. In cancer, the clinical goals are comparatively clear, and the disease is often (and most terribly) progressing at a pace that gives you solid numbers in a reasonably short period. Contrast that to Alzheimer's disease, for example, whose ruinous clinical trials could take years to tell you anything useful. Cancer will also give you more shots per compound, since a drug that does zilch for pancreatic cancer (and most do just that) might be useful in the lung or liver. While what we call cancer is several hundred diseases, what we call Alzheimer's might only be one. Depression and schizophrenia are clearly more complicated and split up, but (as opposed to cancer) there's no easy way to tell how many types there are or which particular one a patient might be presenting with, so the clinical work is correspondingly more difficult.

So, this is the pharmaceutical world you're going to have to live in. If you take each drug project personally, as an indicator of your own worth, you're probably not going to make it. You'll be beaten down by the numbers. As an antidote, a bit of realistic fatalism is helpful, although too much of it will shade into ah-that'll-never-work cynicism, which is the ditch on the other side of the road from prideful optimism. I'd recommend learning to enjoy the upside surprises, and to not be surprised by the failures (while still looking them over to see if there's something you can avoid next time around). You really have to draw a line between the things you can affect through your own talent and hard work, and the things you can't. Most of the crucial stuff is in the second category. A sense of humor about your own abilities and limitations will serve you well. But that goes for a lot of other jobs besides the drug business, doesn't it?

Comments (15) + TrackBacks (0) | Category: Alzheimer's Disease | Cancer | Drug Development | The Central Nervous System | Who Discovers and Why

March 19, 2007

Scientists, All Over

Posted by Derek

Missed a day or two there - my apologies to the readership. I was out of town, up in Boston/Cambridge (and just in time for a fine March snowstorm). Every time I'm up there, I remember the first time I visited Cambridge, some years ago. I was walking along near Kendall Square and I began to notice that the visible number of thoughtful-looking bearded guys wearing oxford-cloth button-down shirts with the sleeves rolled up and khaki trousers (that is, to a first approximation looking exactly like me) had reached a level unknown to my experience.

It was a strange moment. I'm used to feeling a bit removed from my environment, so to suddenly blend in so thoroughly was a bit of a shock. The same feeling has hit me in just a few other places, mostly around large well-respected universities, and there have been a few isolated incidents elsewhere. I still recall talking with some other chemists at a conference once when one of the group made what is still the only casual conversational reference to Kurt Goedel I've ever encountered (well, other than the ones I've made myself, and I don't trot 'em out too often, for a lot of good reasons).

It can even happen at a distance. When my wife and I were watching the coverage of the first Mars Rover landing, broadcast from JPL in Pasadena, the way the people there talked about the project and the looks on their faces as they awaited word of a successful landing made me realize again that I am in fact part of a tribe, and that these people were members of it, too. We're scattered all over the world, but we know each other when we meet.

Comments (14) + TrackBacks (0) | Category: Who Discovers and Why

February 26, 2007

Hedgehogs in Stockholm

Posted by Derek

F. Albert Cotton's recent demise brings up a question that traditionally comes up in the fall, during Nobel season. Cotton himself never won the prize, although his name came up constantly in the list of contenders. There's a group of scientists (a select one) in every Nobel-bearing discipline that fills this role. Some of these people eventually get Nobel recognition, of course, and when that happens a good number of onlookers are relieved that ol' So-and-So finally got it, and another host are surprised, because they'd already sort of assumed that ol' So-and-So had received one years before.

But as time goes on, it seems to become clear that some eminent people are just not going to win, and I'd have to have put Cotton in that category. The Nobel committee had years in which to act on his behalf; they never did. The question then is why. Theories abound, some of them conspiratorial (and thus unprovable for another hundred years or so), but most trying to discern what makes some work Nobelish and some not.

One of the strongest arguments is that doing a lot of good work across several areas can hurt your chances. It seems to help the committee settle on candidates when there's a clear accomplishment in a relatively well-defined field to point at. Generalists and cross-functional types are surely at a disadvantage, unless they can adduce a Nobel-worthy accomplishment (or nearly) in one of their areas. That's not easy, given how rare work at that level gets done even when you've devoted all your time and efforts to one thing.

The current example in organic chemistry is George Whitesides at Harvard. He's an excellent chemist, and has had a lot of good ideas and a lot of interesting work come out of his group. But it's all over the place, which is something I really enjoy seeing, but the Nobel folks maybe not as much. Just look at this bio page from Harvard, and watch it attempt to pull all his various research activities under some sort of canopy. It isn't easy.

To drag the late Isaiah Berlin into it again, Whitesides clearly seems to be a fox rather than a hedgehog. Hedgehogs tend to be either spectacularly wrong or spectacularly right, and that last category smooths the path to greater formal recognition. For more on fox/hedgehog distinctions in other disciplines, see Daniel Drezner (international relations), Andrew Gelman (statistics), and Freeman Dyson (physics), and for an application of the concept to drug research, see here. Which sort of creature does Whitesides stock his research group with? Paul Bracher would know.

(Readers are invited in the comments to submit their own candidates for scientists who always seem to be on the Nobel list, but haven't won, and any alternate theories about why this happens).

Comments (23) + TrackBacks (0) | Category: Current Events | Who Discovers and Why

February 22, 2007

Inspirational Reading?

Email This Entry

Posted by Derek

An undergraduate reader sends along this request:

I was wondering if you had some recommended readings for a second year student, eg books that you have read and made a palpable impression on you when you were my age.

That's a good question, despite the beard-lengthening qualification of "when you were my age". The books that I would recommend aren't the sort that would require course material that a sophomore hasn't had yet, but rather take a wider view. I would recommend Francis Crick's What Mad Pursuit, for one. It's both a memoir of getting into research, and a set of recommendations on how to do it. Crick came from a not-very-promising background, and it's interesting to see how he ended up where he did.

Another author I'd recommend is Freeman Dyson. His essay collections such as Disturbing the Universe and Infinite in All Directions are well-stocked with good writing and good reading on the subject of science and how it's conducted. Dyson is a rare combination: a sensible, grounded visionary.

Another author to seek out is the late Peter Medawar, whose Advice to a Young Scientist is just the sort of thing. Pluto's Republic is also very good. He was a fine writer, whose style occasionally comes close to being too elegant for its own good, but it's nice to read a scientific Nobel prize winner who suffers from such problems.

I've often mentioned Robert Root-Bernstein's Discovering, an odd book about where scientific creativity comes from and whether it can be learned. I think the decision to write the book as a series of conversations between several unconvincing fictional characters comes close to making it unreadable in the normal sense, but the last chapter, summarizing various laws and recommendations for breakthrough discovery, is a wonderful resource.

Those are some of the ones that cover broad scientific topics. There are others that are more narrowly focused, which should be the topic of another post. And I'd also like to do a follow-up on books with no real scientific connection, but which are good additions to one's mental furniture. I have several in mind, but in all of these categories I'd like to throw the question open to the readership as well. I'll try to collect things into some reference posts when the dust eventually clears.

Comments (26) + TrackBacks (0) | Category: Book Recommendations | General Scientific News | Who Discovers and Why

August 6, 2006

Where Do They Come From?

Email This Entry

Posted by Derek

I've been re-reading the late Francis Crick's "What Mad Pursuit", as I do every so often, and something struck me about his career. Crick came out of nowhere. He was a disgruntled physicist with no particular training in biology, and as far as I can tell, no one (outside of a small circle of co-workers) had ever heard of him while he did the work that won him a Nobel Prize. Not only was he instrumental in working out the DNA structure with James Watson, he then went on to do a tremendous amount of work on the genetic code and RNA, an accomplishment easily worth a second Nobel. (This part of his career is the subject of a new book by Matt Ridley, just out last month, which I plan to read at the first opportunity).

Now, clearly, the man was in the right place at the right time. But (as he himself pointed out), the key was that he realized that while it was happening. There have been any number of scientists perfectly placed to make great discoveries who failed to realize the importance of what they were (or should be) doing. An equal number have had some idea of what the stakes were, but got bogged down in one sort of mistake or another and never reached the heights they could have.

Crick seems to have had a gift for recognizing important problems that had a chance of being solved. His advice for finding these and working through them seems to me to be extremely sound. Among his recommendations are to not put too much faith in your own negative hypotheses (reasons why your ideas won't work), to not be too quick to use Occam's Razor in biology (since evolution doesn't necessarily favor simple and beautiful solutions, just ones that work), and to not fall in love with a particular model or theory to the point where you care more for it (for its own sake) than whether it's really true.

So, I can't help but wonder: how many more unrecognized Francis Cricks are out there? How many will we ever hear of? Will a person of this level always find a way to be known, or was Thomas Gray right? "Full many a flower is born to blush unseen, and waste its sweetness on the desert air". I hope not, but I fear so.

Comments (20) + TrackBacks (0) | Category: Who Discovers and Why

July 4, 2006

Now With the Great Taste of Fish!

Email This Entry

Posted by Derek

When I was in grad school, I tested out some new-fangled separatory funnel idea that some small company was trying to launch. I can't locate a picture of one of the things, but it had a sort of piston/reservoir arrangement at the bottom, which let you draw the lower layer down and pour off the upper one. I tried it out some, and didn't find it any more convenient or effective than the good ol' standard model.

Even if it had been, so what? How much better could it have been? I'm not sure how much improvement there is to be had in the classic sep funnel design. Those things haven't been around for a century or two for nothing. What led this inventor to think that the world was waiting for him to fix this nonexistent problem, I don't know. And that's something that everyone who's trying to invent something needs to keep in mind: even if your brainchild works, will anyone want it?

I think that some innovative types miss this in their drive to get all the kinks out of their latest invention. It's easy to misdirect this sort of energy, particularly when all that hard work can be employed to keep you from dwelling on such disturbing questions. I'm not suggesting that people sell themselves and their ideas short - just that they think them through as much as they can. If you're not attacking a real problem, it's likely that no one is going to be interested.

If someone tried to sell me on a wonderful new gizmo to, say, spot my TLC plates for me, I don't think I'd be jumping up and down to try it out. A glass capillary, home-made or store-bought, works just fine, and I rarely have any cause to complain. And besides, it only takes a second or two. How much time and irritation could a new device save? On the other hand, I would be very interested in a fivefold-faster rota-vap, should some bright person figure out how to make one. Moving up the scale, if you have something that will allow me to predict (really predict) oral absorption for a new drug candidate, then you can name your own price.

But no one's offering me either one of those, as far as I know. So in the interim, remember: not everything new is improved, and not everything is improved enough to be worthwhile.

Comments (15) + TrackBacks (0) | Category: Who Discovers and Why

May 29, 2006

Ask Not

Email This Entry

Posted by Derek

I was struck by a point that came up in the comments to the last post: since discovery organizations are going to have a certain percentage of failures, why not use that as a measurement of whether or not they're doing their job? Perhaps there should be a "failure quota" - if too many things have worked, perhaps it's because you're playing things too safe.

It's an intriguing idea, but I can see a few potential problems. For one thing, you'd need to be able to distinguish between playing it too safe and being really good (or really lucky). For another, there are quite a few organizations that are spending all their time trying to play it as safe as possible. If your research budget is running a bit lean because you don't have that many good products out there, then you may not feel like taking many extra risks. In that situation, the whole phrase "too many things have worked" just doesn't even parse.

It would be useful, though, for drug discovery organizations of any type to be a bit more realistic about how many of their efforts are going to fail. I mean, everyone knows the statistics, but everyone pretends that it's not going to be their own project that goes down. This is wishful thinking. Clearly, most of the time it is going to be your own project, because most projects don't make it.

This isn't a license to give up. We should still do whatever we can think of to keep it from happening to our projects. But we shouldn't be amazed when our best efforts fail.

Comments (9) + TrackBacks (0) | Category: Who Discovers and Why

May 25, 2006

Too Big to Discover Anything

Email This Entry

Posted by Derek

Raymond Firestone is a retired medicinal chemist with a long and distinguished career, most recently at Bristol-Myers Squibb. He's never been very shy about speaking his mind, in person or in print, and it's nice to see that time has not mellowed him. A colleague, under the e-mail title of "Ray Firestone being Ray Firestone" pointed out a letter from him in a recent issue of Nature, in which he responds to the idea that the Bayer-Schering deal (and others like it) are necessary for innovation:

My experience, during 50 years' research in big pharma, is the opposite. Large companies are always inefficient because their command structure makes them so. Any organization with many layers, where power flows from the top down, works against innovation - look at the widely reported depletion of big-budget companies' pipelines.

The reason is that people in the middle layers, who neither control events nor engage in discovery, are too afraid to respond favorably to genuinely new ideas. If they encourage one and then it flops, as most innovations do, they are marked for demotion or dismissal. But if they kill novel programs, no one will ever know that a great thing died before it was born, and they are safe. . .Nowadays most of the innovation takes place in small outfits, because it is not crushed there.

I can't say that he doesn't have a point, because I've seen just what he's talking about. But the flip side, which unfortunately isn't as common, is that some large organizations have been able to innovate because they're big enough not to mind a little failure here and there. And large organizations provide more places for people (and projects) to hide for a while, which is occasionally beneficial.

Anyway, if anyone has Firestone's e-mail address, feel free to send him to this recent post, which should make him feel right at home.

Comments (16) + TrackBacks (0) | Category: Who Discovers and Why

April 23, 2006

You Can't Win If You Don't Play

Email This Entry

Posted by Derek

I enjoyed one of the recent comments to the "Why All the Gloom" post, where an IP lawyer mentions what people at the small startups told him: namely, that managers had figured out that by saying "No" they were right all the time, while saying "Yes" had a much lower chance of success.

I know just what he's talking about. You can have an entire career in the drug industry, just sitting around telling people that their ideas aren't going to work. And more than nine times out of ten, you'll be right. Fortunetellers and stockpickers should have such a record! So what's the problem?

Well, the problem is, the whole industry depends on those times when someone's idea actually works. For that to happen, chances have to be taken, risks run. Being in charge of reluctantly-killing-off-once-promising-projects has a lot more job security, but someone has to go and make something happen once in a while.
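The arithmetic behind this is worth spelling out. Here's a quick back-of-the-envelope sketch - the 90% failure rate and the dollar figures are made up for illustration, not industry data:

```python
# Illustrative only: if 90% of projects fail, someone who always says "No"
# is "right" 90% of the time - yet an all-"No" portfolio produces nothing.
# Expected value is what actually pays the bills.

def naysayer_accuracy(failure_rate: float) -> float:
    """Accuracy of always predicting failure: just the base rate."""
    return failure_rate

def expected_portfolio_value(n_projects: int, success_rate: float,
                             payoff_per_success: float,
                             cost_per_project: float) -> float:
    """Expected net value (same units as the payoffs) of funding everything."""
    return n_projects * (success_rate * payoff_per_success - cost_per_project)

# The perennial naysayer: 90% accurate, zero value created.
print(naysayer_accuracy(0.9))  # 0.9

# Fund 100 projects at $10M each; 10% succeed and pay $200M apiece:
print(expected_portfolio_value(100, 0.1, 200, 10))  # ≈ 1000 (in $M)
```

The naysayer's impressive accuracy is nothing but the base rate of failure; the portfolio only makes money if somebody says "Yes" often enough for the rare successes to pay for all the rest.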

One problem is, I think, that some companies kill things off for a long period until the situation gets more and more desperate. Then they try to run with whatever's in the clinic at the time. Some of those projects will, no doubt, be worse bets than some of the things that were killed off through excess caution a few years before. But if you don't let a few of those loose, you eventually get stuck with what you have.

That's what's so nerve-wracking about doing pharma research. It's like playing tournament poker: the blind bets keep going up, so if you don't get out there and play some of your hands, you'll be eaten alive. If you convince yourself that none of your cards are worth anything, you're going to have a short night of it. I say, take Francis Crick's advice: don't believe in your own negative arguments so much. Recognize that every experiment, every program, every drug has plenty of reasons why it shouldn't work. Be aware of them, sure - but be aware that everything successful once had the same questions buzzing around it, too. Something has to work - right?

Comments (16) + TrackBacks (0) | Category: Who Discovers and Why

March 9, 2006

Men and Women and Science and Jobs

Email This Entry

Posted by Derek

That Greenspun piece that set off so much comment around here was ostensibly addressed to the position of women in science, but didn't have much specific to say on the topic. So I thought I'd mention another article, by Peter Lawrence of Cambridge in PLoS Biology, that deals with the subject more directly.

We're heading into the territory that got Larry Summers in so much trouble at Harvard, but here goes. The first part of Lawrence's argument is that it's silly to assume that men and women are interchangeable. As it happens, I agree with him:

Some have a dream that, one fine day, there will be equal numbers of men and women in all jobs, including those in scientific research. But I think this dream is Utopian; it assumes that if all doors were opened and all discrimination ended, the different sexes would be professionally indistinguishable. The dream is sustained by a cult of political correctness that ignores the facts of life - and thrives only because the human mind likes to bury experience as it builds beliefs. Here I will argue, as others have many times before, that men and women are born different.

By this point, some people usually will have already stomped out of the room. But wait - that word "different" has to be peeled away from the words "better" and "worse". Allow me a chem-geek analogy: lithium and sodium, though similar compared to most other elements, are still clearly different from each other. Which one is better, which worse? The question makes no sense, but that's exactly where many arguments about men and women come to a fiery halt.

Lawrence's second point, drawn from the work of Simon Baron-Cohen, is that it's also silly to assume that men and women naturally all fall into their alleged types. Even if there are indeed typical male and typical female ways of approaching the world, these are still only averages that we take from a whole spectrum of behavior. That doesn't mean they aren't real, but we should appreciate that they're on a continuum, and that the two distributions of men and women take up a good amount of space, with room to overlap:

. . .Baron-Cohen presents evidence that males on average are biologically predisposed to systemise, to analyse, and to be more forgetful of others, while females on average are innately designed to empathise, to communicate, and to care for others. Males tend to think narrowly and obsess, while females think broadly, taking into account balancing arguments. Classifying individuals in general terms, he concludes that among men, about 60% have a male brain, 20% have a balanced brain, and 20% have a female brain. Women show the inverse figures, with some 60% having a female brain.
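That point about averages sitting on heavily overlapping distributions can be made concrete. A small sketch - the half-standard-deviation gap here is a number I picked for demonstration, not anything from Baron-Cohen's data:

```python
# Illustrative only: two groups can differ in their means while the bulk of
# both distributions sits in shared territory. For two equal-variance normal
# distributions, the overlap coefficient is 2*Phi(-d/2), where d is the mean
# difference in standard-deviation units.

from math import erf, sqrt

def normal_cdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def overlap_fraction(mu1: float, mu2: float, sigma: float) -> float:
    """Shared area of two normals with the same standard deviation."""
    d = abs(mu1 - mu2) / sigma
    return 2.0 * normal_cdf(-d / 2.0)

# Means half a standard deviation apart still overlap heavily:
print(round(overlap_fraction(0.0, 0.5, 1.0), 2))  # ≈ 0.8
```

Even with a real difference in the averages, the two curves share about 80% of their area - which is why a group average makes a poor prediction about any one individual.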

Lawrence goes on to summarize Baron-Cohen's theory that autism represents an extreme male brain, while noting that a sprinkling of mild autism-spectrum behavior probably does science (and society) some good:

It will not have escaped the notice of many scientists that some of their colleagues and maybe themselves have more than a hint of these "autistic" features. . .Indeed, we might acknowledge that a limited amount of autistic behaviour can be useful to researchers and to society - for example, a lifetime's concentration on a family of beetles with more than 100,000 species may seem weird, but we need several such people in the world for each family. And most of these specialists will be men. . .

It follows that if we search objectively for an obsessive knowledge, for a mastery of abstruse facts, or for mechanical understanding, we will select many more men than women. And if males on average are constitutionally better suited to be this kind of scientist, it seems silly to aim at strict gender parity.

However, in professions that rely on an ability to put oneself in another's place, at which women on average are far superior, we should expect and want a majority of women.

Still, he goes on to say that we would do well to find a place for each type in the other's favorite professions, since the fields are complex enough for some different sorts to be needed. In science, to pick one obvious example, people with better interpersonal skills would make better mentors for students and younger scientists. Too many potentially good careers are ruined by research advisors with no personal skills - well, no helpful ones, anyway. (I've known some who had amazing talents of enraging and antagonizing people).

And on top of all this, there's no evidence that creativity and original thinking, which are in perpetually short supply, have any male-brain female-brain bias at all. But as Lawrence points out, the techniques that we use to fill positions, in both industry and academia, are biased toward male-brain behavior: unshakeable self-confidence, quick recall of all sorts of data, self-promotion in the form of long publication lists, and so on. We would do better, he says, to give less weight to "salesmanship and pushiness".

I think he's got a good point. For example, it occurred to me fairly soon after coming to industry that most of the people who climbed the ladder in a company did so by devoting all their time to climbing the ladder. Anyone with a range of interests and activities, not all of them necessarily relevant to gaining power and position, was at a disadvantage and would generally lose out to the people who had made getting those things their life's work. Of course, in this way you end up promoting some people into supervisory positions whose main skills have nothing to do with being able to usefully supervise anyone, but that's a well-known problem, too.

Turning Lawrence's recommendation into something workable isn't going to be easy, though. Originality and creativity are famously hard to measure, or often even to recognize. Some people seem to have an eye for talent (I'm thinking of some historical examples from the arts world), but if it exists, it's a rare quality. And in many cases, we're going to be asking less creative people to evaluate more creative ones, which has been a traditional recipe for disaster.

Comments (14) + TrackBacks (0) | Category: Who Discovers and Why

February 19, 2006

Because I Never Lie, and I'm Always Right

Email This Entry

Posted by Derek

Something recently made me think back to an undergraduate physics lab that I once had to do. This was elementary optics, so we had the standard collection of lenses on a beaten-up optical bench as we did our Newtonian thing. There would be little reason for me to remember it if it hadn't been for the comment of one of my lab partners.

We were setting up another phase of the experiment, and the instructions said for us to put the lenses in a set configuration and see if we got such-and-such effect. "That can't be right", this guy said, moving them to a different spot that he thought would work better. They didn't, and we ended up doing it the way the lab manual had laid out. But I've returned to that scene several times over the last twenty-five years, trying to figure out what bothered me about his response.

After all, a good researcher shouldn't just take someone else's word for everything, right? And if you have a hypothesis, and can test it, you should go ahead and do it, right? On the face of it, my old partner's attitude towards our lab that day shouldn't have gotten on my nerves, but it did. There was something wrong about it, but I kept trying to work out what it was - in a way that didn't put me on the just-follow-the-lab-book side of the argument, where I didn't want to be.

It finally dawned on me. My problem with the guy wasn't that he didn't trust the lab manual. It was that he trusted himself way too much. It would have been one thing to try what was in the book, then say "I wonder what happens if you move this lens out here?" That would actually be a good sign. But the statement "That can't be right" isn't one, especially not from an undergrad doing an optical demonstration whose results have been known for three hundred years.

Now, of course, I have a lot more room to maneuver as a scientist. Most of the experiments I run are things that no one has ever done before, not on these particular molecules in this particular way. I'm pretty sure I know what's going to happen, but I get surprised a lot. And when it comes to the effect of my compounds on cells and animals, I get surprised all the time.

But it's surprisingly easy to forget how little I know. After sixteen-plus years doing this, I have to watch my tendency to talk to younger colleagues as if I know what's going to happen with their ideas. I don't. I have my experience to draw on, of course, which makes me say things like "Are you sure you want to put a naphthyl in that molecule?" or "Cyclohexyl groups are a metabolism magnet - that's going to get torn up". I'd say that a good solid majority of the time, those two statements are correct. But once in a while they're not, and most of the time I don't have as much evidence to back up my prejudices as I do with those two examples.

So now I know why I've never forgotten the guy who said "That can't be right". I've been trying, all this time, to keep from turning into him. The struggle continues.

Comments (5) + TrackBacks (0) | Category: Who Discovers and Why

January 18, 2006

A Scientific Aptitude Test?

Email This Entry

Posted by Derek

I've just spent some time reading a very enjoyable and interesting paper (PDF, and thanks to Tyler Cowen at Marginal Revolution) from Shane Frederick at MIT's Sloan management school. He has a simple test that seems rather well correlated with a person's appetite for financial risk-taking and their ability to postpone a smaller immediate reward for a larger one in the future.

Frederick's test consists of what are basically trick questions. They're the sort of things that have an immediately obvious intuitive answer, but one which is (unfortunately) wrong. These take a bit of mathematical and logical thinking to work out, but nothing advanced. You do have to be able to not just run with the first answer that comes into your head, though. (Not doing so, of course, requires you to be the sort of person who double-checks things, preferably from a different angle, before committing to them). Cognitive ability and patience have been linked before, both in the popular imagination and in a few studies that Frederick cites.

Update: there's a possibly confounding variable that I forgot to mention: perhaps the better students taking this test had already been exposed to these types of questions before as brain teasers. I know that this was the case with me; I recognized them as classic forms of not-the-obvious-answer questions. This gets back to the question of how much a person's test-taking performance is due to practice in taking tests. . .

He gave this test to a number of different groups, and his table of results is worth the download right there (I'll give you a hint - if you didn't know already, MIT is quite different from the University of Toledo). But as it turns out, the people who score very low and the ones who score very high on this sort of quiz also answer quite a few other questions differently. Frederick was checking their responses to choices such as "Would you rather have $100 now or $140 next year?" and "Would you rather have $100 for sure or a 75% chance at $200?". What he found was that the high-scoring group heavily prefers to wait for the larger payout in the first question, and heavily prefers to take the risk in the second one. (The whole paper details a range of these, of varying levels of risk balanced with immediate or long-term attractiveness).

Another effect that's been noted in past studies is that people are much more willing to take a risk to avoid a loss, rather than taking an equivalent risk when there's a prospect of an equivalent gain. Update: Although this was what I had in my head, I originally had this backwards; thanks to Shane Frederick for pointing this out! His low-scoring cohort is hugely biased toward this mode of thought, Frederick finds, but the effect actually disappears in the high-scoring group. (He also confirms the results of several other studies, including the finding that women tend to be much more risk-averse than men in such situations).

I couldn't help thinking that his high-scoring group is also the group that makes the best scientists. Think about it: not going with the first thing that pops into your head, but always stopping to ask yourself if it's true or not. Checking it in different ways to see if you get the same answer. These are the habits of mind that a good researcher has - and I can tell you from personal experience that some of the least competent chemists and biologists I've known come from the opposite category.

You know the ones - the folks who get an n-of-one number in an assay and go running around telling everyone that they've found something wonderful, only to have to eat the whole thing (again) when it doesn't repeat. The set-up-the-reaction-first (and look in the library later) folks who have to pour even more reactions into the waste can than the rest of us do. Professor Frederick should run his next survey in the science hallways - perhaps we could separate the sheep from the goats.

Comments (6) + TrackBacks (0) | Category: Who Discovers and Why

October 10, 2005

Time and Chance

Email This Entry

Posted by Derek

A recent comment here says:

Scientific progress, ie medical breakthroughs, are just as likely to come through dumb luck or chance as from having the most briliant mind thinking about them. Its about having larger numbers of scientists working, rather than having larger numbers of "smart" people working. In some respects, it might be better to have more people who are not all that careful, ie, more accidents = more progress.

I know what this person is trying to say, but I think that this is only about half right. I'd be the last person to minimize the role of chance in scientific discovery. It's not something that everyone likes to talk about, or sometimes even admit to themselves, but it's true. People get ideas from all sorts of places. If you didn't pick up the journal article that you did one day, or talk to the right colleague, or just look out the window at the right time, you might not have had the ideas come to you that later on looked so inevitable.

But that said, bringing in more people to have accidents is a little like washing your car to make it rain. The problem is, you need accidents and you need people who know what they're looking at when they happen. The kinds of people who slop around the lab the most are, sad to say, often the ones who don't realize when something big is happening right in front of them. What you'd want is to find some Alexander Flemings, people who are meticulously messy. Pasteur was absolutely right about fortune favoring the prepared mind.

Comments (9) + TrackBacks (0) | Category: Who Discovers and Why

October 9, 2005

Needs and Wants

Email This Entry

Posted by Derek

Last week's question about whether the best people are going into this line of work brings up quite a few other related topics. For example, what motivates people to do research in the first place?

I've seen that question answered in a lot of different ways. At one end of the scale, I've had colleagues whose main motivation was Not To Fail. You see more lab assistants with that mentality than lab heads, but it's not unknown at any level. People in this category duck when they see something tricky coming their way, because those things have too high a chance of failure. They'd much rather be on a grind-it-out part of the project, cranking away on a bunch of analogs that everyone already knows have some activity. One step above that, they'd rather be on a project that everyone thinks will be a success.

I live the opposite stereotype. I'd much rather be on a project that people don't think has a good chance of working, because then you get a chance to be a hero. If it fails, hey, that's what people thought it would do anyway. (And as for those can't-miss projects, no thanks. They miss just about as often as everything else, and someone might need to be blamed for it). The best way to motivate my species is to come in and say "You know, nobody thinks that this can be done. Want to prove them wrong?" I'm not (necessarily) making a claim of superiority for this mindset. You don't want a department top-heavy with either type.

Then there are people whose motivations are outside the scientific realm. Most of those folks want to move up the ladder. If being a good scientist is the way to do that, they're willing to give it a try. If laughing at all the boss's jokes works better, that's fine, too. Whatever it takes. One thing about these people - they tend to stay focused. Someone with scientific interests can cause trouble by flitting from topic to topic as their fancy takes them, but a person who wants a promotion more than anything sticks to that task. The trick is to make the needs of the research organization match up reasonably well with what a person like this needs to have to advance. Then, everyone's happy. When those agendas start to diverge, you summon trouble.

Comments (3) + TrackBacks (0) | Category: Who Discovers and Why

September 15, 2005

Pretty Much the Reason You'd Think

Email This Entry

Posted by Derek

I wrote the other day about having a hypothesis in mind when you make new drug analogs (as opposed to just trying a few to see what happens.) A colleague of mine and I were talking about this, and he offered a suggestion about why some people are much more "by the book" than others when it comes to running an analoging program. It's all, he said, about covering your anatomy.

And I think he's right. If you confine yourself to just the sorts of structures that people have had success with before, no one can question your selection. And it's also true that people who insist on a hypothesis behind every new analog also have a strong inclination toward the hypotheses that are already thought to be important (or at least fashionable.) If upper management gets convinced that the smell of your compounds is a key component for clinical success, you can expect these folks to make sure that everything gets made with the sniff test in mind.

So if you're one of these people and your project runs into the ditch - most do, you know - you're still going to be OK. You did everything the way that everyone else thinks that it should be done, and you have concrete reasons to point to if anyone asks you why a particular compound was made. It's an insurance policy.

If you have some weird ones in there, though, or have tried some things that aren't in the usual playbook, the tendency will be to blame those for anything that went wrong. Well, actually, people will blame the person who thought up or OKed these things in the first place - you. I'm not arguing that projects switch over to complete wild-blue-yonder mode, any more than I think a pot of soup needs a pound of pepper in it. But for seasoning, I think that a drug discovery program needs a small (but real) effort to make some things just because no one's made them. These are the kinds of ideas that go on to breed new projects themselves.

I once came across an analogous effect in the financial world. An article I read about the money management industry pointed out that many fund managers tend to cluster together for safety. If everyone's going up and down roughly in tandem, there's not as much room for irate customers who demand to know why you aren't keeping up. If you buy a million dollars of IBM and it goes down, the author pointed out, people will ask "What's wrong with IBM?" But if you buy a million dollars' worth of Zarkotronics and it tanks, people will ask "What's wrong with you?"

Comments (19) + TrackBacks (0) | Category: Who Discovers and Why

September 6, 2005

Crossing Your Fingers, Authoritatively

Email This Entry

Posted by Derek

I recall a project earlier in my career where we'd all been beating on the same molecular series for quite a while. Many regions of the molecule had been explored, and my urge was often to leave the reservation. I put some time into extending the areas we knew about, but I wanted to go off and make something that didn't look like anything that we'd done before.

Which I did sometimes, and then I'd often get asked: "Why did you make that compound?" My answer was simply "Because no one had ever messed with that area before, and I wanted to see what would happen." Reactions to that approach varied. Some folks found that a perfectly reasonable answer, sufficient by itself. Others didn't care for it much. "You have to have a hypothesis in mind," they'd say. "Are you trying to improve the pharmacokinetics? Fix a metabolic problem? Pick up a binding interaction that you think is out there in the XYZ loop of the protein? You can't just. . .make stuff."

I respected the people in that first group a lot more than I did the ones in the second. I thought then, and think now, that you can just go make stuff. In fact, you not only can, but you should. You probably don't want to spend all your time doing that, but if you never do it at all, you're going to miss the best surprises.

I take issue with the idea that there has to be a specific hypothesis behind every compound. That supposes amounts of knowledge that we just don't have. Most of the time, we don't know why our PK is acting weird, and we're not sure about the metabolic fate of the compounds. And we sure don't know their binding mode well enough to sit at our desks and talk about what amino acids in the protein backbone we're reaching out for. (OK, if you've got half a dozen X-ray structures of your ligands bound in the active site of your target, you have a much better idea. But if your next compound breaks new structural ground, off you may well go into a different binding mode, and half your presuppositions will go, too.)

I like to think that I've come to realize just how ignorant I am in matters of drug discovery. (In case you have any doubt, I'm very ignorant indeed.) But I still hear people confidently sizing up new analog ideas at the blackboard: No, that one won't bind well in the Whoozat region. Doesn't have the right spacing. And that one should be able to reach out to that hydrophobic pocket we all know about. Let's make that one first. (These folks are talking without X-ray structures in hand, mind you.)

Well, if it makes you feel better, then go ahead, I suppose. But this kind of thing is one tiny step up from lucky rabbit feet, for which there is still a market.

Comments (4) + TrackBacks (0) | Category: In Silico | Who Discovers and Why

June 1, 2005

As Thin As a Soap Bubble

Email This Entry

Posted by Derek

Words of wisdom from Jane Galt over at Asymmetrical Information:

"The appalling poverty of Sri Lanka or Mozambique is not some bizarre aberration that can be tracked to a cause we can cure. We are the aberration; Sri Lanka and Mozambique are the normal state of human history."

Very sad, and very true. I've often had the same thought with respect to my work as a scientist. This is the only time in all of human history that I could have done some of the things that I've done. The human race has had capable and expanding technology for only a short time compared to the millennia spent hacking our living from the ground and running for our lives. The average snapshot of a person my age, taken any time over the last couple of hundred thousand years, has been of someone nervously gnawing on a bone while the wind howls around their shelter of rocks and branches. Well, that would be the scene only if I'm not already past the average male lifespan for that period, which I may well be.

No, my situation (and yours, too, if you're reading this at all) is a crazy outlier out on the right-hand edge of the curve: a nice climate-controlled roof over my head, a recent meal and no worries about the next one, no fear of wild animals or bands of club-wielding scavengers, no smallpox or polio to carry me off. And instead of grunting out a subsistence living, I get to sit in a well-appointed room and get paid for thinking up new ideas and trying them out with rare and expensive equipment.

Francis Bacon had it right: our trade is "the effecting of all things possible." We should never forget to enjoy it as much as possible, and do everything we can to keep it alive.

Comments (6) + TrackBacks (0) | Category: Who Discovers and Why

May 17, 2005

Very Wrong, or Very Right

Email This Entry

Posted by Derek

It's not completely fair of me to make fun of the old hype about rational drug design, because every moment has its overhyped technology. (Perhaps, as we've speculated around here before, today's candidate is RNA interference. . .) All of it ends up sounding silly in the end.

And the arrogant tone that the proponents of some new systems often take sounds laughable, too, after things don't work out. But that same attitude is probably needed, up to a point. You really have to have some nerve to remake a scientific field. After all, at the very least you're saying to everyone that there's something important that they don't know about yet. And sometimes, the message is a flat "You people have this stuff completely wrong, so step back and let me show you why." It's not a job for the meek.

People with shy and fearful personalities will almost never make a great discovery in the first place, much less publicize it effectively. That kind of thinking will cripple you with all the reasons why things won't work, why someone else (surely smarter and more competent!) would have already tried this, and so on. And even world-beating ideas tend to fail a lot before they finally get going, so the timid or easily discouraged will be convinced that they're wrong before they ever get a chance to be right.

I'm not saying that all the great discoverers are intolerable, although some of them sure are. But even if they're good to the people around them, they're mighty hard on nature and on their experiments, and harder still on the existing order.

Comments (6) + TrackBacks (0) | Category: Who Discovers and Why

April 3, 2005

Don't Talk To Yourself So Much

Email This Entry

Posted by Derek

I've been re-reading Francis Crick's memoir What Mad Pursuit, and this passage struck me:

". . .it is important not to believe too strongly in one's own arguments. This particularly applies to negative arguments, arguments that suggest that a particular approach should certainly not be tried since it is bound to fail. . .While one should certainly try to think which lines are worth pursuing and which are not, it is wise to be very cautious about one's own arguments, especially when the subject is an important one, since then the cost of missing a useful approach is high. . .

Be sensible but don't be too impressed by negative arguments. If at all possible, try it and see what turns up. Theorists almost always dislike this sort of approach."

Right on target. In my field, there is hardly an experiment worth doing that can't be objected to right at the start. Counterexamples abound, theoretical reasons why things won't work out are everywhere. Too sterically hindered, not nucleophilic enough, an interfering functional group somewhere else in the molecule, wrong solvent, wrong catalyst, wrong temperature, wrong everything. If you listen to every one of these objections, even when they're coming from inside your head, you'll never do anything at all. True, you'll never be wrong, but only at the cost of never being right.

This is on my mind tonight, because I'm getting close to a revival of a series of experiments that I've been messing around with for nearly three years now. It's a very interesting idea whose details, painfully, I'm not at liberty to lay out. Not yet. I'm reposting my writings on this work over in the Birth of an Idea category at the right, in case you're interested in seeing what scientific excitement does to a person.

The whole time, I've hardly had the tiniest bit of experimental success, it pains me to say. But I'm back with another variation, and every time, I'm more sure that things are going to work. Perhaps, after two years of being quite wrong, I might make the switch to quite right. . .

Comments (2) + TrackBacks (0) | Category: Birth of an Idea | Who Discovers and Why

March 10, 2005

Progress Through Craziness

Email This Entry

Posted by Derek

This weekend there was an interesting article in the International Herald Tribune by James Kanter and Carter Dougherty, on pharmaceutical research in Europe versus the US. I've written on this topic myself, pointing out that most European companies, when they're expanding at all, are doing so in the US rather than in their own countries.

Having pharmaceutical sales in the US is essential, since we still have the least-regulated pricing of any major market. This, for better or worse, is where you come to make up for the price controls in Europe, and the article points out that the European governments like to talk about being world leaders in innovation while simultaneously clamping down on the rewards for it. But why would you need to do the research here as well? Kanter and Dougherty:

"Although the knowledge created by pharmaceutical research eventually spreads across borders, companies have learned that it pays to start in America. Setting up there gives companies with new products a substantial home market, without having to recruit a multilingual, European sales team, or to navigate the patchwork quilt of national rules on marketing and pricing in Europe. Being close to important U.S. medical professionals and other opinion makers from an early stage also helps smooth clinical trials. The United States produces, by volume, far more new drugs than Europe because U.S. research spending exceeds that in Europe by roughly 50 percent, said (Charles) Beever, of Booz Allen Hamilton."

And if you're starting up a small pharma or biotech company, it's easier to do it over here:

"In Europe, where capital is harder to raise and investors less willing to go out on a limb, the story can be getting financing at all. Investors sank $114 billion into U.S.-based companies that pioneered novel drugs over the past decade. Companies clustered around European biotechnology hubs, like Cambridge, England, and Uppsala, Sweden, garnered barely a quarter of that amount over the same period, although some industry leaders have raised substantial funds. The lack of a single European stock exchange and the persistence of a risk-averse investment culture have played a critical role in America's ability to steal a march on Europe, said Sam Fazeli, biotechnology industry analyst at Nomura International in London.

"Unfortunately in Europe, we are only just coming to terms with the fact that drug failures are part and parcel of the life of a biotech company," he said."

Hey, if we want to get technical about it, failure itself is part and parcel of doing any kind of research. And that brings up another reason I think that R&D tends to thrive more over here: by the standards of many Europeans, Americans are not completely sane. I've worked in Europe and with many colleagues from France, Germany, and Italy, and I really believe this. (Some of them have told me as much after they got to know me well.)

In the US, we tend to give chances to wilder ideas than in many other countries, and there's less stigma attached to their failure. That's a little-appreciated feature of scientific progress: it depends on the willingness to look like an idiot. Keep in mind, most of the paradigm-breaking new schemes that people dream up just don't work. You have to be ready to risk your time, your effort, your money and your reputation to get anything big to happen, and that (for many reasons) is just plain easier to do here.

Comments (11) + TrackBacks (0) | Category: Who Discovers and Why

March 7, 2005

The Next Science

Email This Entry

Posted by Derek

Blogging time is sparse tonight, since I'm (finally) starting off Tax Season here at Rapacious Pharma Manor. But I wanted to point people to a longish post by William Tozier at Notional Slurry, on how he became a scientist - and on what sort of scientist he found himself becoming, and what to do about it. His section, midway through, of snapshots of his graduate school days made me shudder with recognition:

But instead I learned over the next few years that I’m merely a bad molecular biologist, botanical or otherwise. In practice, at least. In theory, I rocked. I’ve been thinking about it a lot lately (for reasons I hope will become clear nearer the end of this vast wordy spume), but for brevity let me portray this period as a series of portentous snapshots:

* In a lab notebook I stumbled across the other day is a photograph of the results from my thirteenth (13th) extraction of plastid DNA from Hosta sieboldiana. I can’t recall right now whether the gel is blank, or a flame-shaped smear of random molecular fragments of size ranging from eensy to weensy. Doesn’t matter. Neither result is good, though both appear with about equal frequency in each of the 13 attempts I made over 8 months. I think on this one we decided “Restriction enzyme buffers too old; need to make new ones. Use Frank’s as control?” . . .

Comments (2) + TrackBacks (0) | Category: Who Discovers and Why

January 11, 2005

Right In Front of You

Email This Entry

Posted by Derek

Regular reader Qetzal pointed out in a comment to the "More Fun With DNA" post that a lot of neat discoveries seem - after you've heard about them - to be something that you could have thought up yourself. I know what he means. I've had that same "Yeah. . .that would work, wouldn't it. . ." feeling several times.

There's an even higher degree of the same thing, thinking that surely that new discovery has already been done. Hasn't it? Didn't I read that somewhere a year or so ago? I'm trying to remember the British literary/political figure who said it, but the quote was that the most important thing he had learned at Cambridge was not to be afraid of the obvious. I think that a lot of us are, and it's not to our benefit.

So there's a useful New Year's resolution, if anyone has room for a spare one. Shut that voice up once in a while, the one that shows up in your head when you have a wild idea, the one that says that if this were really as good as it sounds, someone would already have done it. A lot of really great stuff hasn't been done, and if too many people listen to the lesser side of their natures, it won't be.

Comments (4) + TrackBacks (0) | Category: Who Discovers and Why

August 2, 2004

Research, The Right Way

Email This Entry

Posted by Derek

For today, instead of reading something over here, I'd like to send everyone over to Australian physicist Michael Nielsen. He's been writing a manifesto about how to do research, and here's the finished product. (Thanks to Chad Orzel for the link.)

I find his perspective to be very accurate indeed. Readers may recognize some themes that I've sounded over here from time to time. I'll add my own comments in a future post or two.

Comments (1) + TrackBacks (0) | Category: Who Discovers and Why

March 21, 2004

The Root of All Results?

Email This Entry

Posted by Derek

Mentioning well-heeled research establishments that don't produce results brings up an interesting question: is there a negative correlation between funding and productivity?

You might think so, given the example cited in the previous post, and given the cases cited in Robert S. Root-Bernstein's Discovering. There have been many great scientific feats performed with what seemed like substandard equipment for the time. But does that imply causality, or does it mean that a first-rate scientist is capable of great work even under poor conditions? (A special case, perhaps, is Alexander Fleming. One time in his later years, he was being given a tour of a more up-to-date research site, and someone exclaimed "Just think of what you might have discovered here!" Fleming looked around at the gleaming work surfaces and said "Well, not penicillin, anyway.")

I'm not arguing for poverty. I think that a certain minimum level of funding is necessary for good science - below that and you spend too much time in grunt work, the equivalent of digging ditches with kitchen spoons and mowing the lawn with scissors. But once past that, I don't think the correlation of budget and results is all that good. There's perhaps a broad trend, but nothing you'd want to stake your career on.

That said, note that there are many ways to spend huge amounts of research money. You can lavish all sorts of new facilities and state-of-the-art equipment on people, or you can spend equal amounts by running a larger effort and trying to run many more projects at the same time. The people in the first case will live in a rich environment, while those in the second can feel rather deprived. Overall budgets aren't necessarily a good indicator.

I'd argue that you want people to feel reasonably comfortable, but not luxurious. If you have to scramble a bit for resources, you end up being more, well, resourceful. I'm not talking about redistilling your wash acetone (that comes under the spoon and scissors heading.) But if you have an idea that would require, say, a completely new hundred-thousand-dollar piece of equipment, and that equipment is hard to come by, you might be able to think your way around the need for it. On the other hand, if you just have to wave your hand and the stuff appears, you might get in the habit of not thinking things through.

Comments (0) | Category: Who Discovers and Why

January 28, 2004

The Best Bad News He Ever Had

Email This Entry

Posted by Derek

The January 22 issue of Nature has a fine essay by Freeman Dyson (a hero of mine, I should add) about a fateful meeting he had with Enrico Fermi back in 1953. This was back when Dyson was a professor at Cornell, studying both electromagnetism and the strong nuclear force.

There was a fine theoretical framework for the electromagnetic force (quantum electrodynamics, and a fine one it remains to this day.) But the strong force was giving people fits. Fermi was leading a team that did the first accurate measurements of the scattering of mesons by protons, the best data available on what the strong force was like. And after showing that QED did an excellent job on the electromagnetic force, Dyson had put his theoretical group to work in this trickier area.

After what he describes as "heroic efforts" (recall that these were the days long before any meaningful computing capacity), Dyson's team had a set of graphs of what the meson-proton interactions should look like, and they weren't too far off of Fermi's experimental data. So Dyson excitedly set up a meeting with Fermi in Chicago, showed him the graphs, and I'll let him take the story from there:

. . .he delivered his verdict in a quiet, even voice: "There are two ways of doing calculations in theoretical physics", he said. "One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither."

. . ."To reach your calculated results, you had to introduce arbitrary cut-off procedures that are not based either on solid physics or on solid mathematics." In desperation, I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. "How many arbitrary parameters did you use for your calculations?" I thought for a moment about our cut-off procedures and said "Four." He said "I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk." With that, the conversation was over.

Dyson points out that, in hindsight, Fermi was absolutely correct. The theory they were trying to use could not possibly have done the job, not least because no one had a good idea of what protons were like (Gell-Mann hadn't come up with the concept of quarks.) Fermi, of course, was dead before quarks had ever been postulated, but he could tell that the existing framework was inadequate. And he saved Dyson years of what would have almost certainly been wasted time.
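Von Neumann's quip has a simple arithmetic core: a model with n adjustable parameters can be forced through n data points exactly, whether or not it reflects any real physics. Here's a minimal Python sketch (my own illustration, not anything from Dyson's essay) that fits an exact cubic - four free coefficients - through four arbitrary "measurements":

```python
# Von Neumann's point in miniature: a cubic has four free coefficients,
# so it passes exactly through any four points with distinct x values,
# regardless of whether the data mean anything at all.

def fit_polynomial(points):
    """Find the interpolating polynomial through the given points.

    points: list of (x, y) pairs with distinct x values.
    Returns coefficients [c0, c1, ...] for c0 + c1*x + c2*x^2 + ...
    """
    n = len(points)
    # Augmented Vandermonde matrix: one row [1, x, x^2, ..., y] per point.
    rows = [[x**k for k in range(n)] + [y] for x, y in points]
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(rows[r][col]))
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(n):
            if r != col:
                factor = rows[r][col] / rows[col][col]
                rows[r] = [a - factor * b for a, b in zip(rows[r], rows[col])]
    return [rows[i][n] / rows[i][i] for i in range(n)]

def evaluate(coeffs, x):
    return sum(c * x**k for k, c in enumerate(coeffs))

# Four arbitrary "measurements" (these happen to lie on y = x^2 + 1):
data = [(0.0, 1.0), (1.0, 2.0), (2.0, 5.0), (3.0, 10.0)]
coeffs = fit_polynomial(data)
for x, y in data:
    assert abs(evaluate(coeffs, x) - y) < 1e-9  # the fit is exact
```

Swap in any four points with distinct x values and the "agreement" is still perfect, which is exactly why a four-parameter fit impressed Fermi so little.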

This is a perfect example of one of Weinberg's "Golden Lessons" that I spoke about on January 20th (below.) If you're working on a problem that no one (yet) has the power to solve, you can spend all your creativity in vain. Think of what would have happened if, say, Isaac Newton had stumbled across radioactivity. What could he have made of it? What are the odds that he would have been even close to correct? (And keep in mind, I'm saying these things about one of the greatest natural talents that science has ever known - Newton was downright terrifying.) A mark of a really great scientist, which Fermi certainly was, is to have a better eye for what problems are both significant and soluble. That's a small territory to work in sometimes.

In drug research, we work against a backdrop of doubts like this. Extraordinary new things are learned about living systems every year, and every time, I find myself pitying all the people who were working on the same problem years ago. They may have suspected, but couldn't have known, what was really happening. Years from now, other scientists will pity us in turn. All the more reason to celebrate when we actually get something to work!

Comments (0) + TrackBacks (0) | Category: Who Discovers and Why

January 21, 2004

. . .Your Huddled Pharmas Yearning to Breathe Free

Email This Entry

Posted by Derek

Genetic Engineering News reprints parts of a speech given by Rolf Krebs, chairman of the German drug firm Boehringer Ingelheim, at a recent conference in Hannover. Dr. Krebs was speaking on the differences in pharma research between Europe and America, and he didn't leave much bottled up:

"The framework conditions for the pharmaceutical industry in Europe and the U.S. could not be more different. . .Germany serves as a good example of these changes. Last year, the "pharmacy of the world," as Germany used to be known, reported its first-ever negative export-import ratio. In the meantime, the government has realized that of the seven pharmaceutical firms previously doing R&D work in Germany, only four remain. . .it is alarming that only ten of the world's 185 industry research centers are now located in Germany. This is symptomatic of the situation throughout Europe, where 20 research centers have been closed down over the past six years."

All true. It's been clear for some time now that pharma and biotech companies are not expanding in Europe - the American ones aren't going there, and the European ones are coming here. After going over the well-known problems the industry has been having with fewer good drug targets and more complicated drug development, Krebs says, correctly, that this situation raises the risks for every existing project. The higher financial stakes make late-stage project decisions very important, and those decisions are based (naturally) on expected economic return. And where do you go for that return?

"It is becoming increasingly evident that the success of the major pharmaceutical companies depends on the extent of their U.S. presence. . .There are many reasons why the U.S. market has acquired greater importance compared with the European market. Although growth has slowed more slightly over the past two years, the U.S. market is still expanding much more dynamically than the European market."

And what brings on such differences? The answer "money" is always an appropriate first guess in these situations, but there are several factors at work:

"In contrast to the high degree of freedom that exists in the U.S., all the individual European markets are subject to state regimentation. This applies both to pricing, as well as to the recognition and reimbursement of pharmaceutical products by the statutory health insurance funds. This lack of regulatory intervention in the U.S. has had a number of positive effects for the pharmaceutical industry. Prices are based on therapeutic quality; products are subject to a cost-benefit analysis all along the line (including patients.)"

This is a bit idealized, but compared to the European situation, he has a point. Krebs goes on to contrast the fundamentally different attitudes that have led to this situation:

"In comparison with Americans, we Europeans are less innovative and, above all, less inclined to take risks. We take comfort in the argument that it takes longer to create the conditions for new technologies due to our democratic processes. But democracy as such is not the root cause of these regrettable delays, as evidenced in the U.S. and the U.K., both of which are democratic states. The unwillingness to innovate and take risks results directly in an overabundance of rules and regulations. We want to lay down the final result without gaining the necessary experience first."

As he goes on to say, it's harder to get venture capital in Europe, and if you do manage to get a new idea off the ground, you run into a mass of inconsistent regulation. Hack your way through that, and your reward is compulsory government pricing. "The message is clear," he concludes, "it is an advantage not to invest in innovation."

Having observed both the American and European pharma industries at close range, I think there's a lot of truth in what Krebs says. It's not that the European companies haven't done good work, just that they could have done still better in a different environment. Being intelligent and capable, they've come to understand their problem. And finding themselves with nowhere to go in their home countries, the French, German, and Swiss firms make their cutbacks at home - but not here. Or if they're in better shape they break ground on their new research centers - not in Basel, but in Cambridge. How much of a company's R&D has to take place outside Europe before they're not a European company any more?

Comments (0) + TrackBacks (0) | Category: Business and Markets | Who Discovers and Why

January 20, 2004

Weinberg's "Golden Lessons"

Email This Entry

Posted by Derek

Nobel laureate Steven Weinberg had a piece back in the Nov. 27 Nature (p. 389) offering advice to people just starting their scientific careers. It's useful stuff, and the lessons aren't just for beginners, either.

His first of "Four Golden Lessons" is No one knows everything, and you don't have to. (This came from his early paralysis at not knowing the whole field of physics.) That one gets more true every year, as the pile of scientific knowledge increases. I'm a reasonably good organic chemist, but there are big swaths of the literature that I'm not familiar with. Unusual steroid reactions? They're legion, but I don't know 'em. Sesquiterpenoid biosynthesis? (Half my readership just said "Gesundheit!") They're wild-looking compounds (PDF), but I've never needed to know much about how they're made. Mechanistic organometallic complex chemistry? Go ask Greg Hlatky (and while you're there, check out his piece on the compounds these things can make by reacting with stopcock grease).

No, I don't know these things too well, and (so far) I haven't had to. If I need to, I'll learn them. That's the best way to deal with the size of a scientific field, I think: get the fundamentals down, and that will give you the tools to learn what you need to know. Then let your own curiosity and your circumstances take you from there.

Weinberg's second lesson is aim for rough water. Try to work in a field where things are contentious and unsettled - "go for the messes." There's still room for creativity there, as opposed to the more worked-out fields. Of course, that presupposes that the reader is interested in doing creative work, but advice like this is wasted on anyone who isn't. Not that there aren't plenty of such people around - any research department is full of them. They can make contributions, as long as both they and their supervisors know the score. Trouble ensues when they don't.

His third lesson is forgive yourself for wasting time. He classifies that as "probably the hardest to take" of his lessons. This is a consequence of the go-for-the-messes advice, and will be most applicable to those that have followed it. What he means is that it can be very hard to know if you're working on something that's even solvable, or if you're working on the right problem at all.

That certainly applies to my area of research, where there are long stretches where it seems like nothing's happening, and perhaps never will. Even when things are moving along, you're never sure if that light off in the distance is reflecting off a pile of gold, or is a bare bulb put there to scare the rats off a garbage pit. So, we're searching for an agonist of receptor XYZ - who knows if such a thing exists? Or if we're going to stumble across it? Or if it'll do what we think it will, assuming we know how to test it in the correct way and assuming that we can understand the results? Working like this, there's going to be a lot of wasted effort, and you're just going to have to come to terms with it.

Weinberg's final lesson is learn something about the history of science. The least important reason to do that, he says, is that it might help out your research. To use his example, without knowing the historical record, you might come to believe that Thomas Kuhn or Karl Popper really understood how science works. But the larger reason is that an appreciation for history puts your work in perspective. Weinberg believes, as I do, that science is one of the highest activities of civilization, and that we should be proud of our parts in it. A real discovery can live longer, and with greater impact, than almost any other human work.

Comments (0) + TrackBacks (0) | Category: Who Discovers and Why

January 12, 2003

Easy Parts and Hard Parts

Posted by Derek

I've been reading George Dyson's interesting history of Project Orion, the late-1950s attempt to design a spacecraft powered by sequential nuclear explosions. (A borderline crazy idea, it very likely would have worked. The big question became whether it should be allowed to work at all.)

He quotes his father, Freeman Dyson, about the early days of the project:

"Everybody did a little of everything. There was no division of the staff into physicists and engineers. The ethos of engineering is very different from that of physics. A good physicist is a man with original ideas. A good engineer is a man who makes a design that works with as few original ideas as possible."

There's a lot of truth to that. So in which category is work in medicinal chemistry? The answer isn't immediately obvious, especially for people just starting out in the business. In graduate school, the emphasis is (rightly) on the pure science: as many original ideas as possible (as long as you can get them to work, one way or another.) So when freshly coined PhDs or post-docs join a drug company, they're sometimes under the illusion that unusual new chemistry is what's called for at every opportunity.

And nothing could be further from the truth. From an organic chemistry standpoint, medicinal chemistry can be downright boring. The sooner that new researchers figure that out, the better off they are. You can do perfectly respectable medicinal chemistry using nothing but reactions and ideas from an undergraduate textbook. (As I've pointed out, those reactions got to be classics because they tend to work, which is just what you need.)

The point of medicinal chemistry isn't chemistry; that's just the means to the end. We do just as much cutting-edge chemistry as we have to, and no more. That stuff takes a lot of time to figure out - and we have plenty of other problems that are waiting to take plenty of our time. The chemistry had better just quietly work for the most part, if you're going to have a chance.

The original ideas come when it's time to decide what molecules to make, and when it's time to figure out why you're getting the biological effects from them that you are. In those areas, we'll take all the original thinking that anyone can provide. Any weird brainstorms about how to make a compound more potent or more selective are welcome. And if making those new molecules calls for nothing more than ancient reactions, yawners that bore the pants off everyone who does them, then so much the better: that means that the molecules will be made quickly and in a good quantity. (One of the worst binds you can be caught in is to have a wonderful lead structure that you can't find a way to make enough of.)

So, when it comes to chemistry, we're engineers. When it comes to medicines, though, we'd better be the next best thing to poets.

Comments (0) + TrackBacks (0) | Category: Who Discovers and Why

December 16, 2002


Posted by Derek

Now that I've had a chance to look over the Wall Street Journal's article on Bristol-Myers Squibb, it occurs to me that I've seen this behavior many times. I don't mean the financial voodoo (although I've seen that at second hand, just like anyone else who pays attention to the markets.) What I mean is the attitude that leads to it.

BMS got into this trouble because they promised 12% sales growth, right into the headwind of patent expirations on things like Glucophage and Taxol. The failure of Vanlev took them by surprise, just like it took everyone else, but it's not like they didn't know that those patent expirations were coming. Promising that kind of growth was arrogant and completely unrealistic, and that's the attitude I'm talking about.

Now, there are plenty of arrogant scientists in this world. But we're all supposed to look at the data and be willing to listen to what it says, even if we don't like it. You really can't make it in science unless you're willing to do that, and it keeps you from getting as full of yourself as you might otherwise get. Some of your ideas are just not going to work, because the universe isn't set up to let them work.

But there's a worldview common to athletic coaches, motivational speakers, and CEOs, and it says that failure is not an option. If you do fail, then you obviously didn't have the killer instinct, the grit, the tenacity, the fire in the belly. You just didn't want it enough. Sound familiar? This outlook can work when you're dealing with things that can be browbeaten (like other people.) It might work on scientists, but a lot of good that'll do, because it won't work on their science.

Now, it's true that your researchers need to be motivated, and need to keep pushing to accomplish things. That's why, in spite of all its inefficient craziness, I think that having dozens of drug companies fighting it out is a good thing, because it keeps us all on our toes. And you need tenacity to do good research, because most good ideas only get around to working after about the eighteenth try. These things are necessary - my point is that they're not sufficient.

And that's where this hard-charging attitude breaks down. It's all very well to yell at the sales force - it might even stir them up, until they all go find other jobs, since they're people dealing with other people. But are you really going to do that with the researchers and patent lawyers, with the clinical teams and the bioinformatics people? I've sat through meetings where people tried, and if eye-rolling made a noise, you wouldn't have been able to hear yourself think.

No amount of table-pounding will make a clinical trial turn out the way you want it to. No talk is tough enough to convince a protein assay to tell you that your compounds are active. You can rant at the rats all you want, but they'll continue to do whatever pleases them, and it's up to you to figure out what it all means. The physical universe cannot be sweet-talked, conned, or intimidated. It is what it is, and it does what it does.

Comments (0) + TrackBacks (0) | Category: Who Discovers and Why

December 2, 2002

Europe, Again

Posted by Derek

Stephen den Beste has a good article about European innovation in science and technology. Well, actually, it's about the lack of it, as a symptom of the increasing differences between the US and Western Europe in general.

Along the way, he mentions the bright spots in what he calls a "high-tech disaster area," among them the Swiss pharmaceutical industry. That's on target, although Roche, for one, hasn't had any big news in a while. (They have had the nerve to commercialize T-20 as an HIV therapy, which may end up being a little too innovative - see "Better Them Than Me" on August 8.) And I've had a quote posted over my desk from Daniel Vasella, chairman of Novartis, for several years now. In an interview, he said "If you don't want to spend the big money and take the big risks, you shouldn't be in the pharmaceutical business," which is unimprovably correct.

But note that Roche's drug is being produced in Colorado, not Basel. And note that Novartis is expanding their drug discovery research in a huge new facility, but not in Basel. It's in the former Necco wafer factory in Cambridge, MA.

Why are they doing that? Unfortunately for Europe, the reason is of a piece with the rest of den Beste's article, and the one he refers to. It's unavoidable: America is home to most of the innovative drug discovery research in the world today. Foreign companies, almost without exception, really can't be considered major forces without a US research presence. For example, the only European country that can be considered a pharmaceutical rival to Switzerland is Britain, and its companies - GlaxoSmithKline and the half-Swedish AstraZeneca - have huge US research operations. Germany and France aren't quite in the same league, but their biggest companies (Bayer, Aventis) do plenty of work here, too.

People who want to do this kind of research at the highest level have a good chance of either ending up here, or seriously considering it. It's not that Europe doesn't have plenty of smart and capable people (a point den Beste also makes.) It's just that there are plenty of Europe's top people over here, compared to how many of America's best are over there.

Why does research seem to thrive more in the US? You can talk about the money that's spent here, but some of that money has been put into research because of its historical payoff, leaving you with the same question to answer. I think that there are common American attitudes which turn out to be crucial for successful scientific research: a tolerance for risk-taking, a willingness to try out ideas that might sound unworkable, and a persistence in trying to find solutions, one way or another. And there's another important attitude that isn't often given its due. Andrew Sullivan refers to it in his Thanksgiving essay when he relates a story that British journalist Henry Fairlie used to tell:

He was walking down a suburban street one afternoon in a suit and tie, passing familiar rows of detached middle-American dwellings and lush, green Washington lawns. In the distance a small boy - aged perhaps six or seven - was riding his bicycle towards him.

And in a few minutes, as their paths crossed on the pavement, the small boy looked up at Henry and said, with no hesitation or particular affectation: "Hi." As Henry told it, he was so taken aback by this unexpected outburst of familiarity that he found it hard to say anything particularly coherent in return. And by the time he did, the boy was already trundling past him into the distance.

In that exchange, Henry used to reminisce, so much of America was summed up. That distinctive form of American manners, for one thing: a strong blend of careful politeness and easy informality. But beneath that, something far more impressive. It never occurred to that little American boy that he should be silent, or know his place, or defer to his elder. . .

That's it, right there: we don't know our place, and it's a good thing. Fairlie was right to pick up on this, and to celebrate it. It's important in manners, in politics, and in science as well. No groundbreaking work was ever done by anyone who knew their place in the world and was completely content with it. You have to feel that there's something missing from your knowledge, something that needs to be figured out. And no major scientific advances have come from people who deferred at all times to their elders, either. Such advances necessarily involve things that said elders never thought of (at best), or things that show up their omissions and mistakes (at worst).

I've talked about this with colleagues from France, Germany, Italy and other countries. Even among people who disagree with me on many other social and political points, the American primacy in science has been unquestioned, as has its connection with our culture. We're an odd bunch, and it's to our benefit.

I did my post-doctoral work in Germany, myself, so I can end this with a little personal history. At one point there, I was doing some photochemistry. My fellow chemists know that there are quite a few degrees of severity you can run those reactions at. Running them in (expensive) quartz glassware next to your ultraviolet lamp is one extreme, since quartz lets it all through and spares not. And from there you go down through various filters, progressively cutting out the hard short wavelengths until you get to the mildest light that'll still do what you want.

I tried my chemistry first in plain quartz, and cooked my poor reaction to a rich brown in no time. I needed an intermediate-cutoff filter, but there were none to be had (and we weren't about to spend the money on one, either, a situation common to academic labs the world over.) I found some old literature, though, that suggested some silver salt solutions that would do the job - not great, but a lot better than nothing. Silver salts, we had. The best way to use them was to have them in the chilled water that circulates in the jacket past the blazing hot UV lamp.

But we had no pump for the job. And when I suggested one to our grad student in charge of ordering supplies, he looked grave and said, yes, perhaps we could do that, but of course it would take several weeks even if we could spend the money, yes. . .I was already out the door, heading to my car, and heading to the shopping district in the center of town. I found a pump just where I thought I might. So I paid for it out of my own pocket, drove right back to the lab, hooked things up and within the hour was merrily photolyzing away.

The system was purring along when the supply guy came by to see what was up. I told him that I had a pump now, no need to order anything, thanks and all that. . .when I noticed him looking at my reaction setup with a puzzled expression. "Where did you find this pump?" he asked. I just pointed to the light shield I'd rigged up, a piece of cardboard decorated with drawings of bright tropical fish. "Why, from the pet store," I told him, "where else?"

Comments (0) + TrackBacks (0) | Category: Who Discovers and Why

November 21, 2002

The Future and Its Friends

Posted by Derek

There's now a nice review of Timothy Ferris's new book Seeing in the Dark by Freeman Dyson, who's a scientific hero of mine. (That's a Salon link, so get it while you can. . .) The theme of the book is how amateur astronomers are more and more able to make contributions at the forefront of the science. (They've always been doing so, of course, but it's gotten much more possible in the last ten or twenty years. The areas that amateurs can have an impact in have expanded, as well.)

Dyson goes on to talk about the division between fact-gathering and theorizing, and how the balance between the two changes as a science matures:

"It appears that each science goes through three phases of development. The first phase is Baconian, with scientists exploring the world to find out what is there. In this phase, amateurs and butterfly collectors are in the ascendant. The second phase is Cartesian, with scientists making precise measurements and building quantitative theories. In this phase, professionals and specialists are in the ascendant. The third phase is a mixture of Baconian and Cartesian, with amateurs and professionals alike empowered by the plethora of new technical tools arising from the second phase. In the third phase, cheap and powerful tools give scientists of all kinds freedom to explore and explain."

This sounds about right to me. Dyson also makes a point about how both Eastern and Western science stalled out for many centuries through an imbalance between these two approaches: in the West, theorizing held sway and grubbing for facts was seen as irrelevant (think of the hold of the old Greek texts and of religion.) In the East, the Chinese and the Islamic world accumulated a good deal of interesting data, and happened on some incidental technology along the way, but didn't spend much time trying to develop theories that could have extended the research.

(Incidentally, I've seen this imbalance at work in my own field. One research project I worked on was run under conditions where you had to have a rationale for almost any new compound series you tried. I spent most of my time, like everyone else, exploring around things that we knew worked well, but I always reserved time for trying out things just because no one had tried them before. Unfortunately, messing with some part of the molecule just because we had no idea of what would happen wasn't seen as a good enough reason - you had to have some theoretical underpinning. Arrogant foolishness, considering what the theoretical state of medicinal chemistry is like.)

So where do the various sciences stand in their development? Dyson again:

"Astronomy, the oldest science, was the first to pass through the first and second phases and emerge into the third. Which science will be next? Which other science is now ripe for a revolution giving opportunities for the next generation of amateurs to make important discoveries? Physics and chemistry are still in the second phase. It is difficult to imagine an amateur physicist or chemist at the present time making a major contribution to science. Before physics or chemistry can enter the third phase, these sciences must be transformed by radically new discoveries and new tools."

He's got that right. At the moment, you really need some serious equipment to go after most of the unusual stuff in either field, but I have to say that he might be on to something. Chemical instrumentation is becoming smaller and more self-sufficient all the time, and if the trend continues, it's possible to imagine a wealthy amateur having high-field NMR and HPLC-mass spec capabilities in the basement. Zoning laws permitting, of course. (Actually, that sounds like a lot of fun, but maybe it's just the "wealthy" part I'm thinking of.)

Dyson's own bet for the next science to shift is biology, and I think the point is inarguable. It's nearly a cliché in the field to be amazed at how far it's come: for years now, high school students have been doing experiments that would have frizzed the hair of the 1975 Asilomar participants. You can do PCR in your kitchen, if you're so minded. The molecular biology supply companies have been steadily making everything more out-of-the-box, selling kits and systems designed both to make the lab worker's job easier, and to make the companies more money. (There's Adam Smith's invisible hand for you. . ."It is not from the benevolence of the vendors of DNA primers that we expect the success of our hybridizations, but from their regard to their own interest.")

Dyson pictures legions of homegrown DNA tinkerers, a vision I find simultaneously thrilling and alarming. That's the authentic feeling of the future, though - it's hard for me to trust any substantial prediction that doesn't bring on those emotions. He's probably right, and we'd better keep on learning how to deal with it.

Comments (0) + TrackBacks (0) | Category: Who Discovers and Why

May 6, 2002

So What's A Worthwhile Problem, Anyway?

Posted by Derek

My last post naturally leads to that question. I can only speak for my own specialties, organic and medicinal chemistry. An example of really worthwhile problems in the former would be (to pick a few at random): how to form quaternary carbon chiral centers, how to get metal-catalyzed couplings to work more generally and reproducibly, a new inexpensive method to make unnatural amino acids, or a way to turn the nitroaldol reaction into something generally useful.

Examples of worthwhile problems in the latter field would be: how to make good phosphatase inhibitors, how to predict better what sorts of compounds will be absorbed out of the gut into the bloodstream, how to make new things that can substitute for a peptide bond, or how to approach compounds that interfere with protein-DNA interactions.

It's not like no one's worked on these; there are ideas and partial solutions to most of them. But a real advance in any of these areas would be welcomed by plenty of people, and recognized as a significant achievement.

Making lists like that is easy. What about things that have no obvious use? I'm still in favor of those, because the history of science has shown over and over that you can never tell what oddities may turn out to be useful. There's a lot of curiosity-driven research that gets done on projects like these.

So what isn't worth doing? Doing something that's already been done, for one - because everyone's doing it, or because you can't think of anything else. Doing things that (even if they worked) have already been superseded by techniques available when you started.

And examples of those? Here's where I bring in the fan mail! Things in organic synthetic chemistry that I wouldn't consider worth the effort might include: total syntheses of large natural products that add no new methods to the literature, adding yet another Lewis acid to the long list of the Lewis acids that can be used to, say, form acetals from aldehydes, or similarly coming up with yet another way to dehydrate an aldoxime to a nitrile. But you can pick up chemical journals from the last year and find all these things, and likely worse.

I'll forbear, for now, listing things that I don't think are worth doing in medicinal chemistry, for fear that I'll go to work tomorrow and find that someone wants to do one of them. My point is that many of these dud problems could nonetheless occupy your time, have their high and low points, their challenges and solutions, just like a real research project. If you didn't know better, you'd think you were doing something useful. You could spend nights and weekends on some of these things, and to the untrained eye you'd be getting an awful lot of work done. But to no point.

Of course, one reason I can have this attitude is that I've spent time on such things myself. It's only with time that I've come to see that science is so intrinsically tricky and interesting that almost anything can fill your hours and engage your mind. But isn't it better to find yourself getting interested in something that, someday, someone else might find interesting, too?

Comments (0) + TrackBacks (0) | Category: Who Discovers and Why

May 2, 2002

Anything Worth Doing. . .

Posted by Derek

There are several types of questions in science. You could plot them on a graph, with axes labeled "Important / Trivial," "Hard to Answer / Easy to Answer," to pick two useful distinctions. Note that those don't always correlate as well as you'd think. There have been profound scientific questions that turned out to be surprisingly easy to put to the test, once someone figured out the conceptual framework.

And, more controversially, there are problems that (as far as you can tell) aren't worth the effort it would take to work on them. That's what Peter Medawar meant when he advised working on hard problems, not necessarily just interesting ones. Almost any problem can be interesting, including a lot of trivial time-wasters. It's sorting those out that can be troublesome.

For one thing, sometimes things that look trivial turn out to be important. And science lives on incremental results, and by making connections between things that no one thought were related. But in many cases, you can restate the problem to show why something is worthwhile. Take Fleming and penicillin:

"This stuff landed on my petri dish and killed my bacteria - I'm going to find out what it is."
"Who cares? It's spoiled. Clean it out and get on with your work."
"But something killed off these bacteria, and it looks like this mold may have secreted it. Wouldn't that be useful, to have something that can kill bacteria?"

There's a lot more to the traditional tale of this discovery - we'll come back to that. But it illustrates the point that problems can often be presented in a way that shows why they're worth working on. Of course, you can take work that isn't worth doing and try to present it this way, too (look at some grant applications!), but you can usually spot the seams and stitches that had to be added.

And I can't deny that there have been important results that have been ignored when they first came out. But in most of those cases, they've been ignored because they weren't believed, and they weren't believed because their (potential) importance wasn't in doubt.

I'm not suggesting that researchers shouldn't follow their own curiosity, or that we should have some sort of central review to tell us what's important and what isn't. You couldn't pay me to advocate either of those positions; they're disastrous. But what I'm suggesting is that researchers should sharpen their own instincts, and put their curiosity to the best possible use. More on this as the week goes on.

Comments (0) + TrackBacks (0) | Category: Who Discovers and Why

February 20, 2002

Giordano Bruno

Posted by Derek

I missed a chance yesterday to note an anniversary. Giordano Bruno was something of a crank, not normally the sort of person I'd be commemorating. But in his time, it didn't take very much to be considered a crank, or worse, and we have to make allowances.

He was headstrong. We can see now that he was sometimes eerily right, other times totally wrong. Either way, many of these strongly held positions were sure sources of trouble for anyone who advocated them. All living things were made up of matter, and that matter was the same across the universe - that one was not going to go over well in the late 16th century.

There was more. The stars, he said, were nothing more than other suns, and our sun was nothing more than a nearby star. He saw no reason why these other suns should not have planets around them, and no reason why those planets should not have life: "Innumerable suns exist; innumerable earths revolve around these suns in a manner similar to the way the seven planets revolve around our sun. Living beings inhabit these worlds."

He went on at length. And as I said, much of it was, by scientific standards, mystical rot. His personality was no help whatsoever in getting his points across. He appears to have eventually gotten on the nerves of everyone he dealt with. But no one deserves to pay what he did for it all.

Bruno was excommunicated and hauled off in chains. He spent the next several years in prison, and was given chances to recant up until the very end. He refused. On February 19th, 1600, he was led into the Campo dei Fiori plaza in Rome, tied to a post, and burned to death in front of a crowd.

Mystic, fool, pain in the neck. I went out tonight to see Saturn disappear behind the dark edge of the moon, putting the telescope out on the driveway and calling my wife out to see. Then I came inside, sat down at my computer, wrote exactly what I thought, and put it out for anyone who wanted to read it around the world. While I did all that, I remembered that things haven't always been this way, haven't been this way for long at all, actually. And resolved to remember to enjoy it all as much as I can, and to remember those who never got to see it.

Comments (0) + TrackBacks (0) | Category: Who Discovers and Why