Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: email@example.com
One of the authors of a paper I commented on has shown up in the comments section to that post, and I wanted to highlight his reply out here on the front page of the blog. Here's J. R. Brender, from the Michigan side of the authorship:
Hi. I appreciate the comments given about the paper. As one of the authors of the paper (with Ramamoorthy on the NMR part), I would like to clear up a few things as time permits.
@Derek "An uncharitable view would be that they have also taken aim at the year 1995, which is about when all three of these ideas were also being worked on for AD."
All three are still being worked on and are in (mostly mixed or unsuccessful) clinical trials. Vitamin E in particular went through a Phase III clinical trial for mild to moderate Alzheimer's with mixed results: http://www.alzforum.org/news/research-news/trial-suggests-vitamin-e-protects-function-mild-alzheimers
To be fair, none of the other hypotheses have much support either.
@19 from Bob "The paper only uses the word drug once, in the context of including "drug-likeness" as a designed property, and therapeutics once in the conclusion."
Correct. I wasn't aware that at any point we claimed that this was a therapeutic or even a lead compound for a therapeutic. The discussion about drug discovery in academia vs. industry, while interesting, is in my opinion somewhat off-topic. A more relevant question is whether it is worth investigating one compound with a detailed approach (which you are going to have to do if you want any kind of mechanism-based inhibitor) or trying a high-throughput, non-mechanistic phenotypic screening approach. I'm agnostic on this point, and I think both are viable (or maybe both non-viable) options. Large-scale phenotypic screening for Alzheimer's is going to exceed the resources of an academic lab. Based on the amount of money spent by pharma and the current success rate, I suspect it's been tried on some level and failed at a relatively early stage.
@21 from JSR "If the end result of months or years of work by 14 authors and almost as many sources of funding...
The non-mass-spec work (the bulk of the paper) was supported by a single R21 and a private foundation grant, of which this paper is a small part.
@21 from JSR "not ready to publish, especially not in the once hallowed pages of JACS."
"MedChem journals likely would have asked that more work be done to answer some of the same questions Derek raised."
@35 "I’d add ‘who partners with someone who knows how to build / run relevant screening assays"
There are no relevant high-throughput screening assays for amyloid inhibition in common use. This point in particular I would like to stress, and it is the reason (as one of the commenters guessed) we left some of the expected data out of the paper. A very high percentage of the papers in JACS and J. Med. Chem. on amyloid inhibitors consist of a set of compounds with only three sets of data: a high-throughput thioflavin T (ThT) assay to measure amyloid inhibition, a set of EM images to show amyloid disappearing, and an MTT assay. There is very rarely any kind of pharmacokinetics, often not even to the extent of calculating drug-likeness (if you don't believe me, look up amyloid inhibitors in basically any journal, including the med-chem ones). Though usually not acknowledged, the ThT assay has a very high false positive rate, since ThT generally binds at the same site as the inhibitor. Although not in the paper, we have shown this is true for the compound in the paper and many others. EM images suffer from multiple issues due to bias in binding to the grid, selection bias in sampling, etc. The MTT assay has a sensitivity problem as suggested, and is not ideal for amyloid for a variety of other reasons.
The conformational antibodies sometimes used are also pretty non-specific, although this is only occasionally acknowledged in the literature. The end result is a lot of compounds with apparently quantifiable information that really isn't. There is no information on where the compound binds and what it binds to (amyloid beta is a mixture of many different, rapidly equilibrating species even when it is claimed to be in a single form).
If you have experience in high-throughput screening, I urge you to team up with an amyloid person (there are many amyloid-specific factors that need to be considered). The field desperately needs you. Also, if you know of compounds for which reliable PK data has been obtained, let me know (jbrender at umich.edu). I am compiling a database of amyloid inhibitors and am discouraged at what I am finding.
Our goal in the Ramamoorthy NMR lab in particular was to take a single compound and analyze its binding to low-MW and fibrillar Abeta, using a labor-intensive approach, with the aim of developing a future high-throughput fluorescence-based approach to isolate specific interactions with different Abeta species (some unpublished progress has been made on the fluorescence work).
The study is only one of a handful that have identified specific interactions in terms of a structure of Abeta (the new structure we have is the only high-resolution structure not in detergents or organic solvents). ML binds at a specific site on the structure, and looking back at the literature, you can see a similar binding site for many of the compounds in the literature. That, to me at least, is interesting.
In conclusion, it is not a complete story by any means, just a progress report. But a complete story with Abeta and Alzheimer's is going to take a very long time.
Note: I'm turning off comments here, so they can continue to thread in the previous post. I may have some more to say on this myself, but I'll leave that to another entry.
A reader sent along this paper that's come out recently in JACS, from a Michigan/South Korea/UCSB team of researchers. It's directed towards a possible therapeutic agent for Alzheimer's disease. They're attempting to build a molecule that binds beta-amyloid, coordinates metals, and has antioxidant properties all at the same time.
An uncharitable view would be that they have also taken aim at the year 1995, which is about when all three of these ideas were also being worked on for AD. But it's not like the field has cleared up too many of these questions since then, so perhaps that gets a pass, although it should be noted (but isn't in the paper) that no one has ever been able to find any significant effect on Alzheimer's from treatment with either antioxidants or metal chelators. The debate on whether anyone has been able to see anything significant with agents targeting amyloid is still going on (and how).
I bring that up partly for mechanistic plausibility, and partly because of the all-in-one aspect of the molecule that the paper is studying. Any such drug candidate has to justify its existence versus a mixture of therapies given simultaneously, especially since the odds are that it will not be as efficacious against all (or even any) of its subtargets compared to a cocktail of more specific agents. With Alzheimer's, it's tempting to say that well, we're hitting all three of these mechanisms at once, so that has to be a good thing. But are all three of them equally important? The fraction of your compound that's binding amyloid is presumably not available to serve as an antioxidant. The ones that have chelated metals are not available to bind amyloid, and so on.
Most of the paper details experiments to show that the ligand does indeed bind amyloid, both in the soluble form and as fibrils. But there's room to argue there, too. Some in the field think that altering the distribution between those populations could be important (I'm agnostic on this point, as I am about amyloid in general). If you're binding to all of them, though, what happens? There's information on the compound's effect on amyloid oligomerization, but the connection between that and Alzheimer's pathology is also up for argument. These questions, already complicated, are made harder to think about by the absence of any quantitative binding data in the paper - at least, if it's there, I'm not seeing it yet. There are mass spec, LC, and NMR experiments, but no binding constants.
There's also little or no SAR. You'd almost get the impression that this was the first and only compound made and tested, because there's nothing in the main body of the paper about any analogs, other than a comparison to a single quinolinemethanol. Even without binding data, some qualitative comparisons might have been made to see how the amyloid binding responded to changes in the structure, as well as how it balanced with the metal-binding and antioxidant properties.
There's some cell-assay data, viability in the presence of amyloid (with and without metals), and it looks like under A-beta-42 conditions the cells are about 70% viable without the compound, and around 90% with it. (It also looks like the cell viability is only in the lower 80% range just when the compound alone is added; I don't know what the background viability numbers are, because that control doesn't seem to be in there). They also tried the same neuroblastoma line with the Swedish-mutation APP in it (a huge risk factor for an early-onset form of human Alzheimer's), but I can't see much difference in the compound's effects.
But as with any CNS proposal, the big question is "Does the compound get into the brain?" The authors, to their credit, do have some data here, but it's puzzlingly incomplete. They show plasma and brain levels after oral gavage (10 mpk) in CD1 mice, but only at one time point, five minutes. That seems mighty early for an oral dose, at least to me, and you really, really want to see a curve here rather than one early time point. For what it's worth, plasma levels were around 6 ng/g and brain levels were around 14 ng/g at that point, but since this was just done by brain homogenate, it's unclear if the compound really gets in or not. No other tissues were examined.
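For what it's worth, those single-time-point numbers do imply a brain-to-plasma ratio, which is easy to work out (a back-of-the-envelope sketch using the ~6 ng/g plasma and ~14 ng/g brain figures above; the homogenate caveat still applies, since residual blood in the brain sample isn't corrected for):

```python
# Back-of-the-envelope brain-to-plasma ratio (Kp) from the single
# reported 5-minute time point. Homogenate values, so this says
# nothing about free drug in the CNS.
def brain_plasma_ratio(brain_ng_per_g: float, plasma_ng_per_g: float) -> float:
    return brain_ng_per_g / plasma_ng_per_g

kp = brain_plasma_ratio(14.0, 6.0)
print(round(kp, 2))  # roughly 2.3 at that one time point
```

A ratio like that would look encouraging on paper, but with only one early time point there's no way to know whether it reflects distribution at steady state or just transient exposure, which is exactly why a full concentration-time curve matters here.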
There also don't seem to be any data on what else this compound might do. If you're seriously proposing it as a possible therapy for Alzheimer's, or as a starting point for one, it would be worthwhile to collect some numbers in selectivity screens. Alternatively, if you're not proposing this as a starting point for Alzheimer's therapy, then why do all this work in the first place (and why write it up for JACS)? This is another one of those cases where I'm honestly baffled by what I'm reading. My industrial perspective sees a single compound given a very labor-intensive in vitro workup on a hazy therapeutic rationale, with no analogs, no selectivity data, and no PK other than one time point, and I just shrug my shoulders with a puzzled look on my face. Why do it?
Well, universities aren't drug companies. And the groups involved are, presumably, not focused on making the next big Alzheimer's breakthrough. But what are they focused on? Training students? That's a really worthwhile goal, but I have to wonder if some way could have been found to train them that would have been a bit more congruent with the real world. Picking three rationales, thinking up a single compound to try to combine them, and then spending all your effort on it as if it's a real lead isn't (to my mind) a good fit. I realize that resources are limited, and that this same level of effort just couldn't have been applied to a whole series of compounds the way it would in an industrial setting (not that we'd have done it). But if you're going to do this stuff, a less-intense look at the amyloid-aggregating and cellular effects of a wider series of compounds could have been more valuable than a lot of information about just one.
I feel bad every time I write like this about academic drug-discovery papers, but I can't help it. From my perspective, there's a lot of confusion out there about what drug discovery really entails, and about the relative value of doing a little of it, or doing it in an odd way.
I've got to take my, uh, hat off to this idea. Rebecca Schuman at Missouri-St. Louis, who writes frequently on academic hiring, made an offer late last week that directly addresses the problem that many aspiring faculty members find themselves facing: search committees apparently want bushels of stuff. And the strong suspicion is that they really don't look at most of it - they just want to see you sending it.
So she simply offered to pay $100 to the first two people who submit proof that they enclosed a scan of their butt among their supporting documents. This had to be a legitimate application, and she (wisely) set herself up as the sole judge of whether the enclosed material was, in fact, a scan of the applicant's rear end. (Some things are too important to be left to anyone else).
The "Buttscan" idea took off in a big way, and by gosh, there's already a winner. I must admit, although I've never applied for an academic position, that I can see the appeal. At a previous job I found myself having to write lengthy reports every six months about what I and my lab had been up to, and I always wanted to include, smack in the middle of yet another paragraph about SAR trends, an offer to pay $5 to the first person who told me that they'd read that far. But I never had the nerve, sadly. On a related note, a former colleague of mine once threatened to sneak into my office while my semi-annual report document was open on my computer and slip the phrase "Help, I'm a woman trapped in a man's body!" into it. But no one would probably have read that one, either. . .
A longtime reader sent along this article from the journal Technological Forecasting and Social Change, which I'll freely admit never having spent much time with before. It's from a team of European researchers, and it's titled "Big Pharma, little science? A bibliometric perspective on Big Pharma's R&D decline".
What they've done is examine the publication record for fifteen of the largest drug companies from 1995 to 2009. They start off by going into the reasons why this approach has to be done carefully, since publications from industrial labs are produced (and not produced) for a variety of different reasons. But in the end:
Given all these limitations, we conclude that the analysis of publications does not in itself reflect the dynamics of Big Pharma's R&D. However, at the high level of aggregation at which we conduct this study (based on about 10,000 publications per year in total, with around 150 to 1500 publications per firm annually) it does raise interesting questions on R&D trends and firm strategies, which can then be discussed in light of complementary quantitative evidence, such as the trends revealed in studies using a variety of other metrics such as patents, as well as statements made by firms in statutory filings and reports to investors.
So what did they find? In the 350 most-represented journals, publications from the big companies made up about 4% of the total content over those years (which comes out to over 10,000 papers). But this share has been dropping slightly but steadily over the period: there are now about 9% fewer publications from Big Pharma than there were at the beginning. This effect might largely be explained by mergers and acquisitions over the same period - in every case, the merged firm seems to publish fewer papers than the old ones did combined.
And here are the subject categories where those papers get published. The green nodes are topics such as pharmacology and molecular biology, and the blue ones are organic chemistry, medicinal chemistry, etc. These account for the bulk of the papers, along with clinical medicine.
The number of authors per publication has been steadily increasing (in fact, even faster than the other baseline for the journals as a whole), and the organizations-per-paper has been creeping up as well, also slightly faster than the baseline. The authors interpret this as an increase in collaboration in general, and note that it's even more pronounced in areas where Big Pharma's publication rate has grown from a small starting point, which (plausibly) they assign to bringing in outside expertise.
One striking result the paper picks up on is that the European labs have been in decline from a publication standpoint, but this seems to be mostly due to the UK, Switzerland, and France. Germany has held up better. Anyone who's been watching the industry since 1995 can assign names to the companies who have moved and closed certain research sites, which surely accounts for much of this effect. The influence of the US-based labs is clear:
Although in most of this analysis we adopt a Europe versus USA comparative perspective, a more careful analysis of the data reveals that European pharmaceutical companies are still remarkably national (or bi-national as a result of mergers in the case of AstraZeneca and Sanofi-Aventis). Outside their home countries, European firms have more publications from US-based labs than all their non-domestic European labs (i.e. Europe excluding the ‘home country’ of the firm). Such is the extent of the national base for collaborations that when co-authorships are mapped into organisational networks there are striking similarities to the natural geographic distribution of countries. . .with Big Pharma playing a notable role spanning the bibliometric equivalent of the ‘Atlantic’.
Here's one of the main conclusions from the trends the authors have picked up:
The move away from Open Science (sharing of knowledge through scientific conferences and publications) is compatible and consistent with the increasing importance of Open Innovation (increased sharing of knowledge — but not necessarily in the public domain). More specifically, Big Pharma is not merely retreating from publication activities but in doing so it is likely to substitute more general dissemination of research findings in publications for more exclusive direct sharing of knowledge with collaboration partners. Hence, the reduction in publication activities – next to R&D cuts and lab closures – is indicative of a shift in Big Pharma's knowledge sharing and dissemination strategies.
Putting this view in a broader historical perspective, one can interpret the retreat of Big Pharma from Open Science, as the recognition that science (unlike specific technological capabilities) was never a core competence of pharmaceutical firms and that publication activity required a lot of effort, often without generating the sort of value expected by shareholders. When there are alternative ways to share knowledge with partners, e.g. via Open Innovation agreements, these may be attractive. Indeed an associated benefit of this process may be that Big Pharma can shield itself from scrutiny in the public domain by shifting and distributing risk exposure to public research organisations and small biotech firms.
Whether the retreat from R&D and the focus on system integration are a desirable development depends on the belief in the capacities of Big Pharma to coordinate and integrate these activities for the public good. At this stage, one can only speculate. . .
Here's more on the problems with non-reproducible results in the literature (see here for previous blog entries on this topic). Various reports over the last few years indicate that about half of the attention-getting papers can't actually be replicated by other research groups, and the NIH seems to be getting worried about that:
The growing problem is threatening the reputation of the US National Institutes of Health (NIH) based in Bethesda, Maryland, which funds many of the studies in question. Senior NIH officials are now considering adding requirements to grant applications to make experimental validations routine for certain types of science, such as the foundational work that leads to costly clinical trials. As the NIH pursues such top-down changes, one company is taking a bottom-up approach, targeting scientists directly to see if they are willing to verify their experiments. . .
. . .Last year, the NIH convened two workshops that examined the issue of reproducibility, and last October, the agency’s leaders and others published a call for higher standards in the reporting of animal studies in grant applications and journal publications. At a minimum, they wrote, studies should report on whether and how animals were randomized, whether investigators were blind to the treatment, how sample sizes were estimated and how data were handled.
The article says that the NIH is considering adding some sort of independent verification step for some studies - those that point towards clinical trials or new modes of treatment, most likely. Tying funding (or renewed funding) to that seems to make some people happy, and others, well:
The very idea of a validation requirement makes some scientists queasy. “It’s a disaster,” says Peter Sorger, a systems biologist at Harvard Medical School in Boston, Massachusetts. He says that frontier science often relies on ideas, tools and protocols that do not exist in run-of-the-mill labs, let alone in companies that have been contracted to perform verification. “It is unbelievably difficult to reproduce cutting-edge science,” he says.
But others say that independent validation is a must to counteract the pressure to publish positive results and the lack of incentives to publish negative ones. Iorns doubts that tougher reporting requirements will make any real impact, and thinks that it would be better to have regular validations of results, either through random audits or selecting the highest-profile papers.
I understand the point that Sorger is trying to make. Some of this stuff really is extremely tricky, even when it's real. But at some point, reproducibility has to be a feature of any new scientific discovery. Otherwise, well, we throw it aside, right? And I appreciate that there's often a lot of grunt work involved in getting some finicky, evanescent result to actually appear on command, but that's work that has to be done by someone before a discovery has value.
For new drug ideas, especially, those duties have traditionally landed on the biopharma companies themselves - you'll note that the majority of reports about trouble with reproducing papers come from inside the industry. And it's a lot of work to bring these things along to the point where they can hit their marks every time, biologically and chemically. Academic labs don't spend too much time trying to replicate each other's studies; they're too busy working on their own things. When a new technique catches on, it spreads from lab to lab, but target-type discoveries, something that leads to a potential human therapy, often end up in the hands of those of us who are hoping to be able to eventually sell it. We have a big interest in making sure they work.
Here's some of the grunt work that I was talking about:
On 30 July, Science Exchange launched a programme with reagent supplier antibodies-online.com, based in Aachen, Germany, to independently validate research antibodies. These are used, for example, to probe gene function in biomedical experiments, but their effects are notoriously variable. “Having a third party validate every batch would be a fabulous thing,” says Peter Park, a computational biologist at Harvard Medical School. He notes that the consortium behind ENCODE — a project aimed at identifying all the functional elements in the human genome — tested more than 200 antibodies targeting modifications to proteins called histones and found that more than 25% failed to target the advertised modification.
I have no trouble believing that. Checking antibodies, at least, is relatively straightforward, but that's because they're merely tools to find the things that point towards the things that might be new therapies. It's a good place to start, though. Note that in this case, too, there are commercial considerations at work, which do help to focus things and move them along. They're not the magic answer to everything, but market forces sure do have their place.
The big question, at all these levels, is who's going to do the follow-up work and who's going to pay for it. It's a question of incentives: venture capital firms want to be sure that they're launching a company whose big idea is real. The NIH wants to be sure that they're funding things that actually work and advance the state of knowledge. Drug companies want to be sure that the new ideas they want to work on are actually based in reality. From what I can see, the misalignment comes in the academic labs. It's not that researchers are indifferent to whether their new discoveries are real, of course - it's just that by the time all that's worked out, they may have moved on to something else, and it might all just get filed away as Just One Of Those Things. You know, cutting-edge science is hard to reproduce, just like that guy from Harvard was saying a few paragraphs ago.
So it would help, I think, to have some rewards for producing work that turns out to be solid enough to be replicated. That might slow down the rush to publish a little bit, to everyone's benefit.
Here's an update on the NIH's NCATS program to repurpose failed clinical candidates from the drug industry. I wrote about this effort here last year, and expressed some skepticism. It's not that I think that trying drugs (or near-drugs) for other purposes is a bad idea prima facie, because it isn't. I just wonder about the way the NIH is talking about this, versus its chances for success.
As was pointed out last time this topic came up, the number of failed clinical candidates involved in this effort is dwarfed by the number of approved compounds that could also be repurposed - and have, in fact, been looked at for years for just that purpose. The success rate is not zero, but it has not been a four-lane shortcut to the promised land, either. And the money involved here ($12.7 million split between nine grants) is, as that Nature piece correctly says, "not much". Especially when you're going after something like Alzheimer's:
Strittmatter’s team is one of nine that won funding last month from the NIH’s National Center for Advancing Translational Sciences (NCATS) in Bethesda, Maryland, to see whether abandoned drugs can be aimed at new targets. Strittmatter, a neurobiologist at Yale University in New Haven, Connecticut, hopes that a failed cancer drug called saracatinib can block an enzyme implicated in Alzheimer’s. . .
. . .Saracatinib inhibits the Src family kinases (SFKs), enzymes that are commonly activated in cancer cells, and was first developed by London-based pharmaceutical company AstraZeneca. But the drug proved only marginally effective against cancer, and the company abandoned it — after spending millions of dollars to develop it through early human trials that proved that it was safe. With that work already done, Strittmatter’s group will be able to move the drug quickly into testing in people with early-stage Alzheimer’s disease.
The team plans to begin a 24-person safety and dosing trial in August. If the results are good, NCATS will fund the effort for two more years, during which the scientists will launch a double-blind, randomized, placebo-controlled trial with 159 participants. Over a year, the team will measure declines in glucose metabolism — a marker for progression of Alzheimer’s disease — in key brain regions, hoping to find that they have slowed.
If you want some saracatinib, you can buy some, by the way (that's just one of the suppliers). And since AZ has already taken this through Phase I, the chances for it passing another Phase I are very good indeed. I will not be impressed by any press releases at that point. The next step, the Phase IIa with 159 people, is as far as this program is mandated to go. But how far is that? One year is not very long in a population of Alzheimer's patients, and 159 patients is not all that many in a disease this heterogeneous. And the whole trial is looking at a secondary marker (glucose metabolism) which (to the best of my knowledge) has not demonstrated any clinical utility as a measure of efficacy for the disease. From what I know about the field, getting someone at that point to put up the big money for larger trials will not be an easy sell.
I understand the impulse to go after Alzheimer's - who dares, wins, eh? But given the amount of money available here, I think the chances for success would be better against almost any other disease. It is very possible to take a promising-looking Alzheimer's candidate all the way through a multi-thousand-patient multiyear Phase III and still wipe out - ask Eli Lilly, among many others. You'd hope that at least a few of them are in areas where there's a shorter, more definitive clinical readout.
Here's the list, and here's the list of all the compounds that have been made available to the whole effort so far. Update: structures here. The press conference announcing the first nine awards is here. The NIH has not announced what the exact compounds are for all the grants, but I'm willing to piece it together myself. Here's what I have:
One of them is saracatinib again, this time for lymphangioleiomyomatosis. There's also an ER-beta agonist being looked at for schizophrenia, a J&J/Janssen nicotinic allosteric modulator for smoking cessation, and a Pfizer ghrelin antagonist for alcoholism (maybe from this series?). There's a Sanofi compound for Duchenne muscular dystrophy, which the NIH has studiously avoided naming, although it's tempting to speculate that it's riferminogene pecaplasmide, a gene-therapy vector for FGF1. But Genetic Engineering News says that there are only seven compounds, with a Sanofi one doubling up as well as the AZ kinase inhibitor, so maybe this one is the ACAT inhibitor below. That makes more sense than a small amount of money trying to advance a gene therapy approach, for sure.
There's an endothelin antagonist for peripheral artery disease. Another unnamed Sanofi compound is being studied for calcific aortic valve stenosis, and my guess is that it's canosimibe, an ACAT inhibitor, since that enzyme has recently been linked to stenosis and heart disease. Finally, there's a Pfizer glycine transport inhibitor being looked at for schizophrenia, which seems a bit odd, because I was under the impression that this compound had already failed in the clinic for that indication. They appear to have some other angle.
So there you have it. I look forward to seeing what comes of this effort, and also to hearing what the NIH will have to say at that point. We'll check in when the time comes!
Update: here's more from Collaborative Chemistry. And here's a paper they published on the problems of identifying compounds for initiatives like this:
In particular, it is notable that NCATS provides on its website only the code number, selected international non-proprietary names (INN) and links to more information including mechanism of action, original development indication, route of administration and formulation availability. However, the molecular structures corresponding to the company code numbers were not included. Although we are highly supportive of the efforts of NCATS to promote drug repurposing in the context of facilitating and funding proposals, we find this omission difficult to understand for a number of reasons. . .
They're calling for the NIH (and the UK initiative in this area as well) to provide real structures and IDs for the compounds they're working with. It's hard to argue against it!
Over at Forbes, John Osborne adds some details to what has been apparent for some time now: the drug industry seems to have no particular friends inside the Obama administration:
Earlier this year I listened as a recently departed Obama administration official held forth on the industry and its rather desultory reputation. . .the substance of the remarks, and the apparent candor with which they were delivered, remain fresh in my mind, not least because of the important policy implications that the comments reflect.
. . .In part, there’s a lingering misimpression as to how new medicines are developed. While the NIH and its university research grantees make extraordinary discoveries, it is left to for-profit pharmaceutical and biotechnology companies to conduct the necessary large scale clinical studies and obtain regulatory approval prior to commercialization. Compare the respective annual spending totals: the NIH budget is around $30 billion, and the industry spends nearly double that amount. While the administration has great affection for universities, non-profit patient groups and government researchers (and it was admirably critical of the sequester’s meat cleaver impact on government sponsored research programs), it does not credit the essential role of industry in bringing discoveries from the bench to the bedside.
Terrific. I have to keep reminding myself how puzzled I was when I first came across the "NIH and universities discover all the drugs" mindset, but repeated exposures to it over the last few years have bred antibodies. If anyone from the administration would like to hear what someone who is not a lobbyist, not a CEO, not running for office, and has actually done this sort of work has to say about the topic, well, there are plenty of posts on this blog to refer to (and the comments sections to them are quite lively, too). In fact, I think I'll go ahead and link to a whole lineup of them - that way, when the topic comes up again, and it will, I can just send everyone here:
There we go - hours of reading, and all in the service of adding some reality to what is often a discussion full of unicorn burgers. Back to Osborne's piece, though - he goes on to make the point that one of the other sources of trouble with the administration is that the drug industry has continued to be profitable during the economic downturn, which apparently has engendered some suspicion.
And now for some 100-proof politics. The last of Osborne's contentions is that the administration (and many legislators as well) see the Medicare Part D prescription drug benefit as a huge windfall for the industry, and one that should be rolled back via a rebate program, setting prices back to what gets paid out under the Medicaid program instead. Ah, but opinions differ on this:
It’s useful to recall that former Louisiana Congressman and then PhRMA head Billy Tauzin negotiated with the White House in 2009 on behalf of the industry over this very question. Under the resulting deal, the industry agreed to support passage of the ACA and to make certain payments in the form of rebates and fees that amounted to approximately $80 billion over ten years; in exchange the administration agreed to resist those in Congress who pressed for more concessions from the drug companies or wanted to impose government price setting. . .
Tauzin's role, and the deal that he helped cut, have not been without controversy. I've always been worried about deals like this being subject to re-negotiations whenever it seems convenient, and those worries are not irrational, either:
. . .The White House believes that the industry would willingly (graciously? enthusiastically?) accept a new Part D outpatient drug rebate. Wow. The former official noted that the Simpson-Bowles deficit reduction panel recommended it, and its report was favorably endorsed by no less than House Speaker Boehner. Apparently, it is inconceivable to the White House that Boehner’s endorsement of the Simpson-Bowles platform would have occurred without the industry’s approval. Wow, again. That may be a perfectly logical assumption, but the other industry representatives within earshot never imagined that they had endorsed any such thing. No, it’s clear they have been under the (naïve) impression that the aforementioned $80 billion “contribution” was a very substantial sum in support of patients and the government treasury – and offered in a spirit of cooperation in recognition of the prospective benefits to industry of the expanded coverage that lies at the heart of Obamacare. With that said, the realization that this may be just the first of several installment payments left my colleagues in stunned silence; some mouths were visibly agape.
This topic came up late last year around here as well. And it'll come up again.
Speaking about open-source drug discovery (such as it is) and sharing of data sets (such as they are), I really should mention a significant example in this area: the GSK Published Kinase Inhibitor Set. (It was mentioned in the comments to this post). The company has made 367 compounds available to any academic investigator working in the kinase field, as long as they make their results publicly available (at ChEMBL, for example). The people at GSK doing this are David Drewry and William Zuercher, for the record - here's a recent paper from them and their co-workers on the compound set and its behavior in reporter-gene assays.
Why are they doing this? To seed discovery in the field. There's an awful lot of chemical biology to be done in the kinase field, far more than any one organization could take on, and the more sets of eyes (and cerebral cortices) that are on these problems, the better. So far, there have been about 80 collaborations, mostly in Europe and North America, all the way from broad high-content phenotypic screening to targeted efforts against rare tumor types.
The plan is to continue to firm up the collection, making more data available for each compound as work is done on them, and to add more compounds with different selectivity profiles and chemotypes. Now, the compounds so far are all things that have been published on by GSK in the past, obviating concerns about IP. There are, though, a multitude of other compounds in the literature from other companies, and you have to think that some of these would be useful additions to the set. How, though, does one get this to happen? That's the stage that things are in now. Beyond that, there's the possibility of some sort of open network to optimize entirely new probes and tools, but there's plenty that could be done even before getting to that stage.
So if you're in academia, and interested in kinase pathways, you absolutely need to take a look at this compound set. And for those of us in industry, we need to think about the benefits that we could get by helping to expand it, or by starting similar efforts of our own in other fields. The science is big enough for it. Any takers?
Crowdfunding academic research might be changing, from a near-stunt to a widely used method of filling gaps in a research group's money supply. At least, that's the impression this article at Nature Jobs gives:
The practice has exploded in recent years, especially as success rates for research-grant applications have fallen in many places. Although crowd-funding campaigns are no replacement for grants — they usually provide much smaller amounts of money, and basic research tends to be less popular with public donors than applied sciences or arts projects — they can be effective, especially if the appeals are poignant or personal, involving research into subjects such as disease treatments.
The article details several venues that have been used for this sort of fund-raising, including Indiegogo, Kickstarter, RocketHub, FundaGeek, and SciFund Challenge. I'd add Microryza to that list. And there's a lot of good advice for people thinking about trying it themselves, including how much money to try for (at least at first), the timelines one can expect, and how to get your message out to potential donors.
Overall, I'm in favor of this sort of thing, but there are some potential problems. This gives the general public a way to feel more connected to scientific research, and to understand more about what it's actually like, both of which are goals I feel a close connection to. But (as that quote above demonstrates), some kinds of research are going to be an easier sell than others. I worry about a slow (or maybe not so slow) race to the bottom, with lab heads overpromising what their research can deliver, exaggerating its importance to immediate human concerns, and overselling whatever results come out.
These problems have, of course, been noted. Ethan Perlstein, formerly of Princeton, used RocketHub for his crowdfunding experiment that I wrote about here. And he's written at Microryza with advice about how to get the word out to potential donors, but that very advice has prompted a worried response over at SciFund Challenge, where Jai Ranganathan had this to say:
His bottom line? The secret is to hustle, hustle, hustle during a crowdfunding campaign to get the word out and to get media attention. With all respect to Ethan, if all researchers running campaigns follow his advice, then that’s the end for science crowdfunding. And that would be a tragedy because science crowdfunding has the potential to solve one of the key problems of our time: the giant gap between science and society.
Up to a point, these two are talking about different things. Perlstein's advice is focused on how to run a successful crowdfunding campaign (based on his own experience, which is one of the better guides we have so far), while Ranganathan is looking at crowdfunding as part of something larger. Where they intersect, as he says, is that it's possible that we'll end up with a tragedy of the commons, where the strategy that's optimal for each individual's case turns out to be (very) suboptimal for everyone taken together. He's at pains to mention that Ethan Perlstein has himself done a great job with outreach to the public, but worries about those to follow:
Because, by only focusing on the mechanics of the campaign itself (and not talking about all of the necessary outreach), there lurks a danger that could sink science crowdfunding. Positive connections to an audience are important for crowdfunding success in any field, but they are especially important for scientists, since all we have to offer (basically) is a personal connection to the science. If scientists omit the outreach and just contact audiences when they want money, that will go a long way to poisoning the connections between science and the public. Science crowdfunding has barely gotten started and already I hear continuous complaints about audience exasperation with the nonstop fundraising appeals. The reason for this audience fatigue is that few scientists have done the necessary building of connections with an audience before they started banging the drum for cash. Imagine how poisonous the atmosphere will become if many more outreach-free scientists aggressively cold call (or cold e-mail or cold tweet) the universe about their fundraising pleas.
Now, when it comes to overpromising and overselling, a cynical observer might say that I've just described the current granting system. (And if we want even more of that sort of thing, all we have to do is pass a scheme like this one). But the general public will probably be a bit easier to fool than a review committee, at least, if you can find the right segment of the general public. Someone will probably buy your pitch, eventually, if you can throw away your pride long enough to keep on digging for them.
That same cynical observer might say that I've just described the way that we set up donations to charities, and indeed Ranganathan makes an analogy to NPR's fundraising appeals. That's the high end. The low end of the charitable-donation game is about as low as you can go - just run a search for the words "fake" and "charity" through Google News any day, any time, and you can find examples that will make you ashamed that you have the same number of chromosomes as the people you're reading about. (You probably do). Avoiding this state really is important, and I'm glad that people are raising the issue already.
What if, though, someone were to set up a science crowdfunding appeal, with hopes of generating something that could actually turn a profit, and portions of that to be turned over to the people who put up the original money? We have now arrived at the biopharma startup business, via a different road than usual. Angel investors, venture capital groups, shareholders in an IPO - all of these people are doing exactly that, at various levels of knowledge and participation. The pitch is not so much "Give us money for the good of science", but "Give us money, because here's our plan to make you even more". You will note that the scale of funds raised by the latter technique makes those raised by the former look like a roundoff error, which fits in pretty well with what I take as normal human motivations.
But academic science projects have no such pitch to make. They'll have to appeal to altruism, to curiosity, to mood affiliation, and other nonpecuniary motivations. Done well, that can be a very good thing, and done poorly, it could be a disaster.
Senator Ron Wyden (D-Oregon) seems to be the latest champion of the "NIH discovers drugs and Pharma rips them off" viewpoint. Here's a post from John LaMattina on Wyden's recent letter to Francis Collins. The proximate cause of all this seems to be the Pfizer JAK3 inhibitor:
Tofacitinib (Xeljanz), approved last November by the U.S. Food and Drug Administration, is nearing the market as the first oral medication for the treatment of rheumatoid arthritis. Given that the research base provided by the National Institutes of Health (NIH) culminated in the approval of Xeljanz, citizens have the right to be concerned about the determination of its price and what return on investment they can expect. While it is correct that the expenses of drug discovery and preclinical and clinical development were fully undertaken by Pfizer, taxpayer-funded research was foundational to the development of Xeljanz.
I think that this is likely another case where people don't quite realize the steepness of the climb between "X looks like a great disease target" and "We now have an FDA-approved drug targeting X". Here's more from Wyden's letter:
Developing drugs in America remains a challenging business, and NIH plays a critically important role by doing research that might not otherwise get done by the private sector. My bottom line: When taxpayer-funded research is commercialized, the public deserves a real return on its investment. With the price of Xeljanz estimated at about $25,000 a year and annual sales projected by some industry experts as high as $2.5 billion, it is important to consider whether the public investment has assured accessibility and affordability.
This is going to come across as nastier than I intend it to, but my first response is that the taxpayer's return on this was that they got a new drug where there wasn't one before. And via the NIH-funded discoveries, the taxpayers stimulated Pfizer (and many other companies) to spend huge amounts of money and effort to turn the original discoveries in the JAK field into real therapies. I value knowledge greatly, but no human suffering whatsoever was relieved by the knowledge alone that JAK3 appeared to play a role in inflammation. What was there was the potential to affect the lives of patients, and that potential was realized by Pfizer spending its own money.
And not just Pfizer. Let's not forget that the NIH entered into research agreements with many other companies, and that the list of JAK3-related drug discovery projects is a long one. And keep in mind that not all of them, by any means, have ever earned a nickel for the companies involved, and that many of them never will. As for Pfizer, Xeljanz has been on the market for less than six months, so it's too early to say how the drug will do. But it's not a license to print money, and is in a large, extremely competitive market. And should it run into trouble (which I certainly hope doesn't happen), I doubt if Senator Wyden will be writing letters seeking to share some of the expenses.
You'll have heard about Yuri Milner, the Russian entrepreneur (early Facebook investor, etc.) who's recently announced some rather generous research prize awards:
Yesterday, Milner, along with some “old friends”—Google cofounder Sergey Brin, Facebook CEO Mark Zuckerberg, and their respective wives—announced they are giving $33 million in prizes to 11 university-based biologists. Five of the awards, called the Breakthrough Prize in Life Sciences, will be given annually going forward; they are similar to prizes for fundamental physics that Milner started giving out last year.
At $3 million apiece, the prize money tops the Nobels, whose purse is around $1 million. Yet neither amount is much compared to what you can make if you drop out of science and find a calling in Silicon Valley, as Brin, Milner, and Zuckerberg did.
Technology Review has a good article on the whole effort. After looking over the awardees, Antonio Regalado has some speculation:
But looking over the list (the New York Times published it along with some useful biographical details here), I noticed some very strong similarities between the award winners. Nearly all are involved in studying cancer genetics or cancer stem cells, and sometimes both.
In other words, this isn’t any old list of researchers. It’s actually the scientific advisory board of Cure for Cancer, Inc. Because lately, DNA sequencing and better understanding of stem cells have become the technologies that look most likely to maybe, just maybe, point toward some real cancer cures.
Wouldn't surprise me. This is a perfectly good area of research for targeted funding, and a good infusion of cash is bound to help move things along. The article stops short of saying that Milner (or someone he knows) might have a personal stake in all this, but that wouldn't be the first time that situation has influenced the direction of research, either. I'm fine with that, actually - people have a right to do what they want to with their own money, and this sort of thing is orders of magnitude more useful than taking the equivalent pile of money and buying beachfront mansions with it. (Or a single beachfront mansion, come to think of it, depending on what market we're talking about).
I've actually been very interested in seeing how some of the technology billionaires have been spending their money. Elon Musk, Jeff Bezos, Larry Page, Sergey Brin, etc., have been putting some money behind some very unusual ventures, and I'm very happy to see them do it. If I were swimming in that kind of cash, I'd probably be bankrolling my own space program or something, too. Of course, those sorts of ideas are meant to eventually turn a profit. In that space example, you have tourism, launch services, asteroid mining, orbiting solar power, and a lot of other stuff familiar to anyone who ever read an old John W. Campbell editorial.
What about the biopharma side? You can try to invest to make money there, but it's worth noting that not a lot of tech-era money has gone into venture capital in this area. Are we going to see more of it going as grants to academia? If so, that says something about the state of the field, doesn't it? Perhaps the thinking is that there's still so much basic science to be learned that you get more for your dollar investing in early research - at least, it could lead to something that's a more compelling venture. And I'd be hard pressed to argue.
Chemistry World has really touched a lot of nerves with this editorial by economics professor Paula Stephan. It starts off with a look back to the beginnings of the NIH and NSF, Vannevar Bush's "Endless Frontier":
. . .a goal of government and, indirectly, universities and medical schools, was to build research capacity by training new researchers. It was also to conduct research. However, it was never Bush’s vision that training be married to research. . .
. . .It did not take long, however, for this to change. Faculty quickly learned to include graduate students and postdocs on grant proposals, and by the late 1960s PhD training, at least in certain fields, had become less about capacity building and more about the need to staff labs.
Staff them we have, and as Prof. Stephan points out, the resemblance to a pyramid scheme is uncomfortable. The whole thing can keep going as long as enough jobs exist, but if that ever tightens up, well. . .have a look around. Why do chemists-in-training (and other scientists) put up with the state of affairs?
Are students blind or ignorant to what awaits them? Several factors allow the system to continue. First, there has, at least until recently, been a ready supply of funds to support graduate students as research assistants. Second, factors other than money play a role in determining who chooses to become a scientist, and one factor in particular is a taste for science, an interest in finding things out. So dangle stipends and the prospect of a research career in front of star students who enjoy solving puzzles and it is not surprising that some keep right on coming, discounting the all-too-muted signals that all is not well on the job front. Overconfidence also plays a role: students in science persistently see themselves as better than the average student in their program – something that is statistically impossible.
I don't think the job signals are particularly muted, myself. What we do have are a lot of people who are interested in scientific research, would like to make careers of it, and find themselves having to go through the system as it is because there's no other one to go through.
Stephan's biggest recommendation is to try to decouple research from training: the best training is to do research, but you can do research without training new people all the time. This would require more permanent staff, as opposed to a steady stream of new students, and that's a proposal that's come up before. But even if we decide that this is what's needed, where are the incentives to do it? You'd have to go back to the source of the money, naturally, and fund people differently. Until something's done at that level, I don't see much change coming, in any direction.
There's a new Viewpoint piece out in ACS Medicinal Chemistry Letters on academia and drug discovery. Donna Huryn of Pittsburgh is wondering about the wisdom of trying to reproduce a drug-company environment inside a university:
However, rather than asking how a university can mimic a drug discovery company, perhaps a better question is what unique features inherent in an academic setting can be taken advantage of, embellished, and fostered to promote drug discovery and encourage success? Rather than duplicating efforts already ongoing in commercial organizations, a university has an opportunity to offer unique, yet complementary, capabilities and an environment that fosters drug discovery that could generate innovative therapies, all the while adhering to its educational mission.
A corollary to this question is the converse—what aspects of drug discovery efforts within a university might be inconsistent with its primary goal of education and research, and can solutions be found to allow success in both?
Her take is that a university should take advantage of whatever special expertise its faculty have in particular areas of biology, pharmacology, etc., which could give it an advantage compared with the staff of a given pharma company. This isn't always easy, though, for cultural reasons:
While it seems that a university should have the tools to make significant contributions to drug discovery by taking advantage of the resident expertise, a cultural change might be required to foster an environment that values the teamwork required to make these efforts successful. Certainly funding agencies are moving in this direction with the establishment of multi-Principal Investigator designations that are designed to “maximize the potential of team science efforts”. Additionally, internal grants offered by academic institutions often insist that the proposed research involve multiple disciplines, departments, or even schools within the University. However, it seems that a concerted effort to “match-make” scientists with complementary expertise and an interest in drug discovery, finding ways to reward collaborative research efforts, and even, perhaps, establishing a project management-type infrastructure would facilitate a university-based drug discovery program.
She also makes the case that universities should use their ability to pursue higher-risk projects, given that they're not beholden to investors. I couldn't agree more - in fact, I think that's one of their biggest strengths. I'd define "high-risk" (by commercial standards) as any combination of (1) unusual mechanism of action, (2) little-understood disease area, (3) atypical chemical matter, and (4) a need for completely new assay technology. If you try to do all of those at once, you're going to land on your face, most likely. But some pharma companies don't even like to hear about one out of the four, and two out of four is going to be a hard sell.
And I think Huryn's broader point is well taken: we already have drug companies, so trying to make more of them inside universities seems like a waste of time and money. We need as many different approaches as we can get.
So here's a question that a lot of people around here will have strong opinions on. I've heard from someone in an academic group that's looking into doing some high-throughput screening. As they put it, they don't want to end up as "one of those groups", so they're looking for advice on how to get into this sensibly.
I applaud that; I think it's an excellent idea to look over the potential pitfalls before you hop into an area like this. My first advice would be to think carefully about why you're doing the screening. Are you looking for tool compounds? Do they need to get into cells? Are you thinking of following up with in vivo experiments? Are you (God help you) looking for potential drug candidates? Each of these requires a somewhat different view of the world.
No matter what, I'd say that you should curate the sorts of structures that you're letting in. Consider the literature on frequent-hitter structures (here's a good starting point, blogged here), and decide how much you want to get hits versus being able to follow up on them. I'd also say to keep in mind the Shoichet work on aggregators (most recently blogged here), especially the lesson that these have to be dealt with assay-by-assay. Compounds that behave normally in one system can be trouble in others - make no assumptions.
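To make the aggregator point concrete, here's a minimal sketch of the kind of triage the Shoichet work suggests: re-run your hits with a small amount of detergent in the buffer, and be suspicious of any compound whose activity collapses. The compound names, numbers, and the 50% retention threshold below are all illustrative assumptions, not anything from a published protocol.

```python
# Hypothetical HTS hit triage: flag likely colloidal aggregators by
# comparing inhibition with and without detergent (a Shoichet-style
# counter-screen). All names and thresholds here are made up.
from dataclasses import dataclass

@dataclass
class Hit:
    name: str
    inhib_plain: float      # % inhibition in plain assay buffer
    inhib_detergent: float  # % inhibition with added detergent

def likely_aggregator(hit: Hit, retention_cutoff: float = 0.5) -> bool:
    """Activity that largely disappears on adding detergent is a classic
    sign of colloidal aggregation rather than genuine binding."""
    if hit.inhib_plain <= 0:
        return False
    retained = hit.inhib_detergent / hit.inhib_plain
    return retained < retention_cutoff

hits = [
    Hit("cmpd-1", 85.0, 80.0),  # activity survives detergent
    Hit("cmpd-2", 90.0, 12.0),  # activity collapses: suspect aggregation
]
clean = [h.name for h in hits if not likely_aggregator(h)]
print(clean)  # ['cmpd-1']
```

This is only a first filter, of course - as noted above, aggregation behavior has to be checked assay-by-assay, so a compound that passes here still needs the same treatment in every new system.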
But there's a lot more to say about this. What would all of you recommend?
My post the other day on a very unattractive screening hit/tool compound prompted a reader to mention this paper. It's one from industry this time (AstraZeneca), and at first it looks like similarly foul chemical matter. But I think it's worth a closer look, to see how they dealt with what they'd been given by screening.
This team was looking for hits against PIM kinases, and the compound shown was a 160nM hit from high-throughput screening. That's hard to ignore, but on the other hand, it's another one of those structures that tell you that you have work to do. It's actually quite similar to the hit from the previous post - similar heterocycle, alkylidene branching to a polyphenol.
So why am I happier reading this paper than the previous one? For one, this structure does have a small leg up, because this thiazolidinedione heterocycle doesn't have a thioamide in it, and it's actually been in drugs that have been used in humans. TZDs are certainly not my first choice, but they're not at the bottom of the list, either. On the other hand, I can't think of a situation where a thioamide shouldn't set off the warning bells, and not just for a compound's chances of becoming a drug. The chances of becoming a useful tool compound are lower, too, for the same reasons (potential reactivity / lack of selectivity). Note that these compounds are fragment-sized, unlike the diepoxide we were talking about the other day, which means that they're likely to be able to fit into more binding sites.
But there's still that aromatic ring. In this case, though, the very first thing this paper says after stating that they decided to pursue this scaffold is: "We were interested to determine whether or not we could remove the phenol from the series, as phenols often give poor pharmacokinetic and drug-like properties." And that's what they set about doing, making a whole series of substituted aryls with less troublesome groups on them. Basic amines branching off from the ortho position led to very good potency, as it turned out, and they were able to ditch the phenol/catechol functionality completely while getting well into (or below) single-digit nanomolar potency. With these compounds, they also did something else important: they tested the lead structures against a panel of over four hundred other kinases to get an idea of their selectivity. This is just the sort of treatment that I think the Tdp-1 inhibitor from the Minnesota/NIH group needs.
To be fair, that other paper did show a number of attempts to get rid of the thioamide head group (all unsuccessful), and they did try a wide range of aryl substituents (the polyphenols were by far the most potent). And it's not like the Minnesota/NIH group was trying to produce a clinical candidate; they're not a drug company. A good tool compound to figure out what selective Tdp-1 inhibition does is what they were after, and it's a worthy goal (there's a lot of unknown biology there). If that had been a drug company effort, those two SAR trends taken together would have been enough to kill the chemical series (for any use) in most departments. But even the brave groups who might want to take it further would have immediately profiled their best chemical matter in as many assays as possible. Nasty functional groups and lack of selectivity would surely have doomed the series anywhere.
And it would doom it as a tool compound as well. Tool compounds don't have to have good whole-animal PK, and they don't have to be scalable to pilot plant equipment, and they don't have to be checked for hERG and all the other in vivo tox screens. But they do have to be selective - otherwise, how do you interpret their results in an assay? The whole-cell extract work that the group reported is an important first step to address that issue, but it's just barely the beginning. And I think that sums up my thoughts when I saw the paper: if it had been titled "A Problematic Possible Tool Compound for Tdp-1", I would have applauded it for its accuracy.
The authors say that they're working on some of these exact questions, and I look forward to seeing what comes out of that work. I'd have probably liked it better if that had been part of the original manuscript, but we'll see how it goes.
I wrote here about the Cronin lab at Glasgow and their work on using 3-D printing technology to make small chemical reactors. Now there's an article on this research in the Observer that's getting some press attention (several people have e-mailed it to me). Unfortunately, the headline gets across the tone of the whole piece: "The 'Chemputer' That Could Print Out Any Drug".
To be fair, this was a team effort. As the reporter notes, Prof. Cronin "has a gift for extrapolation", and that seems to be a fair statement. I think that such gifts have to be watched carefully in the presence of journalists, though. The whole story is a mixture of wonderful-things-coming-soon! and still-early-days-lots-of-work-to-be-done, and these two ingredients keep trying to separate and form different layers:
So far Cronin's lab has been creating quite straightforward reaction chambers, and simple three-step sequences of reactions to "print" inorganic molecules. The next stage, also successfully demonstrated, and where things start to get interesting, is the ability to "print" catalysts into the walls of the reactionware. Much further down the line – Cronin has a gift for extrapolation – he envisages far more complex reactor environments, which would enable chemistry to be done "in the presence of a liver cell that has cancer, or a newly identified superbug", with all the implications that might have for drug research.
In the shorter term, his team is looking at ways in which relatively simple drugs – ibuprofen is the example they are using – might be successfully produced in their 3D printer or portable "chemputer". If that principle can be established, then the possibilities suddenly seem endless. "Imagine your printer like a refrigerator that is full of all the ingredients you might require to make any dish in Jamie Oliver's new book," Cronin says. "Jamie has made all those recipes in his own kitchen and validated them. If you apply that idea to making drugs, you have all your ingredients and you follow a recipe that a drug company gives you. They will have validated that recipe in their lab. And when you have downloaded it and enabled the printer to read the software it will work. The value is in the recipe, not in the manufacture. It is an app, essentially."
What would this mean? Well for a start it would potentially democratise complex chemistry, and allow drugs not only to be distributed anywhere in the world but created at the point of need. It could reverse the trend, Cronin suggests, for ineffective counterfeit drugs (often anti-malarials or anti-retrovirals) that have flooded some markets in the developing world, by offering a cheap medicine-making platform that could validate a drug made according to the pharmaceutical company's "software". Crucially, it would potentially enable a greater range of drugs to be produced. "There are loads of drugs out there that aren't available," Cronin says, "because the population that needs them is not big enough, or not rich enough. This model changes that economy of scale; it could make any drug cost effective."
Not surprisingly Cronin is excited by these prospects, though he continually adds the caveat that they are still essentially at the "science fiction" stage of this process. . .
Unfortunately, "science fiction" isn't necessarily a "stage" in some implied process. Sometimes things just stay fictional. Cronin's ideas are not crazy, but there are a lot of details between here and there, and if you don't know much organic chemistry (as many of the readers of the original article won't), then you probably won't realize how much work remains to be done. Here's just a bit; many readers of this blog will have thought of these and more:
First, you have to get a process worked out for each of these compounds, which will require quite a bit of experimentation. Not all reagents and solvents are compatible with the silicone material that these microreactors are being fabricated from. Then you have to ask yourself, where do the reagents and raw materials come in? Printer cartridges full of acetic anhydride and the like? Is it better to have these shipped around and stored than it is to have the end product? In what form is the final drug produced? Does it drip out the end of the microreactor (and in what solvent?), or is it a smear on some solid matrix? Is it suitable for dosing? How do you know how much you've produced? How do you check purity from batch to batch - in other words, is there any way of knowing if something has gone wrong? What about medicines that need to be micronized, coated, or treated in the many other ways that pills are prepared for human use?
And those are just the practical considerations - some of them. Backing up to some of Prof. Cronin's earlier statements, what exactly are those "loads of drugs out there that aren't available because the population that needs them is not big enough, or not rich enough"? Those would be ones that haven't been discovered yet, because it's not like we in the industry have the shelves lined with compounds that work that we aren't doing anything with for some reason. (Lots of people seem to think that, though). Even if these microreactors turn out to be a good way to make compounds, though, making compounds has not been the rate-limiting step in discovering new drugs. I'd say that biological understanding is a bigger one, or (short of that), just having truly useful assays to find the compounds you really want.
Cronin has some speculations on that, too - he wonders about the possibility of having these microreactors in some sort of cellular or tissue environment, thus speeding up the whole synthesis/assay loop. That would be a good thing, but the number of steps that have to be filled in to get that to work is even larger than for the drug-manufacture-on-site idea. I think it's well worth working on - but I also think it's well worth keeping out of the newspapers until there's something more to report.
Partnerships between industry and academia, of course, aren’t new. Yet Pfizer, Sanofi, Merck & Co. (MRK) and other drug companies are putting a new twist on the arrangement by stepping up their level of collaboration with universities. In the case of Pfizer, the world’s largest drug company is embedding operations in Boston, San Francisco, New York and San Diego, often in the very same buildings where famed academic institutions have labs.
“No matter how much money you have, nothing compares to the innovation going on out in the world,” said Jose Carlos Gutierrez-Ramos, the director of the [new Pfizer lab in Cambridge], in an interview. “We want to be here, integrated into this fabric.”
Right. As I said earlier, I can definitely see the benefit to putting your research center in Cambridge or South San Francisco as opposed to Duluth or Reno. There are a lot of qualified people in the area who might be interested in moving over to join you, for one thing, and for small companies, that's where the (knowledgeable) money tends to hang out. But I still wonder about this cozy-up-to-the-academic-luminaries approach. Pfizer, for example, is making a big deal out of collaborating with Harvard, and their vision of how this is going to work doesn't quite fit into reality as I've come to know it:
Gutierrez-Ramos said he is trying to create an atmosphere at the lab where outside researchers easily come and go, and Pfizer’s scientists visit neighboring academicians on their turf.
Pharmaceutical companies, which historically are highly secretive about their work because of competition, need to be willing to take more risks in the future, he said, creating access to its inner sanctums to develop drugs earlier.
What Pfizer offers academic researchers are “extraordinary” resources for drug development that nearby university labs can’t match, said Harvard’s [Hal] Dvorak.
The problem with all this my-lab-is-your-lab stuff is that money gets involved. Don't think Harvard doesn't appreciate that, either - anyone who imagines a big pharma company snookering the unworldly Harvard Square luftmenschen should go try to do a deal with the university's technology transfer people. Undervaluing the worth of its own research is not one of Harvard's problems. And matters of intellectual property get involved, too - pesky little matters that lead to Jarndyce v. Jarndyce style lawsuits. No, I have trouble imagining people breezing in and out of each other's labs like some sort of drug-discovery effort set in the Seinfeld universe.
What's interesting is that stories like the one I've linked to say that the drug companies are doing this because money is tight, and they need new revenue streams - thus the collaborations. And the universities are doing it because money is tight, and they need new revenue streams. The only way money is going to come out of these deals in order to fulfill both those expectations is for new drugs to be discovered and marketed, and that's a ten-to-fifteen year process. For now, the money is flowing from the drug industry towards academia.
Let's hope that the success rate of the targets improves. Don't get me wrong - I think that collaborations with academia can be useful, and I'm all for both groups getting to understand each other more. But I wonder if people are building expectations up a bit too much, too soon.
I gave my talk at the Drew University Medicinal Chemistry course, and it got me to thinking about when I was there (1990 or 1991), and my early days in medicinal chemistry in general. There are a lot of things that have to be learned when coming out of a synthetic organic chemistry background, and a few that have to be unlearned. I've written about some of these in the past, but I wanted to bring together some specific examples:
1. I had to appreciate just how strange and powerful metabolizing enzymes are. I approached them from the standpoint of an organic chemist, but p450 enzymes can epoxidize benzene, and I don't know any organic chemists that can do that too well. Ripping open piperazine rings, turning cyclohexanes into cyclohexanols - there are a lot of reactions that are common in metabolic clearance that are not, to put it lightly, part of the repertoire of synthetic organic chemistry.
2. I also had to learn a rough version of the Lipinski rules - basically, that physical properties matter, although the degree to which they matter can vary. You can't increase molecular weight or lipophilicity forever without paying for it. Small polar molecules are handled fundamentally differently than big greasy ones in vivo. This was part of learning that there are many, many different potential fates for small molecules when dosed into a living animal.
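That rough version of the Lipinski rules can be written down directly. Here's a minimal sketch of the classic rule-of-five check; the function name is my own, and the property values would in practice come from a cheminformatics toolkit rather than being typed in by hand:

```python
# A sketch of Lipinski's rule-of-five check. The function name and the
# idea of passing precomputed properties are illustrative only; real
# workflows compute MW, logP, etc. with a cheminformatics package.

def lipinski_violations(mol_weight, logp, h_donors, h_acceptors):
    """Count rule-of-five violations for a candidate molecule.

    Classic thresholds: MW <= 500, logP <= 5, H-bond donors <= 5,
    H-bond acceptors <= 10. One violation is usually tolerated;
    two or more flag likely absorption/permeability problems.
    """
    violations = 0
    if mol_weight > 500:
        violations += 1
    if logp > 5:
        violations += 1
    if h_donors > 5:
        violations += 1
    if h_acceptors > 10:
        violations += 1
    return violations

# A small polar molecule versus a big greasy one:
print(lipinski_violations(180.2, 1.2, 2, 4))   # aspirin-like: 0 violations
print(lipinski_violations(720.0, 6.8, 3, 12))  # large and lipophilic: 3 violations
```

The point of the fuzzy thresholds (and the reason one violation is tolerated) is exactly the caveat above: the properties matter, but the degree to which they matter varies from series to series.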
3. Another key realization, which took a while to sink in, was that biological assays had error bars, and that this was true whether or not error bars were presented on the page or the screen. Enzyme assays were a bit fuzzy compared to the numbers I was used to as a chemist, but cell assays were fuzzier. And whole-animal numbers covered an even wider range. I had to understand that this hierarchy was the general rule, and that there was not a lot to be done about it in most cases (except, importantly, to never forget that it was there).
4. As someone mentioned in the comments here the other day, alluding to an old post of mine, I had to learn that although I'd been hearing for years that time was money, grad school had been poor preparation for just how true that was. I was used to making everything that I could rather than buying it, but I had to reverse that thinking completely, since I was being paid to use my head more than my hands. (That didn't mean that I shouldn't use my hands, far from it - only that I should use my head first whenever feasible).
5. I also had to figure out how to use my time more efficiently. Another bad grad school habit was the working all hours of the day routine, which tended to make things stretch out. Back then, if I didn't get that reaction set up in the afternoon, well, I was coming back that evening, so I could do it then. But if I was going to keep more regular working hours, I had to plan things out better to make the best use of my time.
6. There were several big lessons to be learned about where chemistry fit into the whole drug discovery effort. One was that if I made dirty compounds, only dirty results could be expected from them. As mentioned above, even clean submissions gave alarmingly variable results sometimes; what could be expected from compounds with large and variable impurities from prep to prep? One of my jobs was not to make things harder than they already were.
7. A second big lesson, perhaps the biggest, was that chemistry was (and is) a means to an end in drug discovery. The end, of course, is a compound that's therapeutically useful enough that people are willing to pay money for it. Without one or more of those, you are sunk. It follows, first, that anything that does not bear on the problems of producing them has to be considered secondary - not unimportant, perhaps, but secondary to the biggest issue. Without enough compounds to sell, everything else that might look so pressing will, in fact, go away - as will you.
8. The next corollary is that while synthetic organic chemistry is a very useful way to produce such compounds, it is not necessarily the only way. Biologics are an immediate exception, of course, but there are more subtle ones. One of the trickier lessons a new medicinal chemist has to learn is that the enzymes and receptors, the cells and the rats, none of them are impressed by your chemical skills and your knowledge of the literature. They do not care if the latest compound was made by the most elegant application of the latest synthetic art, or by the nastiest low-yielding grunt reaction. What matters is how good that compound might be as a drug candidate, and the chemistry used to make it usually does (and should) get in line behind many more important considerations. "Quickly", "easily", and "reproducibly", in this business, roughly elbow aside the more academic chemical virtues of "complexly", "unusually", and "with difficulty".
I do hate to bring up rhodanines again, but I'm not the one who keeps making the things. This paper from ACS Medicinal Chemistry Letters turns out dozens of the things as potential inhibitors of the cellular protein dynamin, in what a colleague of mine referred to as a "nice exploration of the rhodanome".
He did not say it with a straight face. But this paper does: "The rhodanine core is a privileged scaffold in medicinal chemistry and one that has found promise among many therapeutic applications." Well, that's one way to look at it. Another viewpoint is that rhodanines are "polluting the scientific literature" and that they should "be considered very critically" no matter what activity they show in your assay.
The usual answer to this is that these aren't drugs, they're tool compounds. But I don't think that these structures even make safe tools; they have the potential to do too many other things in cell assays. But if people are going to go ahead and use them, I wish that they'd at least make a nod in that direction, instead of mentioning, in passing, how great the whole class is. And yes, I know that they cite two papers to that effect, but one of those two mainly just references the other one when it comes to rhodanines. My viewpoint is more like this paper's:
Academic drug discovery is being accompanied by a plethora of publications that report screening hits as good starting points for drug discovery or as useful tool compounds, whereas in many cases this is not so. These compounds may be protein-reactive but can also interfere in bioassays via a number of other means, and it can be very hard to prove early on that they represent false starts. . .
. . .Barriers to adoption of best practices for some academic drug-discovery researchers include knowledge gaps and infrastructure deficiencies, but they also arise from fundamental differences in how academic research is structured and how success is measured. Academic drug discovery should not seek to become identical to commercial pharmaceutical research, but we can do a better job of assessing and communicating the true potential of the drug leads we publish, thereby reducing the wastage of resources on nonviable compounds.
Anonymity is a topic that comes up whenever you talk about commenting on published scientific work. Some people are very uncomfortable with the idea of others being able to take potshots at them from behind convenient rocks, while others think that without that ability, a lot of relevant discussion will never take place.
Similar concerns apply to academic research grants. A big name never hurts - but what if all the names were stripped off the proposals? Many people have wondered this over the years, but now the NSF has been giving it a try:
Known as The Big Pitch and launched 2 years ago by officials in the agency's Molecular and Cellular Biosciences (MCB) Division, the effort aims to find out if making proposals anonymous—and shorter—has an impact on how they fare in the review process. “We wanted to find ways to identify transformative ideas that are getting lost in the regular peer-review process,” says Parag Chitnis, head of the MCB division. “So we asked: What would happen if we strip off the name of the PI [principal investigator] and institution and distill proposals down to just the big question or the core idea?”
What happens is a lot, according to the first two rounds of the Big Pitch. NSF's grant reviewers who evaluated short, anonymized proposals picked a largely different set of projects to fund compared with those chosen by reviewers presented with standard, full-length versions of the same proposals.
They've tried this twice, in two different research areas, each time with some 50 to 60 proposals to work with. Both times, the full-proposal rankings were almost completely different from the anonymous-pitch ones. I can see some problems with drawing conclusions here, though: for one thing, if two different teams of evaluators look over the same set of proposals (in either format), how closely do they agree? I'd like to see the NSF try that experiment - say, three different panels rating each set. And I'd include a third group, the condensed proposals with the names still on them. That might help answer several questions: how much do such panels diverge in general? Is the spread larger or smaller with the condensed proposal format? With the names stripped off? How much of the difference in rating is due to each factor?
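To make "how closely do they agree" concrete: one standard way to quantify agreement between two panels ranking the same proposals is a rank correlation. A sketch, with invented rankings (the panel data here are purely illustrative):

```python
# Spearman rank correlation between two panels' rankings of the same
# proposals. A value near 1 means the panels largely agree; near -1
# means they nearly reverse each other. Rankings here are made up.

def spearman_rho(rank_a, rank_b):
    """Spearman correlation for two rankings with no ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    where d_i is the difference in ranks for proposal i."""
    n = len(rank_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Three hypothetical panels ranking the same five proposals (1 = best):
panel_1 = [1, 2, 3, 4, 5]
panel_2 = [2, 1, 4, 3, 5]   # mostly agrees with panel 1
panel_3 = [5, 4, 3, 2, 1]   # complete reversal

print(spearman_rho(panel_1, panel_2))  # 0.8: high agreement
print(spearman_rho(panel_1, panel_3))  # -1.0: perfectly reversed
```

Running that comparison across full-length, condensed, and anonymized formats would separate how much of the divergence the NSF saw comes from panel-to-panel noise versus the format change itself.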
These ideas have occurred to the people involved, naturally:
The experiment was not designed to separate out the effect of anonymity, but it may have been a factor. In both Big Pitch rounds, reviewers evaluating the anonymous two-pagers were later told the identity of the applicants. In some cases, Chitnis says, panelists were surprised to learn that a highly rated two-pager had come from a researcher they had never heard of. In others, he notes, reviewers “thought they knew who this person is going to be” only to find that the application came from a former student of the presumed bigwig, working at a small institution.
In their next round, the NSF plans to try to sort some of these factors out. I very much hope that this sort of thing continues, though. There should be a mixture of funding mechanisms out there: programs that fund interesting people, no matter what they're working on, and ones that fund interesting ideas, no matter where they came from.
The NIH's attempt to repurpose shelved development compounds and other older drugs is underway:
The National Institutes of Health (NIH) today announced a new plan for boosting drug development: It has reached a deal with three major pharmaceutical companies to share abandoned experimental drugs with academic researchers so they can look for new uses. NIH is putting up $20 million for grants to study the drugs.
"The goal is simple: to see whether we can teach old drugs new tricks," said Health and Human Services Secretary Kathleen Sebelius at a press conference today that included officials from Pfizer, AstraZeneca, and Eli Lilly. These companies will give researchers access to two dozen compounds that passed through safety studies but didn't make it beyond mid-stage clinical trials. They shelved the drugs either because they didn't work well enough on the disease for which they were developed or because a business decision sidelined them.
There are plenty more where those came from, and I certainly wish people luck finding uses for them. But I've no idea what the chances for success might be. On the one hand, having a compound that's passed all the preclinical stages of development and has then been into humans is no small thing. On that ever-present other hand, though, randomly throwing these compounds against unrelated diseases is unlikely to give you anything (there aren't enough of them to do that). My best guess is that they have a shot in closely related disease fields - but then again, testing widely might show us that there are diseases that we didn't realize were related to each other.
Well, the NIH has recently expanded the remit of NCATS. NCATS will now be testing drugs that have been shelved by the pharmaceutical industry for other potential uses. The motivation for this is simple. They believe that these once promising but failed compounds could have other uses that the inventor companies haven’t yet identified. I’d like to reiterate the view of Dr. Vagelos – it’s fairy time again.
My views on this sort of initiative, which goes by a variety of names – “drug repurposing,” “drug repositioning,” “reusable drugs” – have been previously discussed in my blog. I do hope that people can have success in this type of work. But I believe successes are going to be rare.
The big question is, rare enough to count the money and time as wasted, or not? I guess we'll find out. Overall, I'd rather start with a compound that I know does what I want it to do, and then try to turn it into a drug (phenotypic screening). Starting with a compound that you know is a drug, but doesn't necessarily do what you want it to, is going to be tricky.
Here's a good example of phenotypic screening coming through with something interesting and worthwhile: they screened against Entamoeba histolytica, the protozoan that causes amoebic dysentery and kills tens of thousands of people every year. (Press coverage here).
It wasn't easy. The organism is an anaerobe, which is a bad fit for most robotic equipment, and engineering a decent readout for the assay wasn't straightforward, either. They did have a good positive control, though - the nitroimidazole drug metronidazole, which is the only agent approved currently against the parasite (and to which it's becoming resistant). A screen of nearly a thousand known drugs and bioactive compounds showed eleven hits, of which one (auranofin) was much more active than metronidazole itself.
Auranofin's an old arthritis drug. It's a believable result, because the compound has also been shown to have activity against trypanosomes, Leishmania parasites, and Plasmodium malaria parasites. This broad-spectrum activity makes some sense when you realize that the drug's main function is to serve as a delivery vehicle for gold (as a gold(I) complex), whose activity in arthritis is well-documented but largely unexplained. (That activity is also the basis for persistent theories that arthritis may have an infectious-disease component).
The target in this case may well be arsenite-inducible RNA-associated protein (AIRAP), which was strongly induced by drug treatment. The paper notes that arsenite and auranofin are both known inhibitors of thioredoxin reductase, which strongly suggests that this is the mechanistic target here. The organism's anaerobic lifestyle fits in with that; this enzyme would presumably be its main (perhaps only) path for scavenging reactive oxygen species. It has a number of important cysteine residues, which are very plausible candidates for binding to a metal like gold. And sure enough, auranofin (and two analogs) are potent inhibitors of the purified form of the amoeba enzyme.
The paper takes the story all the way to animal models, where auranofin completely outperforms metronidazole. The FDA has now given it orphan-drug status for amebiasis, and the way appears clear for a completely new therapeutic option in this disease. Congratulations to all involved; this is excellent work.
Mat Todd at the University of Sydney (whose open-source drug discovery work on schistosomiasis I wrote about here) has an interesting chemical suggestion. His lab is also involved in antimalarial work (here's an update, for those interested, and I hope to post about this effort more specifically). He's wondering about whether there's room for a "Molecular Craigslist" for efforts like these:
Imagine there is a group somewhere with expertise in making these kinds of compounds, and who might want to make some analogs as part of a student project, in return for collaboration and co-authorship? What about a Uni lab which might be interested in making these compounds as part of an undergrad lab course?
Wouldn’t it be good if we could post the structure of a molecule somewhere and have people bid on providing it? i.e. anyone can bid – commercial suppliers, donators, students?
Is there anything like this? Well, databases like ZINC and PubChem can help in identifying commercial suppliers and papers/patents where groups have made related compounds, but there’s no tendering process where people can post molecules they want. Science Exchange has, I think, commercial suppliers, but not a facility to allow people to donate (I may be wrong), or people to volunteer to make compounds (rather than be listed as generic suppliers). Presumably the same goes for eMolecules and Molport?
Is there a niche here for a light client that permits the process I’m talking about? Paste your Smiles, post the molecule, specifying a purpose (optional), timeframe, amount, type of analytical data needed, and let the bidding commence?
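The posting Todd describes is not much more than a small structured record plus a bidding list. A minimal sketch of what such a "Molecular Craigslist" entry might carry (all field and class names here are my own invention; a real service would validate the SMILES with a cheminformatics toolkit before accepting the post):

```python
# A sketch of a posting on a hypothetical "Molecular Craigslist":
# a compound request with the fields Todd lists (structure, purpose,
# timeframe, amount, analytical data), plus bids against it.

from dataclasses import dataclass, field

@dataclass
class CompoundRequest:
    smiles: str               # structure, as pasted by the requester
    amount_mg: float          # quantity needed
    timeframe_weeks: int      # how soon it is needed
    purpose: str = ""         # optional: why the compound is wanted
    analytical_data: list = field(default_factory=list)  # e.g. NMR, LC-MS

@dataclass
class Bid:
    bidder: str
    price_usd: float          # 0.0 for a donated synthesis
    note: str = ""

# A requester posts a molecule; suppliers and volunteer labs bid on it.
request = CompoundRequest(
    smiles="CC(=O)Oc1ccccc1C(=O)O",  # aspirin, as a stand-in example
    amount_mg=50.0,
    timeframe_weeks=8,
    purpose="antimalarial analog series",
    analytical_data=["1H NMR", "LC-MS"],
)
bids = [Bid("commercial supplier", 240.0), Bid("undergrad lab course", 0.0)]
print(min(bids, key=lambda b: b.price_usd).bidder)  # prints "undergrad lab course"
```

The interesting design questions are all in the matching, not the data model: who vets the bidders, who pays for shipping and analysis, and how co-authorship or payment gets negotiated once a bid is accepted.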
The closest thing I can think of is Innocentive, which might be pretty close to what he's talking about. It's reasonably chemistry-focused as well. Any thoughts out there?
A reader sends along this query, which I thought asked a very useful question:
". . .as a member of a growing biopharma company I am tasked with evaluating the effectiveness of industrial post-docs from both a business perspective and the post-doc's experience. Specifically, we are considering adding one for a short-term (2yr) to add headcount to a project. This adds resources without the long term commitment and also gives the scientists on site a chance for a paper they otherwise might not have time to work on. The candidate obviously gets a well-paid post-doc experience, and an industrial foot in the door. But, does this model work? I imagine that if it were that cut and dried you would see more of them."
Good point. Industrial post-docs are still relatively rare, although I've certainly seen a few. Come to think of it, though, those were mostly in biology, as opposed to chemistry. So, what do people think? From my end, I'd say that traditionally, companies have felt that temporary positions are best filled with experienced temporary employees, who presumably don't have to be trained as much. And if you're going to hire someone to learn the ropes, they might as well be good enough to be brought in as a full-time employee.
From the other end, an industrial post-doc has always been seen as less prestigious than an academic one, and there are some hiring managers who probably don't know what to think when one shows up on a c.v. There's often a feeling that if the person did a really good job during the post-doc that the company would have tried to offer them something permanent. And since they didn't, well. . .
Even so, it does seem as if there are situations where an industrial post-doc could be a good fit, and in today's job market, anything looks good. Anyone out there experienced this, from either end?
Inspired by a discussion with a colleague, I'm going to take one more crack at the recent discussion here about the J. Med. Chem. DHFR paper. Those of you with an interest in the topic, read on. Those whose interest has waned, or who never had much interest to start with, take heart: other topics are coming.
It's clear that many people were disappointed with my take on this paper, and my handling of the whole issue. Let me state again that I mishandled the biology aspects of this one thoroughly, through carelessness, and I definitely owe an apology to the authors of the paper (and to the readers of this site) for that.
Of course, that's not the only arguable thing about the way I handled this one. As I spent paragraphs rambling on about in yesterday's post, there's a chemical aspect to the whole issue as well, and that's what caught my eye to start with. I think one of the things that got me into trouble with this one is two different ways of looking at the world. I'll explain what I mean, and you can judge for yourself if I'm making any sense.
The authors of the paper (and its reviewer who commented here) are interested in D67 dihydrofolate reductase, from a biological/enzymological perspective. From this viewpoint - and it's a perfectly tenable one - the important thing is that D67 DHFR is an unusual and important enzyme, a problem in bacterial resistance, interesting in its own right as a protein with an odd binding site, and for all that, still has no known selective inhibitors. Anything that advances the understanding of the enzyme and points toward a useful inhibitor of it is therefore a good thing, and worth publishing in J. Med. Chem., too.
I come in from a different angle. As someone who's done fragment-based drug discovery and takes a professional interest in it, I'll take a look at any new paper using the technique. In this case, I gave the target much too cursory a look, and filed it as "DHFR, bacterial enzyme, soluble, X-ray structures known". In other words, a perfectly reasonable candidate for FBDD as we know it. Once I'd decided that this was a mainstream application of something I already have experience with, I turned my attention to how the fragment work was done. By doing so, I missed out on the significance of the DHFR enzyme, which means, to people in the first camp, that I whiffed on the most important part of the entire thing. I can understand their frustration as I brushed that off like a small detail and went on to what (to them) were secondary matters.
But here's where my view of the world comes in. As a drug discovery guy, when I read a paper in J. Med. Chem., I'd like to see progress in, well, the medicinal chemistry of the topic. That was the thrust of my blog post yesterday: that I found the med-chem parts of the paper uncompelling, and that the application of fragment-based techniques seemed to me to have gone completely off track. (I haven't mentioned the modeling and X-ray aspects of the paper, as Teddy Z did at Practical Fragments, but I also found those parts adding nothing to the worth of the manuscript as a whole). The most potent compounds in the paper seem, to me, to be the sort that are very unlikely to lead to anything, and are unlikely to show selectivity in a cellular environment. If the paper's starting fragment hits are real (which is not something that's necessarily been proven, as I mentioned in yesterday's post), then it seems to me that everything interesting and useful about them is being thrown away as the paper goes on. From the other point of view, things are basically the opposite - the paper gets better and better as the compounds get more potent.
But here's where, perhaps, the two viewpoints I spoke of earlier might find something in common. If you believe that the important thing is that selective inhibitors of D67 DHFR have finally been discovered, then you should want these to be as potent and selective as possible, and as useful as possible in a variety of assays. This, I think, is what's in danger of being missed. I think that a fragment-based effort should have been able to deliver much more potent chemical matter than these compounds, with less problematic structures, which are more likely to be useful as tools.
I'll finish up by illustrating the different angles as starkly as I can. The authors of this paper have, in one view of the world, completed the first-ever fragment screen against an important enzyme, discovered the first-ever selective inhibitors of it, and have published these results in a prestigious journal: a success by any standard. From my end, if I were to lead a drug discovery team against the same enzyme, I might well see the same fragment hits the authors did, since I know that some of these are in the collections I use. But if I proceeded in the same fashion they did, prosecuting these hit compounds in the same way, I would, to be completely honest about it, face some very harsh questioning. And if I persevered in the same fashion, came up with the same final compounds, and presented them as the results of my team's work, I would run the serious risk of being fired. Different worlds.
Update: Prof. Pelletier sends the following:
I certainly have been following this with interest, and learning much from it – not just science.
Throughout the week, I have appreciated your civil tone – many thanks. I willingly accept your apology, just as I accept the constructive criticism that will improve our future work. I think your ‘two-worlds’ point of view smacks of truth. The bottom line from my point of view is that I’m open to collaboration with a real fragment library: if anyone is interested in making this better, they should contact me. I’d be delighted to work with more than what can be scavenged from neighbouring labs in an academic setting.
Your bloggers’ response to this come-and-go was fascinating: the process was admired to an extent that surprised me. A number of responders point out that there are currently few occurrences of open exchange on these blogs and – sorry to disappoint hard-core bloggers – it does not endear me to the blogging process. I don’t blog because I can’t stand anonymous, frequently disrespectful and sometimes poorly researched comments. I nonetheless hope that this will open the door to a more transparent blogging process in the long run.
For any who care, I am brave, not at all desperate, and definitely a woman. ; )
If you feel any of this would be of interest for your blog, please feel free to post. Thanks for seeing this through rather than shaking it off.
We've talked about the NIH's Molecular Libraries Initiative here a few times, mostly in the context of whether it reached its goals, and what might happen now that it looks as if it might go away completely. Doesn't that make this item a little surprising?
Almost a decade ago, the US National Institutes of Health kicked off its Molecular Libraries Initiative to provide academic researchers with access to the high-throughput screening tools needed to identify new therapeutic compounds. Europe now seems keen on catching up.
Last month, the Innovative Medicines Initiative (IMI), a €2 billion ($2.6 billion) Brussels-based partnership between the European Commission and the European Federation of Pharmaceutical Industries and Associations (EFPIA), invited proposals to build a molecular screening facility for drug discovery in Europe that will combine the inquisitiveness of academic scientists with industry know-how. The IMI's call for tenders says the facility will counter “fragmentation” between these sectors.
I can definitely see the worth in that part of the initiative. Done properly, Screening Is Good. But they'll have to work carefully to make sure that their compound collection is worth screening, and to format the assays so that the results are worth looking at. Both those processes (library generation and high-throughput screening) are susceptible (are they ever) to "garbage in, garbage out" factors, and it's easy to kid yourself into thinking that you're doing something worthwhile just because you're staying so busy and you have so many compounds.
There's another part of this announcement that worries me a bit, though. Try this on for size:
Major pharmaceutical companies have more experience with high-throughput screening than do most academic institutes. Yet companies often limit tests of their closely held candidate chemicals to a fraction of potential disease targets. By pooling chemical libraries and screening against a more diverse set of targets—and identifying more molecular interactions—both academics and pharmaceutical companies stand to gain, says Hugh Laverty, an IMI project manager.
Well, sure, as I said above, Screening Is Good, when it's done right, and we do indeed stand to learn things we didn't know before. But is it really true that we in the industry only look at a "fraction of potential disease targets"? This sounds like someone who's keen to go after a lot of the tough ones; the protein-protein interactions, protein-nucleic acid interactions, and even further afield. Actually, I'd encourage these people to go for it - but with eyes open and brain engaged. The reason that we don't screen against such things as often is that hit rates tend to be very, very low, and even those are full of false positives and noise. In fact, for many of these things, "very, very low" is not distinguishable from "zero". Of course, in theory you just need one good hit, which is why I'm still encouraging people to take a crack. But you should do so knowing the odds, and be ready to give your results some serious scrutiny. If you think that there must be thousands of great things out there that the drug companies are just too lazy (or blinded by the thought of quick profits elsewhere) to pursue, you're not thinking this through well enough.
You might say that what these efforts are looking for are tool compounds, not drug candidates. And I think that's fine; tool compounds are valuable. But if you read that news link in the first paragraph, you'll see that they're already talking about how to manage milestone payments and the like. That makes me think that someone, at any rate, is imagining finding valuable drug candidates from this effort. The problem with that is that if you're screening all the thousands of drug targets that the companies are ignoring, you're by definition working with targets that aren't very validated. So any hits that you do find (and there may not be many, as said above) will still be against something that has a lot of work yet to be done on it. It's a bit early to be wondering how to distribute the cash rewards.
And if you're screening against validated targets, the set of those that don't have any good chemical matter against them already is smaller (and it's smaller for a reason). It's not that there aren't any, though: I'd nominate PTP1B as a well-defined enzymatic target that's just waiting for a good inhibitor to come along to see if it performs as well in humans as it does in, say, knockout mice. (It's both a metabolic target and a potential cancer target.) Various compounds have been advanced over the years, but it's safe to say that they've been (for the most part) quite ugly and not as selective as they could have been. People are still whacking away at the target.
So any insight into decent-looking selective phosphatase inhibitors would be most welcome. And most unlikely, damn it all, but all great drug ideas are most unlikely. The people putting this initiative together will have a lot to balance.
So the news is that Merck is now going to start its own nonprofit drug research institute in San Diego: CALIBR, the California Institute for Biomedical Research. It'll be run by Peter Schultz of Scripps, and they're planning to hire about 150 scientists (which is good news, anyway, since the biomedical employment picture out in the San Diego area has been grim).
Unlike the Centers for Therapeutic Innovation that Pfizer, a pharmaceutical company based in New York, has established in collaboration with specific academic medical centres around the country, Calibr will not be associated with any particular institution. (Schultz, however, will remain at Scripps.) Instead, academics from around the world can submit research proposals, which will then be reviewed by a scientific advisory board, says Kim. The institute itself will be overseen by a board of directors that includes venture capitalists. Calibr will not have a specific therapeutic focus.
Merck, meanwhile, will have the option of an exclusive licence on any proteins or small-molecule therapeutics to emerge. . .
They're putting up $90 million over the next 7 years, which isn't a huge amount. It's not clear if they have any other sources of funding - they say that they'll "access" such, but I have to wonder, since that would presumably complicate the IP for Merck. It's also not clear what they'll be working on out there; the press release is, well, a press release. The general thrust is translational research, a roomy category, and they'll be taking proposals from academic labs who would like to use their facilities and expertise.
So is this mainly a way for Merck to do more academic collaborations without the possible complications (for universities) of dealing directly with a drug company? Will it preferentially take on high-risk, high-reward projects? There's too little to go on yet. Worth watching with interest as it gets going - and if any readers find themselves interviewing there, please report back!
I have a reader who's in the process of moving from an industrial setting to teaching medicinal chemistry. He wanted to know if I'd ever written about that topic, and I have to say, I don't think there's been a post dedicated to it yet. I know that many people have done just this (and there are many more who are thinking about it).
So let's talk - are there others out there who've made the switch? What are some of the things to look out for? I know that this answer will vary, depending on the job and the type of academia, but it'll be worthwhile hearing some first-hand experiences. Anything from dealing with funding, to integrating your industry experience into your teaching, to the whole culture shift - comment away, and thanks!
I last wrote about the Molecular Libraries program here, as it was threatened with funding cuts. Now there's a good roundup of opinion on it here, at the SLAS. The author has looked over the thoughts of the readership here, and also heard from several other relevant figures. Chris Lipinski echoes what several commenters here had to say:
Lipinski notes that when the screening library collection began the NIH had little medicinal chemistry experience. "I was a member of an early teleconference to discuss what types of compounds should be acquired by the NIH for high-throughput screening (HTS) to discover chemical biology tools and probes. Our teleconference group was about evenly split between industry people and academics. The academics talked about innovation, thinking out of the box, maximum chemical diversity and not being limited by preconceived rules and filters. The industry people talked about pragmatism, the lessons learned and about worthless compounds that could appear active in HTS screens. The NIH was faced with two irreconcilable viewpoints. They had to pick one and they chose the academic viewpoint."
He says that they later moved away from this, with more success, but implies that quite a bit of time was lost before this happened. Now, we waste plenty of time and money in the drug industry, so I have no standing to get upset with the NIH about blind alleys, in principle. But having them waste time and money specifically on something that the drug industry could have warned them off of is another thing.
In the end, opinions divide (pretty much as you'd guess) on the worth of the whole initiative. As that link shows, its director believes it to have been a great success, while others give it more mixed reviews. Its worth has surely grown with time, though, as some earlier mistakes were corrected, and that's what seems to be worrying people: that the plug is getting pulled just when things were becoming more useful. It seems certain that several of the screening centers will not survive in the current funding environment. And what happens to their compounds then?
Here's the streaming video of the session I did at SLAS2012 on collaboration between academia and industry. I'm not sure how long it'll be up, so if you want to see it, you probably should go ahead and check it out. A lot of people probably wish they could fast-forward (and pause) me during regular working hours!
This is not the sort of academic-industry interaction I had in mind. There's a gigantic lawsuit underway between Agios and the Abramson Institute at the University of Pennsylvania, alleging intellectual property theft. There are plenty more details at PatentBaristas:
According to the complaint filed in the US District Court Southern District Of New York, the Institute was created by an agreement between The Abramson Family Foundation and the Trustees of the University of Pennsylvania. The Foundation donated over $110 Million Dollars to the Institute with the condition that the money was to be used to explore new and different approaches to cancer treatment.
Dr. Thompson later created a for-profit corporation that he concealed from the Institute. After a name change, that entity became the Defendant Agios Pharmaceuticals, Inc. Dr. Thompson did not disclose to the Institute that at least $261 million had been obtained by Agios for what was described as its “innovative cancer metabolism research platform” – i.e., the description of Dr. Thompson’s work at the Institute. Dr. Thompson did not disclose that Agios was going to sell to Celgene Corporation an exclusive option to develop any drugs resulting from the cancer metabolism research platform.
Three people with knowledge of Dr. Thompson’s version of events, two of whom would speak only on condition of anonymity because of the litigation, said that the University of Pennsylvania knew about Dr. Thompson’s involvement with Agios and even discussed licensing patents to the company, though no agreement was reached.
“When you start a company like this, you want to try to dominate the field,” said Lewis C. Cantley, another founder of Agios and the director of the cancer center at the Beth Israel Deaconess Medical Center in Boston. “The goal was to get as many patents as possible, and it was frustrating that we weren’t able to get any from Penn.”
Michael J. Cleare, executive director of Penn’s Center for Technology Transfer, declined to discuss whether negotiations had been held but said, “Yes, Penn knew about Agios.”
So, as the lawyers over at PatentBaristas correctly note, this is all going to come down to what happened when. And that's going to be determined during the discovery process - emails, meeting minutes, memos, text messages, whatever can establish who told what to whom. If there's something definitive, the whole case could end up being dismissed (or settled) before anything close to a trial occurs - in fact, that would be my bet. But that's assuming that something definite was transferred at all:
A crucial question, some patent law and technology transfer specialists said, could be whether Dr. Thompson provided patented technology to Agios or merely insights.
“If somebody goes out and forms a company and doesn’t take patented intellectual property — only brings knowledge, know-how, that sort of thing — we wouldn’t make any claims to it,” said Lita Nelsen, director of the technology licensing office at the Massachusetts Institute of Technology.
In its complaint, the Abramson institute does not cite any specific patents. It says Penn did not pursue the matter because Dr. Thompson had told the university that his role in Agios did not involve anything subject to the university’s patent policies. The lawsuit says the institute did not find out about Dr. Thompson’s role in Agios until late 2011.
There will probably be room to argue about what was transferred, which could get expensive. That claim of not finding out about Agios until 2011, though, can't be right, since Dr. Thompson is mentioned all over the company's press releases and meeting presentations at least two years before that. But no matter how this comes out, this is not the way to build trust. Not quite.
So, what questions should be asked? I've been asked to take part in a panel discussion ("Bridging the Valley of Death") at the upcoming Society for Laboratory Automation and Screening conference in San Diego. It's a session moderated by Bill Janzen from the University of North Carolina and Michelle Palmer from the Broad Institute, and the panelists are John Luk from the National University of Singapore, Rudy Juliano from UNC, Mao Mao from Pfizer (San Diego), Alan Palkowitz from Eli Lilly, and John Reed from Sanford-Burnham.
The discussion will be live-streamed (I'll put up the link that day), so if you're interested in that sort of thing, tune in. And as it says here, questions will be gathered "through social media sites, expert opinions and audience participation". And since this is one of those social media sites, more or less, I'd like to do some preparation by asking the question that I led off this post with. What would you like to see asked? What are the biggest issues and stumbling blocks? What should this audience get from all this?
Feel free to add suggestions in the comments, which are much appreciated. I'll run up some Twitter hashtags as the event gets closer, as well as keeping an eye on this post. Thanks!
With all the recent talk about the NIH's translational research efforts, and the controversy about their drug screening efforts, this seems like a good time to note this interview with Francis Collins over at BioCentury TV. (It's currently the lead video, but you'll be able to find it in their "Show Guide" afterwards as well).
Collins says that they're not trying to compete with the private sector, but taking a look at the drug development process "the way an engineer would", which takes me back to this morning's post re: Andy Grove. One thing he emphasizes is that he believes that the failure rate is too high because the wrong targets are being picked, and that target validation would be a good thing to improve.
He's also beating the drum for new targets to come out of more sequencing of human genomes, but that's something I'll reserve judgment on. The second clip has some discussion of the DARPA-backed toxicology chip and some questions on repurposing existing drugs. The third clip talks about the FDA's role in all this, and tries to clarify what NIH's role would be in outlicensing any discoveries. (Collins also admits along the way that the whole NCATS proposal has needed some clarifying as well, and doesn't sound happy with some of the press coverage).
Part 5 (part 4 is just a short wrap-up) discusses the current funding environment, and then moves into ethics and conflicts of interest - other people's conflicts, I should note. Worth a lunchtime look!
Science is reporting some problems with the NIH's drug screening efforts:
A $70-million-a-year program launched 7 years ago at the National Institutes of Health (NIH) to help academic researchers move into industry-style drug discovery may soon be forced to scale back sharply. NIH Director Francis Collins has been one of its biggest champions. But the NIH Molecular Libraries, according to plan, must be weaned starting next year from the NIH director's office Common Fund and find support at other NIH institutes. In a time of tight budgets, nobody wants it.
The fate of the Molecular Libraries program became “an extremely sensitive political issue” earlier this year when NIH realized it would not be easy to find a new home for the program, said one NIH official speaking on background. . .
. . .John Reed, head of the Sanford-Burnham Medical Research Institute screening center in San Diego, which receives about $16 million a year from the Common Fund, says his center has so far attracted only modest funding from drug companies. He expressed frustration with the Common Fund process. “NIH has put a huge investment into [the Molecular Libraries], and it's running very well,” he says. “If there's not a long-term commitment to keep it available to the academic community, why did we make this hundreds of millions of dollars investment?”
Good question! This all grew out of the 2003 "NIH Roadmap" initiative - here's a press release from better days. But it looks partly to be a victim of sheer bad timing. There's not a lot of extra money sloshing around the drug industry these days, and there sure isn't a lot in NIH's budget, either. You wouldn't know that there's a problem at all from looking at the program's web site, would you?
Since I know there are readers out there from both sides of this particular fence, I'd be interested in hearing some comments. Has the screening initiative been worthwhile? Should it be kept up - and if so, how?
I have just enough time today to link to this - which is simultaneously a nasty prank to pull on someone, and (for anyone who's been to grad school), completely hilarious. A message went out over a mail server list in Europe, after a post-doc position in Germany had been posted. It, um, clarified the nature of the position:
I am desperately searching for eager victims - postdocs or PhD students - mine or other supervisors' - to make my workhorses and to plunder ideas from. . .I cannot do research myself because I'm narrow-minded, rigid-brained, and petty. Therefore, I have to recruit desperate scientists from anywhere in the world and then manage (harangue) them into submission. The smarter you are relative to me, the more I will hate you. . .
It goes on in that vein for a while, winding up with the usual boilerplate legal language: "I am entitled to success because supremacy is my birthright". Read it, cast your mind back to your own grad student/post-doc days, and imagine the temptation to do the same!
Here's another article in the Guardian that makes some very good points about the way we judge scientific productivity by published papers. My favorite line of all: "To have "written" 800 papers is regarded as something to boast about rather than being rather shameful." I couldn't have put it better, and I couldn't agree more. And this part is just as good:
Not long ago, Imperial College's medicine department were told that their "productivity" target for publications was to "publish three papers per annum including one in a prestigious journal with an impact factor of at least five." The effect of instructions like that is to reduce the quality of science and to demoralise the victims of this sort of mismanagement.
The only people who benefit from the intense pressure to publish are those in the publishing industry.
Working in industry feels like more of a luxury than ever when I hear about such things. We have our own idiotic targets, to be sure - but the ones that really count are hard to argue with: drugs that people will pay us money for. Our customers (patients, insurance companies, what have you) don't care a bit about our welfare, and they have no interest in keeping our good will. But they pay us money anyway, if we have something to offer that's worthwhile. There's nothing like a market to really get you down to reality.
So, are half the interesting new results in the medical/biology/med-chem literature impossible to reproduce? I linked earlier this year to an informal estimate from venture capitalist Bruce Booth, who said that this was his (and others') experience in the business. Now comes a new study from Bayer Pharmaceuticals that helps put some backing behind those numbers.
To mitigate some of the risks of such investments ultimately being wasted, most pharmaceutical companies run in-house target validation programmes. However, validation projects that were started in our company based on exciting published data have often resulted in disillusionment when key data could not be reproduced. Talking to scientists, both in academia and in industry, there seems to be a general impression that many results that are published are hard to reproduce. However, there is an imbalance between this apparently widespread impression and its public recognition. . .
Yes, indeed. The authors looked back at the last four years' worth of oncology, women's health, and cardiovascular target validation efforts inside Bayer (this would put it right after they combined with Schering AG of Berlin). They surveyed all the scientists involved in early drug discovery in those areas, and had them tally up the literature results they'd acted on and whether they'd panned out or not. I should note that this is the perfect place to generate such numbers, since the industry scientists are not in it for publication glory, grant applications, or tenure reviews: they're interested in finding drug targets that look like they can be prosecuted, in order to find drugs that could make them money. You may or may not find those to be pure or admirable motives (I have no problem at all with them, personally!), but I think we can all agree that they're direct and understandable ones. And they may be a bit orthogonal to the motives that led to the initial publications. . .so, are they? The results:
"We received input from 23 scientists (heads of laboratories) and collected data from 67 projects, most of them (47) from the field of oncology. This analysis revealed that only in ~20–25% of the projects were the relevant published data completely in line with our in-house findings. In almost two-thirds of the projects, there were inconsistencies between published data and in-house data that either considerably prolonged the duration of the target validation process or, in most cases, resulted in termination of the projects. . ."
So Booth's estimate may actually have been too generous. How does this gap get so wide? The authors suggest a number of plausible reasons: small sample sizes in the original papers, leading to statistical problems, for one. The pressure to publish in academia has to be a huge part of the problem - you get something good, something hot, and you write that stuff up for the best journal you can get it into - right? And it's really only the positive results that you hear about in the literature in general, which can extend so far as (consciously or unconsciously) publishing just on the parts that worked. Or looked like they worked.
But the Bayer team is not alleging fraud - just irreproducibility. And it seems clear that irreproducibility is a bigger problem than a lot of people realize. But that's the way that science works, or is supposed to. When you see some neat new result, your first thought should be "I wonder if that's true?" You may have no particular reason to doubt it, but in an area with as many potential problems as discovery of new drug targets, you don't need any particular reasons. Not all this stuff is real. You have to make every new idea perform the same tricks in front of your own audience, on your own stage under bright lights, before you get too excited.
Since I don't have to write NSF grants, I haven't had to wrestle with "Criterion 2". But ask anyone in academic science about it. The first criterion is intellectual merit, as it darn well should be. Here's the NSF's own description (in full):
How important is the proposed activity to advancing knowledge and understanding within its own field or across different fields? How well qualified is the proposer (individual or team) to conduct the project? (If appropriate, the reviewer will comment on the quality of prior work.) To what extent does the proposed activity suggest and explore creative, original, or potentially transformative concepts? How well conceived and organized is the proposed activity? Is there sufficient access to resources?
But the second criterion, while initially worthy-sounding, invites trouble. It's "What are the broader impacts of the proposed activity?" Here's more description:
How well does the activity advance discovery and understanding while promoting teaching, training, and learning? How well does the proposed activity broaden the participation of underrepresented groups (e.g., gender, ethnicity, disability, geographic, etc.)? To what extent will it enhance the infrastructure for research and education, such as facilities, instrumentation, networks, and partnerships? Will the results be disseminated broadly to enhance scientific and technological understanding? What may be the benefits of the proposed activity to society?
To me, that puts the most important question last, and even that one can be hard to answer. As for the rest, this would seem to be an open invitation to insert all sorts of nice-sounding boilerplate, or to just start making things up. The NSF itself seems to have realized this, and has been working on a revised version of this language, but here's a column from Dan Sarewitz that says that "Criterion 2.1" isn't a bit better than the old one:
At the heart of the new approach is "a broad set of important national goals". Some address education, training and diversity; others highlight institutional factors ("partnerships between academia and industry"); yet others focus on the particular goals of "economic competitiveness" and "national security". The new Criterion 2 would require that all proposals provide "a compelling description of how the project or the [principal investigator] will advance" one or more of the goals.
The nine goals seem at best arbitrary, and at worst an exercise in political triangulation. . .Yet, more troubling than the goals themselves is the problem of democratic legitimacy. In applying Criterion 2, peer-review panels will often need to choose between projects of equal intellectual merit that serve different national goals. Who gave such panels the authority to decide, for example, whether a claim to advance participation of minorities is more or less important than one to advance national security?
. . .Motivating researchers to reflect on their role in society and their claim to public support is a worthy goal. But to do so in the brutal competition for grant money will yield not serious analysis, but hype, cynicism and hypocrisy.
One of the comments to that article points out that this isn't the NSF's fault, in a way, because this exact language was mandated by Congress. And so it is - take a look at Section 526 of Title V of H.R. 5116, the "America Creating Opportunities to Meaningfully Promote Excellence in Technology, Education, and Science Reauthorization Act of 2010". There's all the same language. Not only that, but the Act directs the NSF to assign people and funds to evaluating how well all these "Broader Impact" measurements are going. The director, within six months, is supposed to have implemented a policy that:
. . .requires principal investigators applying for Foundation research grants to provide evidence of institutional support for the portion of the investigator's proposal designed to satisfy the Broader Impacts Review Criterion, including evidence of relevant training, programs, and other institutional resources available to the investigator from either their home institution or organization or another institution or organization with relevant expertise.
So, in case you've lost track, the NSF is supposed to train people to implement a policy that requires grant applicants to show that their institutions are training people to implement a policy that requires grant applicants to show evidence that their work involves training people to implement a policy. I think I've got that right. A greater invitation to bullshit I cannot picture.
The NIH has, it appears, been getting quite sensitive about conflicts of interest. There have been some rather ugly scenes involving ghostwritten articles (and entire books), and NIH director Francis Collins has said that the agency's guidelines are in the process of being revised.
You'd have thought that the existing ones would have banned that sort of thing, anyway. And in fact, it seems as if many scientists at the NIH already find the rules too restrictive. From the original paper that looked into this:
Eighty percent of respondents believed the NIH ethics rules were too restrictive. Whereas 45% of respondents believed the rules positively impacted the public's trust in the NIH, 77% believed the rules hindered the NIH's ability to complete its mission.
The problem, as so often happens, is whether your goal is to look good or to do your job, and you don't want to solve that conflict by redefining your job as just to look good all the time.
The reason I'm talking about all this is that I've heard of instances where people from NIH have refused (or felt as if they have had to refuse) invitations to give talks in industrial settings, because they feared conflict-of-interest problems. This seems perverse, especially for an agency that's talking about getting heavily into translational drug research. That'll have to lead to numerous contacts with industry, I think, in order to be much good at all. So how will the NIH manage that if the drug industry is seen as contaminating their Purity of Essence?
We've talked quite a bit around here about academic (and nonindustrial) drug discovery, but those posts have mostly divided into two parts. There's the early-stage discovery work that really gets done in some places, and then there's the proposal for the big push into translational research by the NIH. That, broadly defined, is (a) the process of turning an interesting idea into a real drug target, or (b) turning an interesting compound into a real drug. One of the things that the recent survey of academic centers made clear, I'd say, is that the latter kind of work is hardly being done at all outside of industry. The former is a bit more common, but still suffers from the general academic bias: walking away too soon in order to move on to the next interesting thing. Both these translational processes involve a lot of laborious detail work, of the kind that does not mint fresh PhDs nor energize the post-docs.
But if there's funding to do it, it'll get done in some fashion, and we can expect to see a lot of people trying their hand at these things. Many universities are all for it, too, since they imagine that there will be some lucrative technology transfers waiting at the end of the process. (One of the remarkable things about the drug industry is how many people outside it see it as the place to get rich).
I had an e-mail from Jonathan Gitlin on this subject, who asks the question: if academia is going to do these things, what should they be doing to keep the money from being wasted? It's definitely worth thinking about, since there are so many drains for the money to go spiraling down. Mind you, most money spent on these things is (in the most immediate sense) wasted, since most ideas for drug targets turn out to be mistaken, and most compounds turn out not to be drugs. No matter what, we're going to have to be braced for that - even strong improvements in both those percentages would still leave us with what (to people with fresh eyes) would seem horrific failure rates.
And what I'd really like is for people to avoid the "translational research fallacy", as I've called it. That's the (seemingly pervasive) idea that there are just all sorts of great ideas for new drugs and new targets just gathering dust on university shelves, waiting for some big drug company to get around to noticing them. That, unfortunately, does not seem to be true, but it's a tempting idea, and I worry that people are going to be unable to resist chasing after it.
But that said, where would be the best place for the academic money to go? I have a few nominees. If we're breaking things down by therapeutic area, one of the most intractable and underserved is central nervous system disease. I note that there's already talk of a funding crisis in this area (although that article is more focused on Europe). It may come as a surprise to people outside medical research, but we still have very little concrete knowledge of what goes on in the brain during depression, schizophrenia, and other illnesses. That, unfortunately, is not for lack of trying. Looked at from the other end, we know vastly more than we used to, but it's still nowhere near enough.
If we're looking at general translational platforms and ideas, then I would suggest trying to come up with solid small-organism models for phenotypic screening. A good phenotypic screen, where you run compounds past a living system to see which ones give you the effects you want, can be a wonderful thing, since it doesn't depend on you having to unravel all the biochemistry behind a disease process. (It can, in fact, reveal biochemistry that you never knew existed). But good screens of this type are rare, outside of the infectious disease area, and are tricky to validate. Everyone would love to have more of them - and if an academic lab can come up with one, then those folks can naturally have first crack at screening a compound collection past them.
More suggestions welcome in the comments - it looks like this is going to happen, so perhaps we can at least seed this newly plowed field with something that we'd like to see when it sprouts.
Nature Reviews Drug Discovery has an interesting survey of academic drug discovery (summary at SciBx here). The authors were motivated, they say, by the large number of opinions and impressions about this topic, with a corresponding lack of actual data - I think they've done everyone a service.
What they found was 78 centers of academic drug discovery (in one form or another) in the US. Cancer and infectious diseases are the most widely worked-on, but tropical and orphan diseases make a strong showing (and I'm glad to see this; they should). Another interesting stat: "49% of targets being investigated are based on unique discoveries that had little validation in the literature".
But when we say "drug discovery", we should really be saying "very early stage drug discovery", with little or no actual development to follow it up. The technologies that these centers report having are almost entirely in the early part of the pipeline - screening, in vitro assay, target ID. Capacity for hit-to-lead chemistry is claimed by 72% of the centers that responded (70% response rate), which, the authors say, shows that ". . .the integration of chemistry into (academic drug discovery) centers has progressed considerably". On the other hand, only half report the ability to do in vivo assays, and less than half can do any metabolism and/or pharmacokinetics. For those who don't do this sort of thing for a living, it's worth pointing out that these functions (all of which are valuable) still only take you to the stage where you can say that you're really getting started.
So what stage are these academic projects at? Assay development and screening, for the most part - even those places with PK capabilities and the like don't have much at all in that stage yet, which, the authors say, reflects the fact that most of these centers haven't been operating for very long. (32 of the 56 centers that provided a founding date gave one between 2003 and 2008). And I particularly enjoyed this paragraph:
"Questions regarding comparisons between academic and industrial drug discovery evoked intense and informative responses. Academia was perceived to be much stronger than industry in disease biology expertise and innovation, and was considered to be better aligned with societal goals. . . By contrast, industry was perceived to be much stronger in assay development and screening, and particularly in medicinal chemistry."
I would really enjoy seeing some of the more intense responses! But a very large divide between academia and industry is apparent when the respondees were asked about their centers' priorities. Number 3 was generating intellectual property, but number one? Publications. Half of the centers say that only a quarter of their staff (or less) have industrial experience, but my impression is that these numbers are shifting rapidly - for one thing, a lot of good, experienced people from industry are becoming much more available than they ever thought they'd be.
It's also important to realize that most of this work is being done on a very modest scale. When asked about funding and expenditures, you see a long-tail distribution. A handful of centers report total expenditures in the low tens of millions, but 57% of the responding centers report $2 million or less. I'm not sure if that's per year, or total since the centers were founded, to be honest, but either way, it's not much money at all by the standards of drug research, even the early-stage stuff. Looked at another way, though, if much comes out of these efforts at all, they'll have been cost-effective for sure.
But at that point, they're facing the same problems that the rest of us do. The SciBx piece quotes Bruce Booth, whose blog I link to here regularly. And he's right on target:
“At the end of the day, it's not typically the initial chemical matter that plagues a startup spinning out of academia. Instead it's the validity of the initial biologic hypothesis and whether the biology is relevant to disease"
The Supreme Court has ruled on the Roche - Stanford case that I blogged about here. In short, the dispute centered on the Bayh-Dole act (on commercializing academic research) and sought to clarify under what circumstances university collaborators signed over the rights to their discoveries. (That makes the case sound quite calm and removed from worldly concerns, but you'll see from that earlier post that it was actually nothing of the sort!)
As I and many others had predicted, Roche prevailed. The justices upheld the ruling (7 to 2) from the Court of Appeals for the Federal Circuit that the Stanford researcher(s) involved had indeed signed over rights to Roche, and that this assignment was compatible with existing law. Here's the decision (PDF). Among the key points:
1. Stanford contended that if an invention had been realized with federal funding (NIH, etc.), that the Bayh-Dole Act automatically assigned it to the university involved. The Court noted that there are, in fact, situations where patent rights are treated this way, but that this language is conspicuously missing from Bayh-Dole. Accordingly, the invention belongs to the inventor, until the inventor assigns the rights to it. And in this case, like it or not, the Stanford post-doc involved signed things over to Cetus (as was). This inventorship business goes for industry as well, of course - one of the key pieces of paper that you sign when you join a drug company assigns the rights to whatever inventions you come up with (on company time, and with its resources) to the company. If you don't sign, you don't have a job. And on the flip side, just being employed is not enough for a company to claim an invention - there has to be an explicit statement to that effect.
Here's Justice Roberts on this point:
Stanford’s contrary construction would permit title to an employee’s inventions to vest in the University even if the invention was conceived before the inventor became an employee, so long as the invention’s reduction to practice was supported by federal funding. It also suggests that the school would obtain title were even one dollar of federal funding applied toward an invention’s conception or reduction to practice. It would be noteworthy enough for Congress to supplant one of the fundamental precepts of patent law and deprive inventors of rights in their own inventions. To do so under such unusual terms would be truly surprising. . .
You might be wondering if this argument bears on the contentions of people who claim that hey, it's all NIH money in the end, so drug companies do nothing but leech off public money, right? Why yes, yes it does. Justice Breyer (joined by Justice Ginsburg) dissents, saying that the intent of Bayh-Dole is to commercialize research, and that not having title automatically assign to the university (or other recipient of federal funding) undercuts this substantially. There's a lot of talk in the dissent about the background of the act, about its real intentions, and about how it's supposed to work. And I can see the force of those arguments - but to me, they don't overcome the fact that if Congress wanted Bayh-Dole to work that way, they could have written it that way. And, in fact, they still can, if they decide that this decision illuminates a flaw that they'd like to address. Until then, though, I feel safer with the statutory language that's in there already, and how it compares to other, similar laws.
Here's an interesting note from the Wall Street Journal's Health Blog. I can't summarize it any better than they have:
"When former NIH head Elias Zerhouni ran the $30 billion federal research institute, he pushed for so-called translational research in which findings from basic lab research would be used to develop medicines and other applications that would help patients directly.
Now the head of R&D at French drug maker Sanofi, Zerhouni says that such “bench to bedside” research is more difficult than he thought."
And all across the industry, people are muttering "Do tell!" In fairness to Zerhouni, he was, in all likelihood, living in sort of a bubble at NIH. There probably weren't many people around him who'd ever actually done this sort of work, and unless you have, it's hard to picture just how tricky it is.
Zerhouni is now pushing what he calls an "open innovation" model for Sanofi-Aventis. The details of this are a bit hazy, but it involves:
". . .looking for new research and ideas both internally and externally — for example, at universities and hospitals. In addition, the company is focusing on first understanding a disease and then figuring out what tools might be effective in treating it, rather than identifying a potential tool first and then looking for a disease area in which it could be helpful."
Well, I don't expect to see Sanofi's whole strategy laid out in the press, but that one doesn't sound as novel as it's made out to be. The "first understanding a disease" part sounds like what Novartis has been saying for some time now - and honestly, it really is one of the things that we need, but that understanding is painfully slow to dawn. Look at, oh, Alzheimer's, to pick one of those huge unmet medical needs that we'd really like to address in this business.
With a lot of these things, if you're going to first really understand them, you could have a couple of decades' wait on your hands, and that's if things go well. More likely, you'll end up doing what we've been doing: taking your best shot with what's known at the moment and hoping that you got something right. Which leads us to the success rates we have now.
On the other hand, maybe Zerhouni should just call up Marcia Angell or Donald Light, so that they can set him straight on the real costs of drug R&D. Why should we listen to a former head of the NIH who's now running a major industrial research department, when we can go to the folks who really know what they're talking about, right? And I'd also like to know what he thinks of Francis Collins' plan for a new NIH translational research institute, too, but we may not get to hear about that. . .
The "Opinionator" blog at the New York Times is trying here, but there's something not quite right. David Bornstein, in fact, gets off on the wrong foot entirely with this opening:
Consider two numbers: 800,000 and 21.
The first is the number of medical research papers that were published in 2008. The second is the number of new drugs that were approved by the Food and Drug Administration last year.
That’s an ocean of research producing treatments by the drop. Indeed, in recent decades, one of the most sobering realities in the field of biomedical research has been the fact that, despite significant increases in funding — as well as extraordinary advances in things like genomics, computerized molecular modeling, and drug screening and synthesization — the number of new treatments for illnesses that make it to market each year has flatlined at historically low levels.
Now, "synthesization" appears to be a new word, and it's not one that we've been waiting for, either. "Synthesis" is what we call it in the labs; I've never heard of synthesization in my life, and hope never to again. That's a minor point, perhaps, but it's an immediate giveaway that this piece is being written by someone who knows nothing about their chosen topic. How far would you keep reading an article that talked about mental health and psychosization? A sermon on the Book of Genesization? Right.
The point about drug approvals being flat is correct, of course, although not exactly news by now. But comparing it to the total number of medical papers published that same year is bizarre. Many of these papers have no bearing on the discovery of drugs, not even potentially. Even if you wanted to make such a comparison, you'd want to run the clock back at least twelve years to find the papers that might have influenced the current crop of drug approvals. All in all, it's a lurching start.
Things pick up a bit when Bornstein starts focusing on the Myelin Repair Foundation as an example of current ways to change drug discovery. (Perhaps it's just because he starts relaying information directly that he's been given?) The MRF is an interesting organization that's obviously working on a very tough problem - having tried to make neurons grow and repair themselves more than once in my career, I can testify that it's most definitely nontrivial. And the article tries to make a big distinction between the way that they're funding research as opposed to the "traditional NIH way".
The primary mechanism for getting funding for biomedical research is to write a grant proposal and submit it to the N.I.H. or a large foundation. Proposals are reviewed by scientists, who decide which ones are most likely to produce novel discoveries. Only a fraction get funded and there is little encouragement for investigators to coordinate research with other laboratories. Discoveries are kept quiet until they are published in peer-reviewed journals, so other scientists learn about them only after a delay of years. In theory, once findings are published, they will be picked up by pharmaceutical companies. In practice, that doesn’t happen nearly as often as it should.
Now we're back to what I'm starting to think of as the "translational research fallacy". I wrote about that here; it's the belief that there are all kinds of great ideas and leads in drug discovery that are sitting on the shelf, because no one in the industry has bothered to take a look. And while it's true that some things do slip past, I'm really not sure that I can buy into this whole worldview. My belief is that many of these things are not as immediately actionable as their academic discoverers believe them to be, for one thing. (And as for the ones that clearly are, those are worth starting a company around, right?) There's also the problem that not all of these discoveries can even be reproduced.
Bornstein's article does get it right about this topic, though:
What’s missing? For a discovery to reach the threshold where a pharmaceutical company will move it forward what’s needed is called “translational” research — research that validates targets and reduces the risk. This involves things like replicating and standardizing studies, testing chemicals (potentially millions) against targets, and if something produces a desired reaction, modifying compounds or varying concentration levels to balance efficacy and safety (usually in rats). It is repetitive, time consuming work — often described as “grunt work.” It’s vital for developing cures, but it’s not the kind of research that will advance the career of a young scientist in a university setting.
“Pure science is what you’re rewarded for,” notes Dr. Barres. “That’s what you get promoted for. That’s what they give the Nobel Prizes for. And yet developing a drug is a hundred times harder than getting a Nobel Prize. . .
That kind of research is what a lot of us spend all our days doing, and there's plenty of work to fill them. As for developing a drug being harder than getting a Nobel Prize, well, apples and oranges, but there's something to it, still. The drug will cost you a lot more money along the way, but with the potential of making a lot more at the end. Bornstein's article goes off the rails again, though, when he says that companies are reluctant to go into this kind of work when someone else owns the IP rights. That's technically true, but overall, the Bayh-Dole Act on commercialization of academic research (despite complications) has brought many more discoveries to light than it's hindered, I'd say. And he's also off base about how this is the reason that drug companies make "me too" compounds. No, it's not because we don't have enough ideas to work on, unfortunately. It's because most of them (and more over the years) don't go anywhere.
Bornstein's going to do a follow-up piece focusing more on the Myelin Repair people, so I'll revisit the topic then. What I'm seeing so far is an earnest, well-meaning attempt to figure out what's going on with drug discovery - but it's not a topic that admits of many easy answers. That's a problem for journalists, and a problem for those of us who do it, too.
Nature News has a big article on the "Too Many PhDs" problem, which we've discussed several times around here:
In some countries, including the United States and Japan, people who have trained at great length and expense to be researchers confront a dwindling number of academic jobs, and an industrial sector unable to take up the slack. Supply has outstripped demand and, although few PhD holders end up unemployed, it is not clear that spending years securing this high-level qualification is worth it. . .
The piece looks at several different countries, each with its own set of problems. Japan seems to be in just awful shape as far as doctorates go; it makes the situation over here look not so bad. China, for its part, is cranking out zillions of fresh PhD holders these days, but (as the article is quite frank about) many of them aren't worth much. That isn't stopping them from getting jobs (for now), but it's something to worry about.
And we all know the picture here in the US. But this article doesn't, to my mind, do as good a job as it should. Mention is made of the problems in the pharma/biotech/life sciences industries, but all the hard numbers refer to academic positions. Looking at this graph, you'd think that academia was the main destination for all PhDs, all the time - after all, that's all that's over in the right-hand box. (I'll leave aside the poor graphic design. The same colors mean completely different things in each of those three graphs, which means that you're constantly having to tell your brain not to draw the conclusions it's trying to draw).
The article also details conditions in Germany, Poland, Egypt, and India. About the latter, I have to wonder if they're facing the same quality-control problems that China has. The good people there are quite good, but there are plenty of others. I occasionally get unsolicited e-mails from PhD candidates (or finished doctorates) from the more obscure Indian universities. They're either seeking a job, with apparently no idea who I am other than some guy with an e-mail address, or seeking advice on some aspect of chemistry that (it seems to me) they should have mastered long since. . .
There's an interesting follow-up over at SciBX to Bruce Booth's piece on the reproducibility of academic research. Booth, in his position as a venture capital purse-string holder, advocated caution and careful verification of exciting academic discoveries before starting the company-formation process.
The SciBX folks followed up with him and with several other VCs. Booth sticks to his position, and says that his firm, Atlas Venture, has allocated money to allow CROs to do reality checks on the new ideas that they see. Daphne Zohar at PureTech Ventures takes a similar line, but says that they do this sort of work with the originators of the technology, giving it a quiet shakedown before talking to investors. They do use CROs when appropriate, though.
On the other end of the spectrum, though, you have Camille Samuels at Versant Ventures:
“I think the best way to prevent yourself from funding biotechs that have a faulty scientific basis is to develop a trusting relationship with the scientific founders,” she told SciBX. “I think that starting a productive, long-term business relationship is hard to do if you use a ‘guilty before proven innocent’ approach.”
Samuels favors vetting the science with a top-notch scientific advisory team before launching a company. “If you hire great scientists to the company you will uncover the ‘over-reaching’ before you’ve spent any real money,” she noted.
I'm not so sure about that myself. While I agree that a good relationship between the VC people and the founding scientists is crucial, I think that any such relationship worthy of the name should be able to stand up to this sort of review. Everyone involved should be wise enough to realize this, and not take it personally. "Guilty until proven innocent", after all, is not such a bad attitude when you're looking at something that's interesting enough to trigger millions of dollars worth of investment. If the idea or technology is strong enough for real money, it's strong enough to handle a good shaking - and if it isn't, you'd want to know that as early as possible.
And to be honest, isn't it the same attitude that greets any big new discovery when it hits the literature? When some hot news comes out in a competitive field, the first thought of all the outside teams is "I wonder if that's real?" A big name or a trusted institution will buy a bit more benefit of the doubt, but not much, as well it shouldn't. I'm willing to believe that interesting results from a reliable research group are probably true, but I'll only put them in the "solid" category when I've seen someone else reproduce them (or have done it myself). That's science.
One morning back in 1989, a guy from Stanford visited the biotech company Cetus and signed a few forms. That action has gradually become the central issue in a nasty patent dispute that's dragged on for years. Roche (who bought Cetus in 1991) and Stanford have been fighting it out through the judicial system, and earlier this year they made their cases before the Supreme Court, who will probably deliver a decision next month. So how did a quick signature turn into all this?
This article in Science has a good summary of the details (here's another). What seems to have happened was that Thomas Merigan at Stanford sent a postdoc, Mark Holodniy, over to Cetus to learn about their PCR technology. Holodniy signed an agreement to respect Cetus' intellectual property, the standard sort of thing - you'd think. But that's the problem. Ten years later, Stanford (building on work from the Merigan lab and its collaboration with Cetus) received patents on a method to quantify viral RNA in human serum, which turned into a useful assay for monitoring HIV. Roche began to sell kits to do just that in 1996, and starting in 2000, Stanford started pressing them to pay licensing fees to the university.
Roche didn't, so Stanford sued, and Roche then claimed that the Stanford patents were invalid anyway. We'll get back to that question, but the rest of the court cases have turned on a different matter: what exactly did Holodniy sign away, and was he alone bound by that agreement, or did it extend to the whole Merigan lab and to Stanford? A district court said that the Bayh-Dole act (which among other things prevents university researchers from cutting patent deals independent of the university) won out, and that Holodniy's Cetus form, which said that he was assigning patent rights to Cetus, was therefore invalid. But the Court of Appeals for the Federal Circuit completely reversed that, and said that Holodniy's agreement (when he was hired) to assign patents to Stanford was just a promise for the future ("I agree to assign. . ."), whereas the Cetus agreement took force immediately ("I do hereby assign. . .") and therefore took priority. And thus to the Supreme Court.
Academia (and the US Solicitor General) have lined up on Stanford's side, and industry on Roche's, as anyone could have foreseen. If Roche wins, say the former, then no university research group will want to work with industry. If Stanford wins, say the latter, then no corporation will want to work with academia. Here's a hard-core legal summary from the Cornell law school. Their conclusion:
. . .the Supreme Court will decide whether the Bayh-Dole Act precludes an inventor working on a federally funded project from assigning his ownership rights in the invention to a third party. Stanford argues that both the Act and public policy considerations require that research institutions get an exclusive opportunity to patent their employees’ creations. Stanford contends that, if research institutions did not receive this privilege, they would hesitate to pursue costly and time-consuming research projects. Roche, on the other hand, argues that the Bayh-Dole Act did not affect the longstanding rule allowing inventors to assign their ownership rights to third parties. Constitutional and equitable considerations, Roche asserts, caution against Stanford’s interpretation of the Act.
My guess is that Roche will probably win, and that academic/industry collaboration will continue anyway, but under even more strictly defined rules. MIT, for example, has already changed its patent assignment forms to the present tense, in a sign that they think that this argument has validity (even though the university has sided with Stanford in this case). One thing that's been lost in all the dust is whether this whole question had to come up. If Stanford's patents were to have been invalidated (another case in itself), then the whole Bayh-Dole argument would have been a moot point. None of the later legal wrangling has addressed this point. As often happens in the courtroom and on the battlefield, the armies end up fighting for larger stakes (and in a different place) than anyone would have predicted at first.
Venture-capital guy Bruce Booth has a provocative post, based on experience, about how reproducible those papers are that make you say "Someone should try to start a company around that stuff".
The unspoken rule is that at least 50% of the studies published even in top tier academic journals – Science, Nature, Cell, PNAS, etc… – can’t be repeated with the same conclusions by an industrial lab. In particular, key animal models often don’t reproduce. This 50% failure rate isn’t a data free assertion: it’s backed up by dozens of experienced R&D professionals who’ve participated in the (re)testing of academic findings. This is a huge problem for translational research and one that won’t go away until we address it head on.
Why such a high failure rate? Booth's own explanation is clearly the first one to take into account - that academic labs live by results. They live by publishable, high-impact-factor-journal results, grant-renewing tenure-application-supporting results. And it's not that there's a lot of deliberate faking going on (although there's always a bit of that to be found), as much as there is wishful thinking and running everything so that it seems to hang together just well enough to get the paper out. It's a temptation for everyone doing research, especially tricky cutting-edge stuff that fails a lot of the time anyway. Hey, it did work that time, so we know that it's real - those other times it didn't go so smoothly, well, we'll figure out what the problems were with those, but for now, let's just write this stuff up before we get scooped. . .
Even things that turn out to be (mostly) correct often aren't that reproducible, at least, not enough to start raising money for them. Booth's advice for people in that situation is to check things out very carefully. If the new technology is flaky enough that only a few people can get it to work, it's not ready for the bright lights yet.
He also has some interesting points on "academic bias" versus "pharma bias". You hear a lot about the latter, to the point that some people consider any work funded by the drug industry to be de facto tainted. But everyone has biases. Drug companies want to get compounds approved, and to sell lots of them once that happens. Academic labs want to get big, impressive publications and big, impressive grants. The consequences of industrial biases and conflicts of interest can be larger, but if you're working back at the startup stage, you'd better keep an eye on the academic ones. We both have to watch ourselves.
Recent advances in neuroscience offer unprecedented opportunities to discover new treatments for nervous system disorders. However, most promising compounds identified through basic research are not sufficiently drug-like for human testing. Before a new chemical entity can be tested in a clinical setting, it must undergo a process of chemical optimization to improve potency, selectivity, and drug-likeness, followed by pre-clinical safety testing to meet the standards set by the Food and Drug Administration (FDA) for clinical testing. These activities are largely the domain of the pharmaceutical industry and contract research organizations, and the necessary expertise and resources are not commonly available to academic researchers.
To enable drug development by the neuroscience community, the NIH Blueprint for Neuroscience Research is establishing a ‘virtual pharma’ network of contract service providers and consultants with extensive industry experience. This Funding Opportunity Announcement (FOA) is soliciting applications for U01 cooperative agreement awards from investigators with small molecule compounds that could be developed into clinical candidates within this network. This program intends to develop drugs from medicinal chemistry optimization through Phase I clinical testing and facilitate industry partnerships for their subsequent development. By initiating development of up to 20 new small-molecule compounds over two years (seven projects were launched in 2011), we anticipate that approximately four compounds will enter Phase 1 clinical trials within this program.
My first thought is that I'd like to e-mail that first paragraph to Marcia Angell and to all the people who keep telling me that NIH discovers most of the drugs on the market. (And as crazy as that sounds, I still keep running into people who are convinced that that's one of those established facts that Everyone Knows). My second thought is that this is worth doing, especially for targeting small or unusual diseases. There could well be interesting chemical matter or assay ideas floating around out there, looking for the proper environment to have something made of them.
My third thought, though, is that this could well end up being a real education for some of the participants. Four Phase I compounds out of twenty development candidates - it's hard to say if that's optimistic or not, because the criteria for something to be considered a development candidate can be slippery. And that goes for the drug industry too, I hasten to add. Different organizations have different ideas about what kinds of compounds are worth taking to the clinic, and those criteria vary by disease area, too. (Sad to say, they can also vary by time of the year and the degree to which bonuses are tied to hitting number-of-clinical-candidate goals, and anyone who's been around the business a while will have seen that happen, to their regret).
It'll be interesting to see how many people apply for this; the criteria look pretty steep to me:
Applicants must have available small-molecule compounds with strong evidence of disease-related activity and the potential for optimization through iterative medicinal chemistry. Applicants must also be able to conduct bioactivity and efficacy testing to assess compounds synthesized in the development process and provide all pre-clinical validation for the desired disease indication. . .This initiative is not intended to support development of new bioactivity assays, thus the applicant must have in hand well-characterized assays and models.
Hey, there are small companies out there that don't come up to that standard. To clarify, though, the document does say that "Evaluation of the approach should focus primarily on the rationale and strengths/weaknesses of proposed bioactivity studies and compound "druggability," since all other drug development work (e.g., medicinal chemistry, PK/tox, phase I clinical testing) will be designed and implemented by NIH-provided consultants and contractors after award", which must come as something of a relief.
What's interesting to me, though, is that the earlier version of this RFA (from last year) had the following language:
The ultimate goals of this Neurotherapeutics Grand Challenge are to produce at least one novel and effective drug for a nervous system disorder that is currently poorly treated and to catalyze industry interest in novel disease targets by demonstrating early-stage success.
That's missing this time around, which is a good thing. If they're really hoping for a drug to come out of four Phase I candidates in poorly-treated CNS disorders, then I'd advise them to keep that thought well hidden. The overall attrition rate in the clinic in CNS is somewhere around (and maybe north of) 90%, and if you're going to go after the tough end of that field it's going to be even steeper.
Here's a good article ("Academia Faces PhD Overload") via Genomeweb on the academic post-doc situation in the sciences, which we were last discussing here. (Thanks to Jonathan Gitlin on Twitter for noting it). That was in response to a Nature News piece calling for more "permanent postdoc" positions, which I doubted would actually happen.
But perhaps it is - take a look at this part:
Since there aren't enough tenure-track jobs for every PhD who has taken one, two, or even three-plus postdocs, "there's a finite number of postdocs who cannot anymore be a postdoc, and so they [often] stay at the same institution and become appointed to the research faculty," Chalkley says. As a result of the postdoc surplus, "the numbers in the research faculty ranks have increased in the last decade," he adds.
As research faculty are not eligible for tenure themselves, their positions depend largely on their PI, who generally is. Non-tenure-track faculty are "dependent upon the person running a lab and their funding," Chalkley says, adding that the risk for research faculty, who are "almost invariably on soft money," is real. For example, should a PI decide to move to another institution, he or she might be reluctant to take research faculty along; instead, he or she could save start-up funds for the new lab by hiring postdocs in place of research instructors.
With no practical solutions to the postdoc surplus problem on the horizon, Minnesota's Levitt predicts this hiring trend will persist for some time. "Every school is going to be hiring a higher and higher fraction of non-tenure-track [faculty]," he says.
But as the article says elsewhere, no one is claiming that this is going to be especially good for the people being hired under these circumstances, except as an alternative to being out on the sidewalk. One extra reason for this whole demographic difficulty (which has always been with us to some degree) was the big increase in the NIH budget from 1998 to 2003, which led to a corresponding bulge in the population of grad students, and then of postdocs:
According to the National Science Foundation's most recent Survey of Earned Doctorates statistics, American institutions awarded 49,562 total doctorates in 2009 — the most ever reported by NSF — of which 25,836 were in the sciences. Of life sciences doctorate recipients who indicated definite post-graduation employment commitments in 2009, nearly three-quarters said they'd accepted postdoc appointments. In an InfoBrief report, NSF notes that "2009 marked the largest single-year increase in the proportion of doctorate recipients taking postdoc positions during the 2004-2009 period."
And this just in time for a whacking economic downturn, which has severely cut into the industrial job possibilities. Still, there's a discussion in this article on getting people to look outside academia for their future, but the attitude that I mentioned in my last post on this topic is still a problem:
Nearly half of all respondents to NYU's most recent annual postdoc satisfaction survey — 47 percent — indicated a career goal of becoming tenure-track faculty.
"I think there's a bigger need for information on jobs outside of academia," Micoli says. There's a growing awareness in the research community that PhDs who choose careers in industry or other academic alternatives are not failing as scientists — but that sentiment has not yet penetrated the walls of the ivory tower, he adds.
Hey, these days, landing a good industrial job is very far indeed from failing. . .
Here's a call to make something different out of the postdoctoral position. Says Jennifer Rohn in Nature News:
". . .we should professionalize the postdoc role and turn it into a career rather than a scientific stepping stone.
Consider the scientific community as an ecosystem, and it is easy to see why postdocs need another path. The system needs only one replacement per lab-head position, but over the course of a 30–40-year career, a typical biologist will train dozens of suitable candidates for the position. The academic opportunities for a mature postdoc some ten years after completing his or her PhD are few and far between. . .
The scientific enterprise is run on what economists call the 'tournament' model, with practitioners pitted against one another in bitter pursuit of a very rare prize. Given that cheap and disposable trainees — PhD students and postdocs — fuel the entire scientific research enterprise, it is not surprising that few inside the system seem interested in change. . .Few academics could afford to warn trainees against entering the ring — if they frightened away their labour force, research would grind to a halt.
Her proposed solution is to reduce the numbers of people being trained as graduate students, and staff up some permanent non-lab-head research positions. We'll debate the merits of that idea in just a moment, but right off, I have a hard time seeing how this could (or would) ever be adopted. Basically, it's asking academic research departments to act against what they see as their own interests. Those relatively cheap workers that you bring in every year, push along, and move out the door? Why don't you replace them with more expensive people who never leave?
No, even if too many people are going through graduate programs, I think that the only way to see real changes is for the people responsible to believe that those changes are desirable - that they're something they want to do, something that's beneficial for them. If the current system can trundle along, taking in fresh students and excreting PhDs, then it probably will continue doing just that. The whole academic research system runs on bringing in grant money (and its overhead), and for that you need bodies in the lab. Bodies generate results, and results are what you need for grant renewals, which give you money to hire more bodies as the earlier crop leaves.
Leaves for what? Well, "when the rocket goes up, who cares where it comes down?" What the graduate students (and postdocs) go on to is, from the university's perspective, not really their problem. And that's why I don't see this proposal going anywhere: it's asking the academic research establishment to do something for the postdocs of the world, to which the answer will be an eloquent indifference.
OK, even if it's not going to happen, should it (in some other world)? Actually, in several labs I've known, it already does. I think many of us have seen "perpetual postdocs", people who just seem to hang around the labs forever, acting as right-hand-assistants to the boss. To be honest, I've always seen the situation these people are in as sort of sad, but compared to unemployment, I suppose not.
But that brings up another aspect of this proposal - its near-total academocentricity. Read it, and you'd never get the idea that there's anything outside the university research environment. The whole point of life is to become a lab head, bringing in the grant money and taking on graduate students. Right? This is the world view of someone who's been in academia too long (or at least bought too thoroughly into its culture). There are places to do research outside of the ivy-covered walls. Not as many of them as there were a few years ago, true, and that's another whopper of a problem, one that gets discussed around here with great frequency. The traditional answer to "I can't find a faculty position" has been "Go and find a job, then". If that part of the ecosystem is permanently broken, then post-docs have even more trouble than the Nature column is imagining. . .
I have tried several times to get my hands around what NIH head Francis Collins is talking about here (note: open-access article), but I now admit defeat. Allow me to quote a bit, and we'll see if anyone else out there has more luck:
We have seen a deluge of new discoveries in the last few years on the molecular basis of disease. . .(But despite) increasing investments by the private sector, there has been a downturn in the number of approved new molecular entities over the last few years. Also, drug development research remains very expensive and the failure rate is extremely high.
Perhaps in part responding to these factors, and to the downturn in the economy, pharmaceutical companies have cut back their investments in research and development. We can't count on the biotech community to step in and fill that void either, because they are hurting from an absence of long-term venture capital support. So, we have this paradox: we have a great opportunity to develop truly new therapeutic approaches, but are undergoing a real constriction of the pipeline. One solution is to come up with a non-traditional way of fostering drug development — through increased NIH involvement.
Hmm. I may have missed the deluge that he's talking about, but we'll set that concern aside. What might this "non-traditional way" look like? Collins again:
I like to think of this in a broad sense of “what kind of paradigm can we initiate and expand between academic researchers and the private sector to move the therapeutic agenda forward?” . . .By having the NIH more engaged in the pipeline, we can also ask whether we can improve the success rates of drug development. . .We need to re-engineer the process, with a lot more focus on the front end.
Right! Another thick block of wobbling gelatin. Let's see, we're going to get the NIH engaged, and, um, give them the tools, and re-engineer things, and oh yeah, focus. Definitely going to focus. Any more details to add?
There are a lot of moving parts to this set of resources that ultimately need to be synthesized into a smooth process. One of my goals over the next year is to try to identify ways to put these together into a more seamless enterprise.
Good to hear. Please, those of you with access to (see above) Nature Reviews Drug Discovery, where this interview appeared, take a look and see if you can condense anything more out of it than I did. I mean, King Lear had a more concrete plan of action than this one: "I will do such things - what they are, yet I know not, but they shall be the terrors of the earth."
Update: an NRDD editor has let me know that the interview is open access. He also points out that the piece was done before the official announcement of the NCATS idea. My take is while that might account for a bit of the fuzziness, everything I've seen since then has been similarly soft-focus. . .
I wrote here the other day about the NIH's new translational medicine plans. The New York Times article that brought this to wide attention didn't go over well with director Francis Collins, who ended up trying to disabuse people of the idea that the NIH was going to set up its own drug company.
But there's been an overwhelming negative response from the academic research community, largely driven (it seems) by worries about funding. Given the state of the budget, flat funding would be seen as a victory by NIH, so this isn't the best environment to be talking about putting together a great new institute. The money for it will, after all, have to come out of someone else's pile. Collins spends most of that statement linked above denying this, but it's hard to see how there won't be problems.
I think, though, that there's an even more fundamental problem here. In the latest BioCentury, there's an interesting sidelight on all this:
In comments submitted to NIH, Joseph Zaia, associate director of the Center for Biomedical Mass Spectrometry at the Boston University School of Medicine, argued against setting timetables for research results. “I do not believe that running medical science on a short sighted business time schedule will produce more cures faster. It will, however, deplete NIH resources very rapidly and possibly tear down an infrastructure of knowledge that took decades to create.”
Zaia complained that the NCATS “process seems to be driven by the FasterCures movement sponsored by Michael Milken,” which he said has “been masterful in manipulating the political system for their purposes, and forcing NIH into this reorganization.”
FasterCures’ Margaret Anderson, executive director of the non-profit group that advocates for accelerating medical innovation, submitted a letter strongly endorsing NCATS, which she said “will provide a significant stimulus to moving ideas out of the lab and into the clinic.”
And that's the problem. Over the last few years, an idea has taken hold that there are all kinds of great ideas for all kinds of diseases that no one is doing anything with. Now, I'm not going to claim that everyone is trying every single thing that could possibly be tried, but I really don't see how there's this treasure chest of great discoveries that aren't being followed up on. Drug companies of all sizes are always watching for such opportunities - I've been a part of many such efforts to jump on these as they show up.
My guess is that many of these advocates have a different definition of what a "great discovery" is than I do. There are all kinds of things that come out in the literature, often with breathless press releases from the university PR office, that make it sound like the latest JBC paper has the cure for cancer in it. But the huge majority of these things don't pan out, generally because they're just part of a much, much larger (and more complicated) story. And that's why things tend to fail on the way to (and through) the clinic.
Am I exaggerating? Well, many advocates in this area have taken to using the phrase "valley of death" to describe the gap between basic research and success in the clinic. Here's Amy Rick of the Parkinson's Action Network:
Rick said patient groups are concerned that the valley of death is growing, and they want government to help bridge it. The prospect that there are “good discoveries that are basically collecting dust” is “terrifying to patients,” she said.
“What we are finding from a patient perspective is that discoveries that are being made in very exciting basic research are not being acted upon,” Rick told BioCentury This Week. “They are not moving through the pipeline. So the patient community is pushing very hard — if private money isn’t filling that space, the government should be moving some of its funding into that space.”
I have a great deal of sympathy for the patient population - they're our customers in this business, after all, and any one of us could join their ranks at any time. (Drug company researchers come down with all the maladies that everyone else does). But the patient population is not the group of people discovering and developing drugs. What looks like agonizingly slow progress from outside is often just the best that can be done. It can be hard to imagine how crazy, complex, and frustrating medical research can be unless you've tried doing it. Nothing else quite compares.
I worry that some of these people have an unrealistic view of how things work (or should work). This all reminds me of Andrew Grove, ex-Intel, and his complaints that the drug research business wasn't moving as fast as the semiconductor industry. It sure isn't. That's because it's a lot harder.
The Biocentury article is right in line with my thinking here:
FASEB’s Talman argues that patient groups and the public are overly optimistic about the breakthroughs that could be made by shifting resources to translational science. He believes basic scientists are partly to blame because “there is too much of a tendency for basic or clinical scientists to sell our work.” In the process, he said, “we can come across as saying that the newest discovery can lead to a cure.”
Senior NIH officials have contributed to the belief that cures are around the corner by dangling the prospect of quick payoffs in front of congressional appropriators. For example, in 1999, Gerald Fischbach, then director of the National Institute of Neurological Diseases and Stroke, told a Senate committee that with sufficient funding it was reasonable to expect a cure for Parkinson’s disease within five years. The NINDS budget has increased from $902 million in FY99 to $1.6 billion in FY10, but PD hasn’t been cured.
Starting in 2004, National Cancer Institute Director Andrew von Eschenbach claimed in numerous public speeches that it would be possible to “end suffering and death from cancer by 2015,” a claim that current NCI Director Harold Varmus has repudiated.
When he led the human genome sequencing effort, NIH Director Collins himself made comments that the press, public and politicians interpreted as promising that it would directly and quickly lead to new medicines for common diseases.
“There is a real danger of over-promising,” Keith Yamamoto, executive vice dean of the University of California San Francisco School of Medicine, told BioCentury. “Scientists too often take an intellectual short cut. They think they will not be able to explain the nuances of why basic discovery takes so long, so they just say if you give me the money we are about to cure the disease.”
He added: “That’s thin ice — it is our responsibility to explain why things are as difficult as they are.”
It sure is. I know that patients and the general public get tired of hearing about how it's hard, how discoveries take time, all that sort of thing, while the diseases just keep marching on and on. But it's all true. I honestly don't think that most people realize, despite the huge amounts of knowledge we've managed to accumulate, just how little we know about what we're doing.
I've been meaning to comment on the NIH's new venture into drug discovery, the National Center for Advancing Translational Sciences. Curious Wavefunction already has some thoughts here, and I share his concerns. We're both worried about the gene-o-centric views of Francis Collins, for example:
Creating the center is a signature effort of Dr. Collins, who once directed the agency’s Human Genome Project. Dr. Collins has been predicting for years that gene sequencing will lead to a vast array of new treatments, but years of effort and tens of billions of dollars in financing by drug makers in gene-related research has largely been a bust.
As a result, industry has become far less willing to follow the latest genetic advances with expensive clinical trials. Rather than wait longer, Dr. Collins has decided that the government can start the work itself.
“I am a little frustrated to see how many of the discoveries that do look as though they have therapeutic implications are waiting for the pharmaceutical industry to follow through with them,” he said.
Odd how the loss of tens of billions of dollars - and vast heaps of opportunity cost along the way - will make people reluctant to keep going. And where does this new center want to focus in particular? The black box that is the central nervous system:
Both the need for and the risks of this strategy are clear in mental health. There have been only two major drug discoveries in the field in the past century; lithium for the treatment of bipolar disorder in 1949 and Thorazine for the treatment of psychosis in 1950.
Both discoveries were utter strokes of luck, and almost every major psychiatric drug introduced since has resulted from small changes to Thorazine. Scientists still do not know why any of these drugs actually work, and hundreds of genes have been shown to play roles in mental illness — far too many for focused efforts. So many drug makers have dropped out of the field.
So if there are far too many genes for focused efforts (a sentiment with which I agree), what, exactly, is this new work going to focus on? Wavefunction, for his part, suggests not spending so much time on the genetic side of things and working, for example, on one specific problem, such as Why Does Lithium Work for Depression? Figuring that out in detail would have to tell us a lot about the brain along the way, and boy, is there a lot to learn.
Meanwhile, Pharmalot links to a statement from the industry trade group (PhRMA) which is remarkably vapid. It boils down to "research heap good", while beating the drum a bit for the industry's own efforts. And as an industrial researcher myself, it would be easy for me to continue heaping scorn on the whole NIH-does-drug-discovery idea.
But I actually wish them well. There really are a tremendous number of important things that we don't know about this business, and the more people working on them, the better. You'd think. What worries me, though, is that I can't help but believe that a good amount of the work that's going to be done at this new center will be misapplied. I'm really not so sure that the gene-to-disease-target paradigm just needs more time and money thrown at it, for example. And although there will be some ex-industry people around, the details of drug discovery are still likely to come as a shock to the more academically oriented people.
Put simply, the sorts of discoveries and projects that make stellar academic careers, that get into Science and Nature and all the rest of them, are still nowhere near what you need to make an actual drug. It's an odd combination of inventiveness and sheer grunt work, and not everyone's ready for it. One likely result is that some people will just avoid the stuff as much as possible and spend their time and money doing something else that pleases them more.
What do I think that they should be doing, then? One possibility is the Pick One Big Problem option that Wavefunction suggests. What I'd recommend would also go against the genetic tracery stuff: I'd put money into developing new phenotypic assays in cells, tissues, and whole animals. Instead of chasing into finer and finer biochemical details in search of individual targets, I'd try to make the most realistic testbeds of disease states possible, and let the screening rip on that. Targets can be chased down once something works.
But it doesn't sound like that's what's going to happen. So, reluctantly, I'll make a prediction: if years of effort and billions of dollars thrown after genetic target-based drug discovery hasn't worked out, when done by people strongly motivated to make money off their work, then an NIH center focused on the same stuff will, in all likelihood, add very little more. It's not like they won't stay busy. That sort of work can soak up all the time and money that you can throw at it. And it will.
A reader sent this along to me, and I figured that many folks who are in (or have been through) academia can relate. This is the Hui Zheng lab at Baylor, with their Gaga-esque production of. . .Bad Project:
Congratulations to them. It's a good thing that there was no YouTube back when I was in that position, or I might have gotten myself in a lot of trouble. . .
The latest post in the week-long blog roundtable on chemistry jobs is up over at Chembark, and it looks at the academic side: is tenure useful? If so, do its disadvantages outweigh the benefits? What would happen if we ditched it (and could we)?
Here's an interesting question from a reader in academia. At his institution, they're thinking about rewriting the introductory organic lab syllabus. "Rather than put what the faculty would like to see in it", he writes, "what would your readers like to see in it?"
The questions he raises include these: What organic chemistry lab basics should non-majors be sure to get? And which ones should the chemistry majors have for their advanced courses to build on? What kinds of experiments should be included (and what classics are ready to be dropped?) And which sort of lab curriculum trains people better - the "discovery"-oriented type, or the "cookbook" type?
Add your thoughts in the comments below. I don't know what specific experiments are common in undergraduate labs these days, so I'll let those who do comment on the details. My take on the last question is that the course should probably start in more of a cookbook fashion, to get everyone's fingers wet, but finish up with some sort of parallel-synthesis or method-finding exercise, where everyone gets a chance to do something different and make a small exploration along the way.
I've been reading an interesting new paper from Stuart Schreiber's research group(s) in PNAS. But I'm not sure if the authors and I would agree on the reasons that it's interesting.
This is another in the series that Schreiber has been writing on high-throughput screening and diversity-oriented synthesis (DOS). As mentioned here before, I have trouble getting my head around the whole DOS concept, so perhaps that's the root of my problems with this latest paper. In many ways, it's a companion to one that was published earlier this year in JACS. In that paper, he made the case that natural products aren't quite the right fit for drug screening, which fit with an earlier paper that made a similar claim for small-molecule collections. Natural products, the JACS paper said, were too optimized by evolution to hit targets that we don't want, while small molecules are too simple to hit a lot of the targets that we do. Now comes the latest pitch.
In this PNAS paper, Schreiber's crew takes three compound collections: 6,152 small commercial molecules, 2,477 natural products, and 6,623 from academic synthetic chemistry (with a preponderance of DOS compounds), for a total of 15,252. They run all of these past a set of 100 proteins using their small-molecule microarray screening method, and look for trends in coverage and specificity. What they found, after getting rid of various artifacts, was that about 3400 compounds hit at least one protein (and if you're screening 100 proteins, that's a perfectly reasonable result). But, naturally, these hits weren't distributed evenly among the three compound collections. 26% of the academic compounds were hits, and 23% of the commercial set, but only 13% of the natural products.
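As a quick sanity check on those numbers, here's a sketch in Python. Note that the per-collection hit counts below are back-calculated from the rounded percentages quoted above, not taken from the paper's supporting information:

```python
# Collection sizes from the paper, paired with the rounded hit rates quoted above.
collections = {
    "commercial":      (6152, 0.23),
    "natural product": (2477, 0.13),
    "academic/DOS":    (6623, 0.26),
}

total_compounds = sum(size for size, _ in collections.values())
est_hits = sum(round(size * rate) for size, rate in collections.values())

print(total_compounds)  # 15252
print(est_hits)         # 3459 -- consistent with the "about 3400 compounds" quoted
```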
Looking at specificity, it appears that the commercial compounds were more likely, when they hit, to hit six or more different proteins in the set, and the natural products the least. Looking at it in terms of compounds that hit only one or two targets gave a similar distribution - in each case, the DOS compounds were intermediate, and that turns out to be a theme of the whole paper. They analyzed the three compound collections for structural features, specifically their stereochemical complexity (chiral carbons as a percent of all carbons) and shape complexity (sp3 carbons as a percent of all carbons). And that showed that the commercial set was biased towards the flat, achiral side of things, while the natural products were the other way around, tilted toward the complex, multiple-chiral-center end. The DOS-centric screening set was right in the middle.
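Those two descriptors are simple ratios. Here's a minimal sketch in plain Python, working from pre-tallied atom counts; in practice a cheminformatics toolkit would derive the counts from the structures, and the example tallies below are invented to illustrate the three archetypes, not taken from the paper:

```python
def shape_complexity(n_sp3_carbons, n_carbons):
    """Shape complexity: sp3 carbons as a fraction of all carbons."""
    return n_sp3_carbons / n_carbons

def stereochemical_complexity(n_stereocenters, n_carbons):
    """Stereochemical complexity: chiral carbons as a fraction of all carbons."""
    return n_stereocenters / n_carbons

# Invented tallies:               (sp3 C, stereocenters, total C)
examples = {
    "commercial (flat, achiral)":  (2, 0, 16),
    "natural product (complex)":   (20, 9, 27),
    "DOS (intermediate)":          (10, 3, 18),
}
for name, (sp3, stereo, total) in examples.items():
    print(f"{name}: shape={sp3 / total:.2f}, stereo={stereo / total:.2f}")
```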
The take-home, then, is similar to the other papers mentioned above: small-molecule collections are inadequate; natural product collections are inadequate; therefore, you need diversity-oriented synthesis compounds, which are just right. I'll let Schreiber sum up his own case:
. . .Both protein-binding frequencies and selectivities are increased among compounds having: (i) increased content of sp3-hybridized atoms relative to commercial compounds, and (ii) intermediate frequency of stereogenic elements relative to commercial (low frequency) and natural (high frequency) compounds. Encouragingly, these favorable structural features are increasingly accessible using modern advances in the methods of organic synthesis and commonly targeted by academic organic chemists as judged by the compounds used in this study that were contributed by members of this community. On the other hand, these features are notably deficient in members of compound collections currently widely used in probe- and drug-discovery efforts.
But something struck me while reading all this. The two metrics used to characterize these compound collections are fine, but they're also two that would be expected to distinguish them thoroughly - after all, natural products do indeed have a lot of chiral carbons, and run-of-the-mill commercial screening sets do indeed have a lot of aryl rings in them. There were several other properties that weren't mentioned at all, so I downloaded the compound set from the paper's supporting information and ran it through some in-house software that we use to break down such things.
I can't imagine, for example, evaluating a compound collection without taking a look at the molecular weights. Here's that graph - the X axis is the compound number, Y-axis is weight in Daltons:
The three different collections show up very well this way, too. The commercial compounds (almost every one under 500 MW) are on the left. Then you have that break of natural products in the middle, with some real whoppers. And after that, you have the various DOS libraries, which were apparently entered in batches, which makes things convenient.
Notice, for example, that block of them standing up around compound number 15,000 - that turns out to be the compounds from this 2004 Schreiber paper, which are a bunch of gigantic spirooxindole derivatives. In this paper, they found that this particular set was an outlier in the academic collection, with a lot more binding promiscuity than the rest of the set (and they went so far as to analyze the set with and without it included). The earlier paper, though, makes the case for these compounds as new probes of cellular pathways, but if they hit across so many proteins at the same time, you have to wonder how such assays can be interpreted. The experiments behind these two papers seem to have been run in the wrong order.
Note, also, that the commercial set includes a lot of small compounds, even many below 250 MW. This is down in the fragment screening range, for sure, and the whole point of looking at compounds of that molecular weight is that you'll always find something that binds to some degree. Downgrading the commercial set for promiscuous binding when you set the cutoffs that low isn't a fair complaint, especially when you consider that the DOS compounds have a much lower proportion of compounds in that range. Run a commercial/natural product/DOS comparison controlled for molecular weight, and we can talk.
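The sort of MW-controlled comparison I'm asking for is straightforward to sketch. Here's a hedged, stdlib-only illustration, assuming a hypothetical data layout where each record is (collection, molecular weight, hit-or-not); binning by molecular weight before comparing hit rates removes the size confound:

```python
from collections import defaultdict

def hit_rate_by_mw_bin(records, bin_width=100):
    """Hit rate per (collection, MW bin).

    `records` is a hypothetical list of (collection, mol_weight, is_hit)
    tuples. Comparing collections bin-by-bin means a fragment-sized
    commercial compound is only judged against other fragment-sized
    compounds, not against 800-Dalton DOS molecules.
    """
    counts = defaultdict(lambda: [0, 0])  # key -> [hits, total]
    for coll, mw, hit in records:
        key = (coll, int(mw // bin_width) * bin_width)
        counts[key][0] += bool(hit)
        counts[key][1] += 1
    return {k: hits / total for k, (hits, total) in counts.items()}

records = [
    ("commercial", 210, True), ("commercial", 230, False),
    ("DOS", 480, True), ("DOS", 450, True),
    ("natural", 950, False),
]
print(hit_rate_by_mw_bin(records))
```

Only within-bin comparisons would tell you whether the promiscuity differences survive once molecular weight is held roughly constant.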
I also can't imagine looking over a collection and not checking logP, but that's not in the paper, either. But here you are:
In this case, the natural products (around compound ID 7500) are much less obvious, but you can certainly see the different chemical classes standing out in the DOS set. Note, though, that those compounds explore high-logP regions that the other sets don't really touch.
How about polar surface area? Now the natural products really show their true character - looking over the structures, that's because there are an awful lot of polysaccharide-containing things in there, which will run your PSA up faster than anything:
And again, you can see the different libraries in the DOS set very clearly.
So there are a lot of other ways to distinguish these compounds, ways that (to be frank) are probably much more relevant to their biological activity. Just the molecular-weight one is a deal-breaker for me, I'm afraid. And that's before I start looking at the structures in the three collections at all. Now, that's another story.
I have to say, from my own biased viewpoint, I wouldn't pay money for any of the three collections. The natural product one, as mentioned, goes too high in molecular weight and is too polar for my tastes. I'd consider it for antibiotic drug discovery, but with gritted teeth. The commercial set can't make up its mind if it's a fragment collection or not. There are a bunch of compounds that are too small even for my tastes in fragments - 4-methylpyridine, for example. And there are a lot of ugly functional groups: imines of beta-naphthylamine, which should not even get near the front door (unstable fluorescent compounds that break down to a known carcinogen? Return to sender). There are hydroxylamines, peroxides, thioureas, all kinds of things that I would just rather not spend my time on.
And what of the DOS collection? Well, to be fair, not all of it is DOS - there are a few compounds in there that I can't figure out, like isoquinoline, which you can buy from the catalog. But the great majority are indeed diversity-oriented, and (to my mind), diversity-oriented to a fault. The spirooxindole library is probably the worst - you should see the number of aryl rings decorating some of those things; it's like a fever dream - but they're not the only offenders in the "Let's just hang as many big things as we can off this sucker" category. Now, there are some interesting and reasonable DOS compounds in there, too, but there are also more endoperoxides and such. (And yes, I know that there are drug structures with endoperoxides in them, but damned few of them, and art is long while life is short). So no, I wouldn't have bought this set for screening, either; I'd have cherry-picked about 15 or 20% of it.
Summary of this long-winded post? I hate to say it, but I think this paper has its thumb on the scale. I'm just around the corner from the Broad Institute, though, so maybe a rock will come through my window this afternoon. . .
The same paper I was summarizing the other day has some interesting data on the 1998-2007 drug approvals, broken down by country and region of origin. The first thing to note is that the distribution by country tracks, quite closely, the corresponding share of the worldwide drug market. The US discovered nearly half the drugs approved during that period, and accounts for roughly that amount of the market, for example. But there are two big exceptions: the UK and Switzerland, which both outperform for their size.
In case you're wondering, the league tables look like this: the US leads in the discovery of approved drugs, by a wide margin (118 out of the 252 drugs). Then Japan, the UK and Germany are about equal, in the low 20s each. Switzerland is next at 13, France at 12, and then the rest of Europe put together adds up to 29. Canada and Australia put together add up to nearly 7, and the entire rest of the world (including China and India) is about 6.5, with most of that being Israel.
But while the US may be producing the number of drugs you'd expect, a closer look shows that it's still a real outlier in several respects. The biggest one, to my mind, comes when you use that criterion for innovative structures or mechanisms versus extensions of what's already been worked on, as mentioned in the last post. Looking at it that way, almost all the major drug-discovering countries in the world were tilted towards less innovative medicines. The only exceptions are Switzerland, Canada and Australia, and (very much so) the US. The UK comes close, running nearly 50/50. Germany and Japan, though, especially stand out as the kings of follow-ons and me-toos, and the combined rest-of-Europe category is nearly as unbalanced.
What about that unmet-medical-need categorization? Looking at which drugs were submitted here in the US for priority review by the FDA (the proxy used across this whole analysis), again, the US-based drugs are outliers, with more priority reviews than not. Only in the smaller contributions from Australia and Canada do you see that, although Switzerland is nearly even. But in both these breakdowns (structure/mechanism and medical need) it's the biotech companies that appear to have taken the lead.
And here's the last outlier that appears to tie all these together: in almost every country that discovered new drugs during that ten-year period, the great majority came from pharma companies. The only exception is the US: 60% of our drugs have the fingerprints of biotech companies on them, either alone or from university-derived drug candidates. In very few other countries do biotech-derived drugs make much of a showing at all.
These trends show up in sales as well. Only in the US, UK, Switzerland, and Australia did the per-year-sales of novel therapies exceed the sales of the follow-ons. Germany and Japan tend to discover drugs with higher sales than average, but (as mentioned above) these are almost entirely followers of some sort.
Taken together, it appears that the US biotech industry has been the main driver of innovative drugs over the past ten years. I don't want to belittle the follow-on compounds, because they are useful. (As pointed out here before, it's hard for one of those compounds to be successful unless it really represents some sort of improvement over what's already available). At the same time, though, we can't run the whole industry by making better and better versions of what we already know.
And the contributions of universities - especially those in the US - have been strong, too. While university-derived drugs are a minority, they tend to be more innovative, probably because of their origins in basic research. There's no academic magic involved: very few, if any, universities try deliberately to run a profitable drug-discovery business - and if any start to, I confidently predict that we'll see more follow-on drugs from them as well.
Discussing the reasons for all this is another post in itself. But whatever you might think about the idea of American exceptionalism, it's alive in drug discovery.
We can now answer the question: "Where do new drugs come from?". Well, we can answer it for the period from 1998 on, at any rate. A new paper in Nature Reviews Drug Discovery takes on all 252 drugs approved by the FDA from then through 2007, and traces each of them back to their origins. What's more, each drug is evaluated by how much unmet medical need it addressed and how scientifically innovative it was. Clearly, there's going to be room for some argument in any study of this sort, but I'm very glad to have it, nonetheless. Credit where credit's due: who's been discovering the most drugs, and who's been discovering the best ones?
First, the raw numbers. In the 1998-2007 period, the 252 drugs break down as follows. Note that some drugs have been split up, with partial credit being assigned to more than one category. Overall, we have:
58% from pharmaceutical companies.
18% from biotech companies.
16% from universities, transferred to biotech.
8% from universities, transferred to pharma.
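Back-of-the-envelope, those percentages translate into approximate drug counts, assuming they're shares of the 252 total (partial credit makes the rounding inexact):

```python
# Hypothetical back-of-envelope: convert the published percentage
# shares back into approximate drug counts out of the 252 approvals.
total = 252
shares = {
    "pharma": 0.58,
    "biotech": 0.18,
    "university -> biotech": 0.16,
    "university -> pharma": 0.08,
}
counts = {k: v * total for k, v in shares.items()}
for k, v in counts.items():
    print(f"{k}: ~{v:.0f} drugs")
```

So pharma accounts for roughly 146 of the approvals, biotech about 45, and the two university routes about 40 and 20 respectively.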
That sounds about right to me. And finally, I have some hard numbers to point to when I next run into someone who tries to tell me that all drugs are found with NIH grants, and that drug companies hardly do any research. (I know that this sounds like the most ridiculous strawman, but believe me, there are people - who regard themselves as intelligent and informed - who believe this passionately, in nearly those exact words). But fear not, this isn't going to be a relentless pharma-is-great post, because it's certainly not a pharma-is-great paper. Read on. . .
Now to the qualitative rankings. The author used FDA priority reviews as a proxy for unmet medical need, but the scientific innovation rating was done basically by hand, evaluating both a drug's mechanism of action and how much its structure differed from what had come before. Just under half (123) of the drugs during this period were in for priority review, and of those, we have:
46% from pharmaceutical companies.
30% from biotech companies.
23% from universities (transferred to either biotech or pharma).
That shows the biotech- and university-derived drugs outperforming when you look at things this way, which again seems about right to me. Note that this means that the majority of biotech submissions are priority reviews, and the majority of pharma drugs aren't. And now to innovation - 118 of the drugs during this period were considered to have scientific novelty (46%), and of those:
44% were from pharmaceutical companies.
25% were from biotech companies, and
31% were from universities (transferred to either biotech or pharma).
The university-derived drugs clearly outperform in this category. What this also means is that 65% of the pharma-derived drugs get classed as "not innovative", and that's worth another post all its own. Now, not all the university-derived drugs showed up as novel, either - but when you look closer, it turns out that the majority of the novel stuff from universities gets taken up by biotech companies rather than by pharma.
So why does this happen? This paper doesn't put it in one word, but I will: money. It turns out that the novel therapies are disproportionately orphan drugs (which makes sense), and although there are a few orphan-drug blockbusters, most of them have lower sales. And indeed, the university-to-pharma drugs tend to have much higher sales than the university-to-biotech ones. The bigger drug companies are (as you'd expect) evaluating compounds on the basis of their commercial potential, which means what they can add to their existing portfolio. On the other hand, if you have no portfolio (or have only a small one) then any commercial prospect is worth a look. One hundred million dollars a year in revenue would be welcome news for a small company's first drug to market, whereas Pfizer wouldn't even notice it.
So (in my opinion) it's not that the big companies are averse to novel therapies. You can see them taking whacks at new mechanisms and unmet needs, but they tend to do it in the large-market indications - which I think may well be more likely to fail. That's due to two effects: if there are existing therapies in a therapeutic area, they probably represent the low-hanging fruit, biologically speaking, making later approaches harder (and giving them a higher bar to clear). And if there's no decent therapy at all in some big field, that probably means that none of the obvious approaches have worked at all, and that it's just a flat-out hard place to make progress. In the first category, I'm thinking of HDL-raising ideas in cardiovascular and PPAR alpha-gamma ligands for diabetes. In the second, there are CB1 antagonists for obesity and gamma-secretase inhibitors in Alzheimer's (and there are plenty more examples in each class). These would all have done new things in big markets, and they've all gone down in expensive flames. Small companies have certainly taken their cuts at these things, too, but they're disproportionately represented in smaller indications.
There's more interesting stuff in this paper, particularly on what regions of the world produce drugs and why. I'll blog about it again, but this is plenty to discuss for now. The take-home so far? The great majority of drugs come from industry, but the industry is not homogeneous. Different companies are looking for different things, and the smaller ones are, other things being equal, more likely to push the envelope. More to come. . .
Via Avik Roy at Forbes, there's news of a deal between Pfizer and Washington University at St. Louis. The company is giving the university "unprecedented access" to what they say is a list of more than 500 drugs and failed drug candidates, and letting them tear into them in an effort to find out what new uses there might be for both current and failed compounds.
“There are two realities in drug discovery,” explains Don Frail, chief scientific officer of Pfizer’s Indications Discovery Unit. “The majority of candidates tested in development do not give the desired result, yet those drugs that do succeed typically have multiple uses. By harnessing the scientific expertise at this leading academic medical center, the collaboration seeks to discover entirely new uses for these compounds in areas of high patient need that might otherwise be left undiscovered.”
Pfizer's paying Wash. U. $22.5 million as well, which will be well worth it if a single good repurposing idea comes out of the collaboration. Pfizer (or any large drug company) can run through twenty million dollars of expenses on its own without a qualm, so this deal should be no problem. These compounds seem to already have had a lot of work done on them, and will thus have a shorter path through development if something turns up.
I've no idea what the chances of that are, of course - probably not all that great, but it's impossible to be sure about that. I do like the idea of letting a completely different set of eyes go over things, though. One of the biggest problems in a large organization is group-think. People get convinced that something is a hot area because other people seem convinced that it's a hot area, and the same holds true for getting convinced that something's not worth working on.
Look at the way Pfizer convinced itself that Exubera (inhaled insulin) was going to be a huge success, when it was actually a major disaster. On a smaller scale, that sort of thing happens all the time, all over the industry. Projects and ideas rise and fall only partly on their scientific merit - the drug labs are still staffed by human beings, and we're susceptible to all the biases and errors that everyone else is. And it's not like the Washington U people won't have their own biases, but theirs will at least be different.
That brings me back to one of the many reasons that I don't like giant drug company mergers. I think that we need as many different sets of eyes looking at our problems as we can get. The more shots get taken, from all sorts of angles, the better the chance of hitting something. And a huge company, while it does have room for some differences inside it, tends to homogenize viewpoints. The One Big Project with its One Big Compound will get the resources for a given area in the end. It's like when a multiplex theater opens in a smaller town - they tell everyone that with 16 screens they'll be able to bring in movies that otherwise never would play there. But come July, all sixteen screens are probably showing Revenge of the MegaSequel Part II, just to make sure that one's starting every twenty minutes.
I kept meaning to write last week about GlaxoSmithKline's decision to open up a database of possible lead compounds against malaria. These were hits from a larger screen that the company ran, and they've been made unusually public. (Here's the press release as a PDF). There are about 13,500 structures, apparently. The company is to be commended for doing this, naturally, but I wish that the press coverage would emphasize a few things that it hasn't so far.
For one, these are not antimalarial compounds, at least not to a medicinal chemist. Some of them might be, but for now, they're all potential antimalarials, with a long, long way to go. This is all in what most drug discovery organizations call the "hit to lead" stage. Some of these compounds may well be screening artifacts. Others will turn out to work through mechanisms that won't be useful - they'll kill malaria parasites, but they'll kill lots of other things, too. Some of them will hit other targets that aren't quite as severe, but will still be enough to make them undesirable. And many others will be too weak to be useful as they are, and turn out, after investigation, to have no clear path forward to making them more potent. And so on.
The most interesting compounds still have a long road ahead. What are their blood levels after various sorts of dosing? Which of those dosage forms are the best - the most reliable, the easiest to make, the most stable on storage? What metabolites do the compounds form in vivo, and what do those do? What long-term toxic effects might they have? How susceptible are they to resistance on the part of the parasites? On top of all these questions are the big ones, about how well these potential drugs knock down malaria under real-world conditions.
This, in short, is what drug development is all about, and it would be good to see some of this brought out in the press coverage. This is what I (and many of the readers of this site) do for a living, and it's enough to occupy all our time with plenty left over. If you can do this sort of thing, you're a drug company, and I'm always looking for opportunities to tell people just what it is that drug companies do and to move people past the evil-pharma versus saintly-university mindset. Nature has it right in their editorial:
Meanwhile, universities and other academic institutions should do more to support and reward the sort of translational research required to develop drug leads such as those offered by GSK — even though that work usually does not result in high-profile, breakthrough research papers. In addition, such translational activities provide a means for universities to contribute to public–private partnerships such as the MMV, the Drugs for Neglected Diseases Initiative and the Institute for OneWorld Health.
Universities also have another part to play. Their often aggressive intellectual-property policies can stymie research and development in neglected diseases — they should ensure that their licensing deals with companies make exceptions for royalty-free use of technologies for good causes. That change, too, is beginning to happen — although, when it comes to hogging intellectual property, academics and their institutions are often among the worst offenders. . .
There's an article in the Chronicle of Higher Education that's been getting a lot of recent attention. It's titled "Grad School in the Humanities: Just Don't Go". The author, clearly (and to my mind, justifiably) embittered about what he sees happening, is an associate professor of English who sees no need to produce a huge surplus of people who want to go on to become associate professors of English.
Some of his warnings don't apply to the sciences. The biggest difference is that there have always been many more places to find work with a science degree other than academia, which is not so true if you've concentrated your graduate studies on the life of Rainer Maria Rilke. Another key factor is that we don't generally come out of grad school with academic debts. To be sure, a Rilke scholar would learn an awful lot about sponging money off wealthy people, but there's that pesky poetic talent problem to be dealt with before you can put those techniques into practice. . .
Of course, these days the jobs aren't exactly coming so readily for new science graduates, although we're still in better shape than anyone over in the humanities. A lot of people are rethinking grad school, though, if the mail I get is any indication. For what it's worth, I offer the Chronicle author's list of bad reasons why people take on graduate study in the humanities - let's take a look and see how many apply to the sciences. I'm going to number them for easy reference:
(1) They are excited by some subject and believe they have a deep, sustainable interest in it. (But ask follow-up questions and you find that it is only deep in relation to their undergraduate peers — not in relation to the kind of serious dedication you need in graduate programs.)
(2) They received high grades and a lot of praise from their professors, and they are not finding similar encouragement outside of an academic environment. They want to return to a context in which they feel validated.
(3) They are emerging from 16 years of institutional living: a clear, step-by-step process of advancement toward a goal, with measured outcomes, constant reinforcement and support, and clearly defined hierarchies. The world outside school seems so unstructured, ambiguous, difficult to navigate, and frightening.
(4) With the prospect of an unappealing, entry-level job on the horizon, life in college becomes increasingly idealized. They think graduate school will continue that romantic experience and enable them to stay in college forever as teacher-scholars.
(5) They can't find a position anywhere that uses the skills on which they most prided themselves in college. They are forced to learn about new things that don't interest them nearly as much. No one is impressed by their knowledge of Jane Austen. There are no mentors to guide and protect them, and they turn to former teachers for help.
(6) They think that graduate school is a good place to hide from the recession. They'll spend a few years studying literature, preferably on a fellowship, and then, if academe doesn't seem appealing or open to them, they will simply look for a job when the market has improved. And, you know, all those baby boomers have to retire someday, and when that happens, there will be jobs available in academe.
Reason #1 is probably common, to some degree, across all academic fields. Graduate school is, in fact, largely about finding out whether you have enough dedication to get through graduate school (and is used as a credentialing signal for that very reason). Reason #2 also probably happens to some extent everywhere, but in science research programs there often aren't any grades after the first year. You have to get your validation from getting good ideas and getting your research to work, which is the same situation that obtains in the real world of science.
Reasons #3 and #4 are actually some of the things that keep people in grad school too long. Though the environment can be odd and stressful, you come to feel at home in it, and worry about going to some new situation where you won't have a place that you've made for yourself. Everyone in the sciences has known people in grad school who've stalled out for just these reasons.
Reason #5 doesn't apply as much for the sciences, I'd say. The kinds of jobs available to someone with just an undergraduate degree are often much different than the ones open to people with graduate training. And the material that you learn in grad school is much like what you started to learn as an undergraduate, just more of it and in more detail. The biggest change is in actually applying it to real research, instead of just learning it and doing well on a written test about it. That's another transition that throws some people out of a scientific career.
But reason #6 would definitely seem to apply, both for academic and industrial jobs. I'd have to think that we have a lot of people who are taking a bit longer to finish their PhDs than they might have otherwise, and a lot of people looking for post-docs who might otherwise not have done one, while they wait for the job market to improve. . .
Continuing Education (CE) is a big issue in many medical fields and those associated with them. Licensing boards and professional societies often require proof that people are keeping up with current developments and best practices, which is a worthy goal even if arguments develop over how well these systems work.
And it's also been a battleground for fights over commercial conflicts of interest. On the one hand, no one needs a situation where a room full of practitioners sits down to a blatant sales pitch that nonetheless counts as continuing education. But on the other hand, you have the problem that's now developing thanks to new policies by the Accreditation Council for Continuing Medical Education (ACCME) and the Accreditation Council for Pharmacy Education (ACPE). Thanks to a reader, I'm reproducing below some key parts of a letter that one professional organization, the American Society for Clinical Pharmacology and Therapeutics, has recently sent out to its members:
In 2006, ACCME and ACPE adopted new accreditation policies that went into effect in January 2009. Most concerning of these new policies is the requirement that CE providers develop activities/education interventions independent of any commercial interest, including presentation by industry scientists. This requirement greatly impacts the Society as industry scientists constitute nearly 50% of our membership and contribute significantly to the scientific programming of the ASCPT Annual Meeting. . .
ASCPT has been left with two options: 1) stop providing CE credit and continue to involve scientists from industry in the scientific program of the Annual Meeting; or 2) continue providing CE credit and remove all industry scientists from the program and planning process. . .
They go on to say that this year's meeting, having already been planned in the presence of Evil Industry Contaminators (well, they don't quite say it like that), will have no CE component, and that they don't see how they'll be able to have any such in the future, since they can't very well keep half the membership from presenting their work. This is definitely a problem for a number of professional organizations, particularly the ones that deal with clinical research. They intersect with the professions that tend to have continuing education requirements, but a significant part of the expertise in their fields is found in industry. The ASCPT is not the only society facing this same dilemma.
It looks as if the accreditation groups decided that they were faced with a choice: commit themselves to judging what sorts of presentations should count for CE credit (which you might think was their job), or just toss out anything that has any connection with industry. That way you can look virtuous and save time, too. My apologies if I'm descending into ridicule here, but as an industrial scientist I find myself resenting the implication that my hands (and those of every single one of my colleagues) are automatically considered too dirty to educate any practicing professionals.
To be fair, this could well be one of those situations that the industry has helped bring on itself. I've no doubt that the CME process has probably been abused in the past. (Update: see the comments section. Am I being too delicate in this phrasing? Probably comes from never having dealt much with the marketing side of the business. . .) But there has to be some way to distinguish the old-fashioned "golf-resort meeting" from a clinical pharmacologist delivering a paper on new protocols for trial designs. The last thing we need is to split the scientific community even more than it's split already.
A comment to yesterday's post made a point that seemed instantly familiar, but it's one that my own thoughts had never quite put together. All of us who do medicinal chemistry came out of academic labs; that's where you get the degrees you need to have to be hired. Many of us worked on the synthesis of complex molecules for those degrees, since that's traditionally been a preferred base for drug companies to hire from. (You get a lot of experience of different kinds of reactions that way, have to deal with setbacks and adversity, and have to learn to think for yourself. Plus, if you can put up with some of the people who do natural products synthesis, the thinking goes, you can put up with anything).
Here's the interesting part, though. People who do the glass-filament spiderweb-sculpture work that is total natural product synthesis will defend it on many grounds (some more defensible than others, in my view). They have, naturally enough, a bias in favor of that kind of work. But have those of us who've done that kind of chemistry and then moved on to industry ended up with the opposite bias? Have we reacted against the forced-march experience of some of our early training by resolving never to get stuck in such a situation again (which is reasonable), but at the same time resolved never to get stuck doing fancy synthesis again?
That one may not be so reasonable. And I don't mean that we avoid twenty-step syntheses for irrational reasons, because there are perfectly rational reasons for fleeing from such things in industrial work. But this bias might extend further. Take a workhorse reaction like palladium-catalyzed coupling - that's just what people tend to think of when they think of uninspiring industrial organic synthesis, two or three lumpy heteroaromatics stuck together with Suzuki couplings, yawn. One of my colleagues, though, recently mentioned that he saw too many people sticking with rather primitive conditions for such reactions and taking their 50% yields (and cleanup problems) as just the normal course of events. And he's got a point, I'd say. There really are better conditions to use as your default Pd coupling mixture than the ones from the mid-1990s. You don't have to always clean all the red-brown gunk out from your product after using (dppf) as your phosphine ligand, and good ol' tetrakis is not always the reagent of choice. But a lot of people just take the standard brew, throw their starting materials in there, and bang 'em together. Crank up the microwave some more if it doesn't work.
I can see how this happens. After all, the big point that people have to learn when they join a drug research effort is that chemistry is not an end in itself - it's a tool to make compounds for another end entirely. If you're just making analogs in the early stages of a new project, no one's going to care much if your yields are low, because the key thing is that you made the compounds. I've said myself (many times) that there are two yields in medicinal chemistry: enough, and not enough. Often, perhaps a little too often, five milligrams qualifies as "enough", which means that you can check off a box through some really brutal chemistry.
But at the same time, if you could make simple changes to your reaction conditions, or to the kinds of reactions you tend to run, you could potentially make more compounds (because you're not spending so much time cleaning them up), make them in higher yields (or make your limited amount of starting material stretch further), or make more interesting (and patentable) ones, too. I think that too many of us do tend to get stuck in synthetic ruts of various sorts.
Perhaps the main cause of this is the pressure of normal drug discovery work. But I do have to wonder if some of the problem is a bit of aversion to the latest, hottest reagent or technique coming out of the academic labs. To be sure, a lot of that stuff isn't so useful out here in what it pleases us to call the real world. But there are a lot of things we could stand to learn, as well. Palladium couplings used to be considered kind of out-there, too, you know. . .
(1) Bnet Pharma on "How Not to Write a Pharma Press Release". Privately held Epeius is sending out bulletins loaded with phrases like "more stunning results" and "Epeius Biotechnologies draws the sword of targeted gene delivery from the stone of chemistry and physics". If they were publicly traded, this would be fun to watch. . .
(2) The rise of Micropharma? We'll come back to this subject:
The drug discovery pipelines of the major pharmaceutical companies have become shockingly depleted, foreshadowing a potential crisis in the ability of Big Pharma to meet the pharmaceutical demands created by the ever-changing spectrum of human disease. However, from this major crisis is emerging a major opportunity, namely micropharma – academia-originated biotech start-up companies that are efficient, innovative, product-focused, and small. In this Feature, we discuss a “new ecosystem” for drug development, with high-risk innovation in micropharma leading to Big Pharma clinical trials. . .
(3) Cleaving amyloid precursor protein into beta-amyloid has long been thought (by many) to be the key pathological event in Alzheimer's. But what about the piece of APP that's left inside the cell?
(4) A favorite post around here for some time has been "Sand Won't Save You This Time", about the wonderfulness of chlorine trifluoride. Well, here's a method to produce very interesting-looking compounds that uses. . .bromine trifluoride. How much do you want these products, that's what you have to ask yourself. To be sure, the authors do mention that "Although commercial, bromine trifluoride is not a common reagent in every organic laboratory, and many chemists do not feel at ease with it because of its high reactivity. . .". You have to go to the Supporting Information file before you start hearing about freshly preparing the stuff from elemental fluorine.
There seems to be some finger-pointing going on about conflicts of interest in the scientific and medical literature. According to this piece in Nature Medicine, a recent conference in Vancouver on peer review featured statements such as this:
"We absolutely should not let up on our scrutiny of industry," says Karen Woolley, a co-author of one of the new studies and chief executive officer of the professional medical writing company ProScribe, based in Queensland, Australia. "But why are we always pointing our finger over there? There's an elephant in the room, and that's the nonfinancial conflicts of interest in academia."
I hope that ProScribe wasn't involved in that Australian journal scandal. But even though the head of a medical writing company clearly has a gigantic axe to grind here, the point isn't invalid. Academia has pressures of its own to publish, and a lot of shaky stuff gets sent out under them.
Under the auspices of (the Committee on Publication Ethics), (consultant Liz) Wager dug through PubMed files to see how many papers had been retracted between 1988 and 2008. She found 529, and, in a close study of a randomly selected set of 312, she judged that only 28% were due to "honest error". Among the rest, some of the largest chunks were due to authors found publishing the same results more than once (18%), plagiarism (15%), fabrication (5%) and falsification (4%) of data. Taking into account an additional 1% in the 'other misconduct' category, the unethical reasons stacked up to 43%.
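The arithmetic behind that 43% figure is easy to check (a minimal sketch, using only the category percentages quoted above):

```python
# Retraction-reason breakdown from Wager's study of 312 retracted papers,
# as quoted above (percentages of the retractions examined).
honest_error = 28
unethical = {
    "duplicate publication": 18,
    "plagiarism": 15,
    "fabrication": 5,
    "falsification": 4,
    "other misconduct": 1,
}

total_unethical = sum(unethical.values())
print(total_unethical)  # 43 -- the "stacked up to 43%" figure

# What's left over after honest error and the unethical categories --
# presumably retractions with unstated or unclassifiable reasons.
print(100 - honest_error - total_unethical)  # 29
```

Note that the roughly 29% remainder isn't itemized in the quoted summary; the interpretation of that residual as "unclassified" is an assumption, not something the study excerpt states.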
Many, perhaps most, of these papers seem unlikely to have been funded by industry. And there are, of course, plenty of rotten papers out there that never get retracted at all, in many cases because no one reads them or notices that they're a rehash of what someone else has already published. The Deja Vu people are starting to cut into that pile, though, and it's a big one.
There's a danger of all this turning into an exchange of tu quoque arguments between industry, academia, and the publishers. I think there's common ground to agree, though, that all sorts of pressures exist to publish work that shouldn't be published, and that everyone has a common interest in making sure that this doesn't happen. And industry still has a bigger responsibility, since (1) it has more money to cause trouble, if it wants to, and (2) the sorts of things it works on often have more immediate relevance to the outside world. If some obscure faculty member somewhere publishes reheated work in a series of low-end journals, he's only wasting the time of a limited number of people. A publication involving clinical trial data, though, can send ripples out a lot farther and faster.
There's a (justifiably) angry paper out in PLoS Biology discussing the nasty situation too many academic researchers find themselves in: spending all their time writing grant applications rather than doing research. The paper's written from a UK perspective, but the problems it describes are universal:
To expect a young scientist to recruit and train students and postdocs as well as producing and publishing new and original work within two years (in order to fuel the next grant application) is preposterous. It is neither right nor sensible to ask scientists to become astrologists and predict precisely the path their research will follow—and then to judge them on how persuasively they can put over this fiction. It takes far too long to write a grant because the requirements are so complex and demanding. Applications have become so detailed and so technical that trying to select the best proposals has become a dark art.
And a related problem is how this system tends to get rid of people who can't stand it, leaving the sorts of people who can:
The peculiar demands of our granting system have favoured an upper class of skilled scientists who know how to raise money for a big group. They have mastered a glass bead game that rewards not only quality and honesty, but also salesmanship and networking. A large group is the secret because applications are currently judged in a way that makes it almost immaterial how many of that group fail, so long as two or three do well. Data from these successful underlings can be cleverly packaged to produce a flow of papers—essential to generate an overlapping portfolio of grants to avoid gaps in funding.
Thus, large groups can appear effective even when they are neither efficient nor innovative. Also, large groups breed a surplus of PhD students and postdocs that flood the market; many boost the careers of their supervisors while their own plans to continue in research are doomed from the outset. . .
The author is no freshly-minted assistant professor - Peter Lawrence (FRS) has been at Cambridge for forty years, but only recently relocated to the Department of Zoology and experienced the grantsmanship game first-hand. He has a number of recommendations to try to fix the process: shorter and simpler application forms, an actual weighting against large research groups, longer funding periods, limits to the number of papers that can be added to a grant application, and more. Anyone interested in the topic should read the whole paper, and will probably be pounding on the desk in agreement very shortly.
The short version? We think we're asking for scientists, but we're really asking for fund-raisers and masters of paperwork. Surely it doesn't have to be this way.
My post the other day on why-do-it academic research has prompted quite a bit of comment, including this excerpt from an e-mail:
I would also note that mediocrity is hardly limited to academia. I cannot tell you the number of truly dumb things that I continue to see happening in industry, motivated by the need to be doing something - anything - that can be quantified in a report. The idea that industry is where reality takes command is depressingly false, and I would guess that the same thing that distinguishes the best from the rest in academia also applies in the "real world."
Well, my correspondent is unfortunately on target with that one. Industry is supposed to be where reality takes command, but too often it can be where wishful thinking gets funded with investors' cash. I'm coming up on my 20th anniversary of doing industrial drug discovery. I've seen a lot of good ideas and a lot of hard work done to develop them - but I've also seen decisions that were so stupid that they would absolutely frizz your hair. And I'm not talking stupid-in-hindsight, which is a roomy category we all have helped to fill up. No, these were head-in-hands performances while they were going on.
I can't go into great detail on these, as readers will appreciate, but I can extract some recurring themes. From what I've seen the worst decisions tend to come from some of these:
"We can't give up on this project now. Look at all the time and money we've put into it!" This is the sunk-cost fallacy, and it's a powerful temptation. Looking at how hard you've worked on something is, sadly, nearly irrelevant to deciding whether you should go on working on it. The key question is, what's it look like right now, compared to what else you could be doing?
"Look, I know this isn't the best molecule we've ever recommended to the clinic. But it's late in the year, and we need to make our goals." I think that everyone who's been in this business for a few years will recognize this one. It's a confusion of ends. Those numerical targets are set in an attempt to try to keep things moving, and increase the chance of delivering real drugs. That's the goal. But they quickly become ends in themselves, and there's where the trouble starts. People start making the numbers rather than making drugs.
"OK, this series of compounds has its problems. But how can you walk away from single-digit nanomolar activity?" This is another pervasive one. Too many discovery projects see their first job (not unreasonably) as getting a potent compound, and when they find one, it can be hard to get rid of it - even if it has all kinds of other liabilities. It takes a lot of nerve to get up in front of a project review meeting and say "Here's the series that lights up the in vitro assay like nothing else. And we're going to stop working on it, because it's wasting our time".
"Everyone else in the industry is getting on board with this. We've got to act now or be left behind." Sometimes these fears are real, and justified. But it's easy to get spooked in this business. Everyone else can start looking smarter than you are, particularly since you see your own discovery efforts from the inside, and can only see other ones through their presentations and patents. Everyone looks smart and competent after the story has been cleaned up for a paper or a poster. And while you do have to keep checking to make sure that you really are keeping up with the times, odds are that if you're smart enough to realize that you should be doing that, you're in reasonably good shape. The real losers, on the other hand, are convinced that they're doing great.
I'm not sure how many of these problems can be fixed, ours or the ones of academia, because both areas are stocked with humans. But that doesn't mean we can't do better than we're doing, and it certainly doesn't release us from an obligation to try.
I was looking through my RSS feed of journal articles this morning, and came across this new one in J. Med. Chem. Now, there's nothing particularly unusual about this work. The authors are exploring a particular subtype of serotonin receptor (5-HT6), using some chemotypes that have been looked at in serotonergic ligands before. They switch the indole to an indene, put in a sulfonamide, change the aminoethyl side chain to a guanidine, and. . .wait a minute.
Guanidine? I thought that the whole point of making a 5-HT6 ligand was to get it into the brain, and guanidines don't have the best reputation for allowing you to do that. (They're not the easiest thing in the world to even get decent oral absorption from, either, come to think of it). So I looked through the paper to see if there were any in vivo numbers, and as far as I can see, there aren't.
Now, that's not necessarily the fault of the paper's authors. They're from an academic med-chem lab in Barcelona, and animal dosing (and animal PK measurements) aren't necessarily easy to get unless you have a dedicated team that does such things. But, still. The industrial medicinal chemist in me looks at these structures, finds them unlikely to ever reach their intended site of action, can find no evidence in the paper's references that anyone else has ever gotten such a guanidine hydrazone into the brain, either, and starts to have if-a-tree-falls-in-the-forest thoughts.
Now, it's true that we learn some more about the receptor itself by finding new ligands for it, and such compounds can be used for in vitro experiments. But it's not like there aren't other 5-HT6 antagonists out there, in several different chemical classes, and that's just from the first page of a PubMed search. Many of these compounds do, in fact, penetrate the brain, because they were developed by industrial groups for whom in vitro experiments are most definitely not an end in themselves.
I don't mean to single out the Barcelona group here. Their work isn't bad, and it looks perfectly reasonable to me. It's just that my years in industry have made me always ask what a particular paper tells me that I didn't know, and what use might some day be made of the results. Readers here will know that I have a weakness for out-there ideas and technologies, so it's not like I have to see an immediate practical application for everything. But I would like to see the hope of one. And for this work, and for a lot of medicinal chemistry that comes out of academic labs, I just don't see it.
Update: it's been pointed out in the comments that there's a value in academic work that doesn't have to be addressed in industry, that is, training the students who do it. That's absolutely right. But at the same time, couldn't people be trained just as well by working on systems that are a bit less dead on arrival?
And no, I'm not trying to make the case that academic labs should make drugs. If they want to try, then come on down. If they don't, that's fine, too - there's a lot of important research to be done in the world that has no immediate practical application. But this sort of paper that I've written about today seems to miss both of these boats simultaneously: it isn't likely to produce a drug, and it doesn't seem to be addressing any other pressing needs that I can see, either.
And yes, I could say the same about my own PhD work. "The world doesn't need another synthesis of a macrolide antibiotic", I told people at the time. "But I do". Does it have to be like that?
I know that many people are getting tired of this topic. But many people who work in the industry have never met someone who's convinced that drug companies are just standing in the way of innovation, and that all the good stuff comes from the NIH, anyway. So allow me a couple of quick quotes from Dr. Jerry Avorn, chief of pharmacoepidemiology at Boston's Brigham and Women's Hospital, and (thus) a person who should know better:
". . .Virtually every progressive recommendation about health policy for the last 20 or 30 years that the drug industry felt might harm its bottom line has been met by the threat that if they don't make as much money as before, innovation will cease and there will be no cures for new diseases. It came up around Medicare drug pricing and generic drugs. It's not a surprise to see it come up around health-care reform.
There are a couple reasons that this is a specious argument. One is that according to their filings with the SEC, the drug companies only spend about 15 cents of every dollar on research and development. That's compared to more than 30 cents in administration and marketing and more than 20 cents on shareholder equity. As an investment in R&D, I think any venture capitalist would say a company spending 15 percent on research is not a robust innovation engine.
The second issue is that if one looks at the new pipeline of drugs that Pharma has been generating in recent years, it's been puny. Wall Street has noticed this as well. There have been 20 or fewer drugs approved by FDA in recent years, which is lower than in past periods. It's sort of an open secret that innovation isn't working that efficiently.
The third leg of the stool is that if you really trace back where the seminal discoveries come from on which new drugs are based, it is federally supported research, usually funded by the National Institutes of Health, and frequently conducted at universities or academic medical centers. The drug companies will then identify these discoveries and do hard, costly, and important work commercializing them. And they deserve compensation for that work. But it's disingenuous for them to imply that all the discoveries occur within their walls. . ."
Read the rest of the interview if you want to hear how we'd all be better off if everything turned into biotech start-ups. But you say that you thought those were companies, too, and weren't funded by NIH money, but rather by investors who are often hoping for a deal with a big drug company? Adjust your thinking! This last quote should help you:
". . .if we want innovation and scientific discovery we should fund innovation and scientific discovery, not go after it bass-ackwards by paying too much for overpriced drugs and hoping that some of the excess profit will trickle down into innovative research. If I'm right that a lot of the important and useful innovation comes from NIH studies, then the way to get more innovation is to fund innovation. It frankly would be a far more interesting use of any given dollar one wanted to spend. . ."
Megan McArdle has done the work of attacking this at greater length than I can right now, and her post is a good palate-cleansing read after the Avorn interview. One tiny point she brings up that Dr. Avorn might want to internalize is that 15% of revenue is actually quite a large share to spend on R&D. Apple spends 3%, and Google, 10%. Intel manages to get all the way up to 15%. At any rate, the whole post is worth reading, and was clearly written in a mood of complete exasperation. Which I share.
Now, while we've been talking about how much basic research is done in industry, or how much clinical research gets done in academia, here's something that might bear on the discussion. Too much of what looks like useful clinical research on the academic side is actually wasted effort. The New York Times has been running a series called "The Forty Year War", looking at the history of the "War on Cancer", and the latest installment is on clinical trials.
It's been a problem for some time now that there aren't enough patients to go around for many cancer trials. Breast cancer is an especially problematic area, last I heard. It's high-profile, fairly high-incidence, and a lot of investigational anticancer agents are lined up to take a whack at it. So many, in fact, that there aren't enough breast cancer patients available in the US, nowhere near, and the same situation obtains in a number of other areas.
Much of this problem comes from low recruitment rates. As the Times article makes clear, only three per cent of adult cancer patients are enrolled in any kind of trial at all. Many cancer patients want to stick with the best therapy that's currently known, and don't want to add any uncertainty to what they're already dealing with. It's hard to blame them, but that does make the state of the art advance more slowly.
Another factor that may come as a surprise is that many oncology practices find that they lose money by participating in trials. The reimbursement-to-paperwork ratio doesn't always come out very well, especially for centers that don't do a lot of clinical research and haven't been able to streamline the process as much as possible. When they look at the number of patients that they can serve, given the time that's taken up, the trials start to make less sense.
Finally, and this is the least excusable factor on the list, there are many trials that really shouldn't be run at all. The Times does work in a line about how some studies by drug companies are just "designed to persuade doctors to use their drugs." My take on that is that these studies usually are designed to do that by showing that their drug actually works better, which is not such a bad thing. But note this other problem:
There are more than 6,500 cancer clinical trials seeking adult patients, according to clinicaltrials.gov, a trials registry. But many will be abandoned along the way. More than one trial in five sponsored by the National Cancer Institute failed to enroll a single subject, and only half reached the minimum needed for a meaningful result, Dr. Ramsey and his colleague John Scoggins reported in a recent review in The Oncologist.
Even worse, many that do get under way are pretty much useless, even as they suck up the few patients willing to participate. These trials tend to be small ones, at single medical centers. They may be aimed at polishing a doctor’s résumé or making a center seem at the vanguard of cancer care. But they are designed only to be “exploratory,” meaning that there are too few patients to draw conclusions or that their design is less than rigorous.
“Unfortunately, many patients who are well intentioned are in trials that really don’t advance the field very much,” said Dr. Richard Schilsky, an oncologist at the University of Chicago and immediate past president of the American Society of Clinical Oncology.
I don't want to dump a bucket of tar on all academic and publicly funded clinical research, because there's a lot of good stuff that goes on as well. (And remember, the publicly funded basic research is very valuable indeed). But the next time someone tells you about the number of clinical trials run outside of the drug industry, you might want to keep the figures above in mind.
Not all trials are created equal, not by a long shot. But the ones that we run in industry, from what I can see, tend to have a better chance of relevance. That's partly because we're spending our own money on them, and with a goal of finding drugs that people will spend money on in turn. It focuses one's efforts. It's not like we never waste money in this business, but I'm very much willing to bet that we waste it less often than happens with public funds. Companies trying to get an agent through the clinic tend not to set up meaningless trials just to make everyone's resume look better. That I can tell you.
I linked yesterday to a post by Megan McArdle about health care reform. And while I realize that everyone got into a shouting match in the comments to my own post on the subject - and people sure did in the comments to hers; it's endemic - I wanted to quote a section from her on drug discovery:
Advocates of this policy have a number of rejoinders to this, notably that NIH funding is responsible for a lot of innovation. This is true, but theoretical innovation is not the same thing as product innovation. We tend to think of innovation as a matter of a mad scientist somewhere making a Brilliant Discovery!!! but in fact, innovation is more often a matter of small steps towards perfection. Wal-Mart’s revolution in supply chain management has been one of the most powerful factors influencing American productivity in recent decades. Yes, it was enabled by the computer revolution–but computers, by themselves, did not give Wal-Mart the idea of treating trucks like mobile warehouses, much less the expertise to do it.
In the case of pharma, what an NIH or academic researcher does is very, very different from what a pharma researcher does. They are no more interchangeable than theoretical physicists and civil engineers. An academic identifies targets. A pharma researcher finds out whether those targets can be activated with a molecule. Then he finds out whether that molecule can be made to reach the target. Is it small enough to be orally dosed? (Unless the disease you’re after is fairly fatal, inability to orally dose is pretty much a drug-killer). Can it be made reliably? Can it be made cost-effectively? Can you scale production? It’s not a viable drug if it takes one guy three weeks with a bunsen burner to knock out 3 doses.
I don't think a lot of readers here will have a problem with that description, because it seems pretty accurate. True, we do a lot more inhibiting drug targets than we do activating them, because it's easier to toss a spanner in the works, but that's mostly just a matter of definitions. And this does pass by the people doing some drug discovery work in academia (and the people doing more blue-sky stuff in industry), but overall, it's basically how things are, plus or minus a good ol' Bunsen burner or two.
But not everyone's buying it. Take this response by Ben Domenech over at The New Ledger. We'd better hope that this isn't a representative view, and that the people who are trying to overhaul all of health care as quickly as possible have a better handle on how our end of the system works:
. . .But needless to say, this passage and the ones following it surprised me a great deal. Working at the Department of Health and Human Services provided me the opportunity to learn a good deal about the workings of the NIH, and I happen to have multiple friends who still work there — and their shocked reaction to McArdle’s description was stronger than mine, to say the least.
“McArdle clearly doesn’t understand what she’s writing about,” one former NIH colleague said today. “Where does she think Nobel prize winners in biomedical research originate, academic researchers or in Pharma? Our academic researchers run clinical trials and develop drugs. I’m not trying to talk down Pharma, which I’m a big fan of, but I don’t think anyone in the field could read what she wrote without laughing.”
Well, I certainly could make it through without a chuckle, and I'll have been doing drug discovery for twenty years this fall. So how does the guy from HHS think things go over here?
To understand how research is divided overall, consider it as three tranches: basic, translational, and clinical. Basic is research at the molecular level to understand how things work; translational research takes basic findings and tries to find applications for those findings in a clinical setting; and clinical research takes the translational findings and produces procedures, drugs, and equipment for use by and on patients. . .
. . .The truth, as anyone knowledgeable within the system will tell you, is that private companies just don’t do basic research. They do productization research, and only for well-known medical conditions that have a lot of commercial value to solve. The government funds nearly everything else, whether it’s done by government scientists or by academic scientists whose work is funded overwhelmingly by government grants.
Hmm. Well-known with a lot of commercial value. Now it's true that we tend to go after things with commercial value - it is a business, after all - but how well-known is Gaucher disease? Or Fabry disease? Mucopolysaccharidosis I? People who actually know something about the drug industry will be nodding their heads, though, because they'll have caught on that I'm listing off Genzyme's product portfolio (part of it, anyway), which is largely made up of treatments for such things. There are many other examples. Believe me, if we can make money going after a disease, we'll give it a try, and there are a lot of diseases. (The biggest breakdown occurs not when a disease affects a smaller number of people, but when almost no one who has it can possibly pay for the cost of developing the treatment, as in many tropical diseases).
But even taking Domenech's three research divisions as given - and they're not bad - don't we in industry even get to do a little bit of translational research? Even sometimes some basic stuff? After all, in the great majority of cases when we start attacking some new target, there is no drug for it, you know. We have to express the protein in an active form, work up a reliable assay using it, screen our compound collections looking for a lead structure, then work on it for a few years to make new compounds that are potent, selective, nontoxic, practical to produce, and capable of being dosed in humans. (Oh, and they really should be chemical structures that no one's ever made or even speculated about before). All of that is "productization" research? Even when we're the first people to actually take a given target idea into the clinic at all?
That happens all the time, you know. The first project I ever worked on in this industry was a selective dopamine antagonist targeted for schizophrenia. We were the first company to take this particular subtype into the clinic, and boy, did we bomb big. No activity at all. It was almost as if we'd discovered something basic about schizophrenia, but apparently that can't be the case. Then I worked on Alzheimer's therapies, namely protease inhibitors targeting beta-amyloid production, and if I'm not mistaken, the only real human data on such things has come from industry. I could go on, and I will, given half a chance. But I hope that the point has been made. If it hasn't, then consider this quote, from here:
“. . .translational research requires skills and a culture that universities typically lack, says Victoria Hale, chief executive of the non-profit drug company the Institute for OneWorld Health in San Francisco, California, which is developing drugs for visceral leishmaniasis, malaria and Chagas' disease. Academic institutions are often naive about what it takes to develop a drug, she says, and much basic research is therefore unusable. That's because few universities are willing to support the medicinal chemistry research needed to verify from the outset that a compound will not be a dead end in terms of drug development.”
The persistent confusion over what's done in industry and what's done in academia has been one of my biggest lessons from running this blog. The topic just will not die. A few years ago, I ended up writing a long post on what exactly drug companies do in response to the "NIH discovers all the drugs" crowd, with several follow-ups (here, here, and here). But overall, Hercules had an easier time with the Hydra.
Now, there is drug discovery in academia (ask Dennis Liotta!), although not enough of it to run an industry. Lyrica is an example of a compound that came right out of the university labs, although it certainly had an interesting road to the market. And the topic of academic drug research has come up around here many times over the last few years. So I don't want to act as if there's no contribution at all past basic research in academia, because that's not true at all. But neither is it the case that pharma just swoops in, picks up the wonder drugs, and decides what color the package should be.
But what really burns my toast is this part:
So Pharma is interested in making money as their primary goal — that should surprise no one. But they’re also interested in avoiding litigation. Suppose for a moment that Pharma produces a drug to treat one non-life threatening condition, and it’s a monetary success, earning profits measured in billions of dollars. But then one of their researchers discovers it might have other applications, including life-saving ones. Instead of starting on research, Pharma will stand pat. Why? Because it doesn’t make any business sense to go through an entire FDA approval process and a round of clinical trials all over again, and at the end of the day, they could just be needlessly jeopardizing the success of a multi-billion dollar drug. It makes business sense to just stand with what works perfectly fine for the larger population, not try to cure a more focused and more deadly condition.
Ummm. . .isn't this exactly what happened with Vioxx? Merck was trying to see if Cox-2 inhibitors could be useful for colon cancer, which is certainly deadly, and certainly a lot less common than joint and muscle pains. Why didn't Merck "stand pat"? Because they wanted to make even more money, of course. They'd already spent some of the cash that would have to have been spent on developing Vioxx, and cancer trials aren't as long and costly as they are in some other therapeutic areas. So it was actually a reasonable thing to look into. If you're staying in the same dosing range, you're not likely to turn up tox problems that you didn't already see in your earlier trials. (That's where Merck got into real trouble, actually - the accusation was that they'd seen signs of Vioxx's cardiovascular problems before the colon cancer trial, but breezed past them). But you just might come up with a benefit that allows you to sell your drug to a whole new market.
And that might also explain why, in general, drug companies look for new therapeutic opportunities like this all the time with their existing drugs. In fact, sometimes we look for them so aggressively that we get nailed for off-label promotion. No, instead of standing pat, we get in trouble for just the opposite. Your patented drug is a wasting asset, remember, and your job is to make the absolute most of it while it's still yours. Closing your eyes to new opportunities is not the way to do that.
The thing is, Domenech's heart seems to be mostly in the right place. He just doesn't understand the drug industry, and neither do his NIH sources. Talking to someone who works in it would have helped a bit.
After yesterday's post on pathway patents, I figured that I should talk about high-throughput screening in academia. I realize that there are some serious endeavors going on, some of them staffed by ex-industry people. So I don't mean to come across as thinking that academic screening is useless, because it certainly isn't.
What it probably is useless for is enabling a hugely broad patent application like the one Ariad licensed. But the problem with screening for such cases isn't that the effort would come from academic researchers; industry couldn't do it, either: Merck, Pfizer, GSK, and Novartis working together probably couldn't have sufficiently enabled that Ariad patent. It's a monster.
It's true that the compound collections available to all but the very largest academic efforts don't compare in size to what's out there in the drug companies. My point yesterday was that since we can screen those big collections and still come up empty against unusual new targets (again and again), smaller compound sets are probably at even more of a disadvantage. Chemical space is very, very large. The total number of tractable compounds ever made (so far) is still not a sufficiently large screening collection for some targets. That's been an unpleasant lesson to learn, but I think that it's the truth.
That said, I'm going to start sounding like the pointy-haired boss from Dilbert and say "Screen smarter, not harder". I think that fragment-based approaches are one example of this. Much smaller collections can yield real starting points if you look at the hits in terms of ligand efficiency and let them lead you into new chemical spaces. I think that this is a better use of time, in many cases, than the diversity-oriented synthesis approach, which (as I understand it) tries to fill in those new spaces first and screen second. I don't mind some of the DOS work, because some of it's interesting chemistry, and hey, new molecules are new molecules. But we could all make new molecules for the rest of our lives and still not color in much of the map. Screening collections should be made interesting and diverse, but you have to do a cost/benefit analysis of your approach to that.
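Since ligand efficiency is doing the heavy lifting in that argument, here's a minimal sketch of the usual calculation - binding free energy normalized by heavy-atom count. The function name, the example numbers, and the ~0.3 kcal/mol-per-atom rule of thumb are illustrative assumptions on my part, not anything from a specific fragment program:

```python
import math

def ligand_efficiency(kd_molar: float, heavy_atoms: int, temp_k: float = 298.15) -> float:
    """Ligand efficiency in kcal/mol per heavy atom: LE = -dG / N_heavy,
    where dG = RT * ln(Kd) is the binding free energy (negative for Kd < 1 M)."""
    R = 1.987e-3  # gas constant in kcal/(mol*K)
    dg = R * temp_k * math.log(kd_molar)
    return -dg / heavy_atoms
```

On these made-up numbers, a millimolar fragment with 12 heavy atoms (LE around 0.34) is actually a more efficient binder than a 10 nM lead with 35 heavy atoms (LE around 0.31) - which is the whole point of looking at hits this way instead of at raw potency.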
I'm more than willing to be proven wrong about this, but I keep thinking that brute force is not going to be the answer to getting hits against the kinds of targets that we're having to think about these days - enzyme classes that haven't yielded anything yet, protein-protein interactions, protein-nucleic acid interactions, and other squirrely stuff. If the modelers can help with these things, then great (although as I understand it, they generally can have a rough time with the DNA and RNA targets). If the solution is to work up from fragments, cranking out the X-ray and NMR structural data as the molecules get larger, then that's fine, too. And if it means that chemists just need to turn around and generate fast targeted libraries around the few real hits that emerge, a more selective use of brute force, then I have no problem with that, either. We're going to need all the help we can get.
Sean Cutler, a biologist at UC-Riverside, is the corresponding author of a paper in a recent issue of Science. That’s always a good thing, of course, and people are willing to go to a lot of trouble to have something like that on their list of publications. But Cutler’s worried that too many scientists, especially academic ones, are willing to do a bit too much for that kind of reward. He tells John Tierney at the New York Times that he approached this project differently:
“ Instead of competing with my competitors, I invited them to contribute data to my paper so that no one got scooped. I figured out who might have data relating to my work (and who could get scooped) using public resources and then sent them an email. Now that I have done this, I am thinking: Why the hell isn’t everyone doing this? Why do we waste taxpayer money on ego battles between rival scientists? Usually in science you get first place or you get nothing, but that is a really inefficient model when you think about it, especially in terms of the consequences for people’s careers and training, which the public pays for. . .
. . .Obviously there is a balance between self and community interests, but as it stands there are very few metrics of scientific “niceness” and few ways to reward community-minded scientists (some grants consider “broader impact,” but that is not the same thing). What is even worse, is there are even fewer mechanisms for punishing selfish (sometimes horribly so) scientists. If it were their own money or private money they were spending on their research — fine, they can be as selfish as they want and hold others up. But 99 times out of 100, it’s not their money- it’s the public’s money and it drives me absolutely crazy that there is no meaningful oversight of behavior.
That brought in a flood of comments, and Tierney followed up a couple of days later. Addressing the general issue of scientific competition, which is where many of the comments took issue, Cutler added:
“ I am in full favor of competition. My message is: Compete ethically. Sadly, there is a lot of unethical competition that goes on in science. This year alone, I have heard of cases that are the scientific equivalent of insider trading, where reviewers of important papers exploit their access to privileged data to gain unfair advantages in the “race” to the next big discovery. I have heard of researchers being ignored when they request published materials from scientists.
Not sending materials described in papers or exploiting privileged information is a clear violation of journal policies, but unethical behavior of this kind is common in science and is usually perpetrated with a proud smile in the name of “competition. . .”
Well, he’s right that this sort of thing goes on all the time in academia. I don’t know how many tales I’ve heard of pilfered grant application ideas, shady conduct when refereeing papers, and so on. To tell you the truth, though, you don’t see so much of that in industry, at least not in the discovery labs. It’s not that we’re just better human beings over here, mind you – it’s that the system doesn’t allow people to profit so much by that particular sort of conduct. Patent law is one big reason for that, as are the sheer number of lawyers that corporations can bring to bear on someone if they feel that they’ve been wronged. There’s more money involved, in every way, so the consequences of being caught are potentially ruinous.
Update: does this mean I've never worked with sleazeballs? Not at all! Credit-stealing and the like does happen in industrial research labs; they're staffed with humans. But direct theft of someone else's work - that's rare, because being inside an organization is the equivalent of being inside the same academic research group, and it's harder to get away with blatant theft. Academic lab vs. academic lab, though, is more the equivalent of "company vs. company", and (at least in the research stage of things) we have far fewer opportunities for chicanery in industry at that level.
Anyway, unethical conduct in industrial research, when it happens, tends to occur closer to the sources of the money – over in the marketing department, say, or perhaps regulatory affairs. In academia, grants are the source of money, with high-profile publications closely tied to them. The sharp operators naturally tend to concentrate there, like ants around honey.
Cutler’s proposed solution is to go right to that source:
My call to scientists, journals and granting agencies is this: What I’d like to see implemented are rewards for ethical behavior and consequences for unethical behavior. If you knew you might not get a grant funded because you had a track record of unethical practices, then you’d start behaving. It is not much more complicated than that. The journal Science has a “reviewer agreement” that bars the unsavory behavior I described above. After my discussion of the matter with Bruce Alberts, editor in chief of Science, it is clear to me that Science considers the matter very important, but that the journal currently lacks a written policy on the consequences for ethical violations of the reviewer agreement. Without clearly advertised consequences, why behave?
My take is that two issues are being mixed here, which is the same difficulty that led to Tierney having to address this story twice. The first issue is unethical behavior, and I’m with Cutler on that one. There’s too much of that stuff around, and the reason it doth prosper is that the risk/benefit ratio is out of whack. If there were stiffer (and more sure) consequences for such things, people would act on their underhanded impulses less frequently. And for the kinds of people who do these things, the only factors that really matter are money and prestige, so hit ‘em there, where they can feel it.
But the second issue is competition versus cooperation, and that’s another story. Prof. Cutler’s points about wasting grant money don’t seem to me to necessarily have anything to do with unethical behavior. It’s true that holding back cell lines and the like is slimy, and does impede progress (and waste public money). But without going much further, you could talk about waste when you have multiple research groups working on the same problem, even when they’re all behaving well.
That’s what went on here, if I understand the situation. Cutler basically went out to several other groups who were pursuing the same thing (abscisic acid signaling) through different approaches, and said “Hey folks, why don’t we get together and form one great big research team, rather than beat each other up?” I certainly don’t think that he expected these other labs to do something sleazy, nor was he trying to save them from temptation.
And the problem there is (as many of Tierney’s commenters said) that competition is, overall, good for scientific progress, and that it doesn’t have to involve unethical conduct. (More on this in a follow-up post; this one’s long enough already!) That’s why Cutler had to go back and clarify things, by saying “Compete, but compete ethically”. The difficulty with talking about all this at the same time is that the groups he ended up collaborating with were (presumably) doing just that. They’re two separate issues. Both topics are very much worth discussing, but not tangled together.
A colleague came by a while ago and said "You know, the comments to that last post of yours are in danger of turning into Monty Python's Four Yorkshiremen sketch". At the moment, things are running about 50/50 between the "lack of equipment teaches you skills" and "lack of equipment wastes your time" camps. . .
The late Peter Medawar once wrote about resources and funding in research, and pointed out something that he thought did a lot more harm than good: various romantic anecdotes of people making do with ancient equipment, of great discoveries made with castoffs and antiques. While he didn’t deny that these were possible, and admitted that you had to do the best with what you had, he held that (1) this sort of thing was getting harder every year as science advanced, and (2) while it was possible to do good work under these conditions, it surely wasn’t desirable.
His most interesting point was that lack of equipment ends up affecting the way that you think about your research. It’s not like people with insufficient resources sit around all day thinking of experiments that they can’t run and can’t analyze. If you know, in the back of your mind and in your heart, that there’s no way to do certain experiments, then you won’t even think about them. Your brain learns to censor out such things. This limits your ability to work out the consequences of your hypotheses, and could cause you to miss something important.
Imagine, say, that you’re working on some idea that requires you to find very small amounts of different compounds in a final mixture. A good LC/MS machine would seem to be the solution for that, but what if you don’t have access to one? You can spend a lot of time thinking about a workaround, which is mental effort that could (ideally) be better applied elsewhere. And if you had the LC/MS at your disposal, you might be led to start thinking about the fragmentation behavior of your compounds or the like, which could lead you to some new ideas or insights – ones that you wouldn’t have if you’d had to immediately cross off the whole area.
If you’re in a resource-limited situation, then, you’ll probably try to carefully pick out problems that can actually be well addressed with what you have. That’s a good strategy, but it’s not always a possible one. Huge areas of research can be marked off-limits by the lack of key pieces of equipment, and by the time you’ve worked out what’s possible, there may not be anything interesting or important left inside your fence. Medawar’s point was that being stuck inside such a perimeter would not only hurt the way that you did your work, but could eventually do damage to the way that you thought.
It occurs to me that this is similar to George Orwell's claim in "Politics and the English Language" that long exposure to cheap, misleading political rhetoric could damage a person's ability to think clearly. "But if thought corrupts language, language can also corrupt thought". There may be other connections between Orwell's points and scientific thinking. . .definitely a subject for a future post.
In fairness, I should mention that the flip side of this situation isn’t necessarily the best situation, either. Having everything you need at your disposal can make some researchers very productive – and can make others lazy. Everyone has stories of beautifully appointed labs that never seem to turn out anything interesting. There’s danger in that direction, too, but it’s of a different kind. . .
Science is taking a look at the 1991 members of Yale’s Molecular Biology and Biophysics PhD program. The ostensible focus of the article is to see what the effect of flat federal research funding has been on young potential faculty members, but there’s a lot more to pick up on than that.
The first thing to note is that out of 26 PhDs from that year’s class, only one of them currently has a tenured position in academia. Five others are doing science in some sort of academic setting, but only one of those is tenure-track. And you can tell that for at least a few observers, the response to those numbers is “What went wrong?”
Well, nothing did. As it turned out, the students didn’t necessarily come out of the program on a mission to go out and get tenure. But there was no particular way to blame the research funding environment for the numbers, because almost no one that Science interviewed mentioned that as a factor at all. Instead, many of them decided that there might be something more (or at least something else) to life than going from being a grad student and post-doc directly to. . .supervising more grad students and post-docs:
For some MB&Bers, academia was never really an option. "Even as an undergraduate in college, I never bought into the concept of being a professor," says Deborah Kinch, associate director for regulatory affairs at Biogen Idec in Cambridge. "Being a grad student is the last bastion of indentured servitude, and being a faculty member is pretty much the same thing, at least until you get tenure. Earning the same low salary and fighting for every grant--that was the last thing I wanted to do. . .
. . . Midway through their graduate training, a few MB&Bers hatched the idea of a seminar series to hear from former graduates working outside the academic fold. (Athena) Nagi said the group wrestled with the definition of an alternative career and decided that the answer was, in essence, "anything that didn't involve teaching at a major research university”. . .what (Tammy) Spain remembers most were their reasons for branching out. "They all said they didn't want to go into academia. None of them said, 'I failed.' None had even tried to find an academic job. It was the first time I got the sense that there was no shame in not going into academia."
That heightened sense of empowerment reinforced what some class members were already feeling. "At first, you think that academia makes sense," says Nagi. "But by your 3rd or 4th year, you start to get the lay of the land and look at the options. You realize that a postdoc isn't just for 1 year and that there are multiple postdocs."
I particularly like the way that a third-year graduate student had never realized until then that there was no shame in not going into academia. This is a major problem in academic science – the amount of this attitude varies from department to department, but there’s always some of it floating around. It’s no wonder that some of these people were baffled by the prospect of what they were going to do with their lives, because a large, important range of choices was being minimized or ignored.
But I have no room to talk – by that point in my graduate career, I wasn’t clear about what I was going to do, either. I was getting pretty sure, though, that going off and fighting for tenure at a major university was not in the running. I’d seen what the younger faculty put up with in my department, and it didn’t look much better than the life I was leading as a grad student. In many ways, actually, it was worse. Why would I want to do that?
As it turns out, a good number of the 1991 Yale people ended up at various small biotech companies. Some of them have made a success of it, and naturally enough, some of them are out of science altogether. But the rarest, least likely thing for them to do was to get tenure – or even to try. When I think back on the folks I went to grad school with in the mid-1980s, the picture is very similar. You just wish that there were a way to make this sorting-out process less painful. . .
Here’s an appropriate topic for a Friday, although at first many of you may think I’ve lost my mind. What would happen if you combed the full text of the experimental sections of the chemistry journals, looking for how long people ran their reactions?
I’m pretty sure that I know what you’d see: there would be a lot of scatter in the short time periods, with some peaks at the various half-hour and hour marks just for convenience. But as you went out into the multiple-hour procedures, I feel sure that you’d see pronounced spikes in the data at around sixteen to twenty hours and again at around 72 hours.
Some readers have doubtless started nodding their heads, having done the math. Those times correspond to "overnight" and "over the weekend", and I'm willing to bet that they're over-represented (and how) in the data set. I'll go on to predict scarce examples in, say, the 14-hour or 38-hour ranges - there's not much way to run a reaction for those intervals and not be in the lab too early in the morning or too late at night.
A second-order prediction is that when such reactions are found, their origins will skew heavily toward academia rather than industry. And I'm also willing to bet that patent procedures will tend to follow the working-day timelines more than the general literature, for the same reasons. My last higher-order prediction is that the reaction times would not, in fact, obey Benford's Law, as many other data sets of this kind do.
As far as I know, no one's ever done this sort of analysis, but I suppose it would be possible, especially for someone at Chemical Abstracts or at one of the scientific publishers. If someone wants to try it, please let me know what comes out. And if the results follow my predictions, please feel free to refer to the title of this post or something similar. I won't object.
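For anyone tempted to run the analysis, the Benford comparison itself is easy to set up once you've scraped the reaction times. This is a hypothetical sketch - the function names and the sample data are mine, not from any real literature mining:

```python
import math
from collections import Counter

def benford_expected(d: int) -> float:
    """Benford's Law: P(leading digit = d) = log10(1 + 1/d)."""
    return math.log10(1 + 1 / d)

def first_digit(x: float) -> int:
    """Leading significant digit of a positive number (e.g. 72 -> 7, 0.5 -> 5)."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def leading_digit_freqs(times):
    """Observed leading-digit frequencies for a list of reaction times."""
    counts = Counter(first_digit(t) for t in times)
    n = len(times)
    return {d: counts.get(d, 0) / n for d in range(1, 10)}
```

A convenience-clustered sample of times like 16, 18, 20, and 72 hours piles its mass onto leading digits 1, 2, and 7, while Benford's Law would predict about 30% on the digit 1 and only about 6% on 7 - so even an eyeball comparison of the two distributions would test the prediction.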
There’s an interesting article in Angewandte Chemie by Richard Silverman of Northwestern, on the discovery of Lyrica (pregabalin). It’s a rare example of a compound that came right out of academia to become a drug, but the rest of its story is both unusual and (in an odd way) typical.
The drug is a very close analog of the neurotransmitter GABA. Silverman’s lab made a series of compounds in the 1980s to try to inhibit the aminotransferase enzyme (GABA-AT) that breaks GABA down in the brain, as a means of increasing its levels to prevent epileptic seizures. They gradually realized, though, that their compounds were also hitting another enzyme, glutamic acid decarboxylase (GAD), which actually synthesizes GABA. Shutting down the neurotransmitter’s breakdown was a good idea, but shutting down its production at the same time clearly wasn’t going to work out.
So in 1988 a visiting Polish post-doc (Ryszard Andruszkiewicz) made a series of 3-alkyl GABA and glutamate analogs as another crack at a selective compound. None of them were particularly good inhibitors – in fact, most of them were substrates for GABA-AT, although not very good ones. But (most weirdly) they actually turned out to activate GAD, which would also work just fine to raise GABA levels. Northwestern shopped the compounds around because of this profile, and Parke-Davis took them up on it. One enantiomer of the 3-isobutyl GABA analog turned out to be a star performer in the company’s rodent assay for seizure prevention, and attempts to find an even better compound were fruitless. The next few years were spent on toxicity testing and optimizing the synthetic route.
The IND paperwork to go into humans was filed in 1995, and clinical trials continued until 2003. The FDA approved the drug in 2004, and no, that’s not an unusual timeline for drug development, especially for a CNS compound. And there you’d think the story ends – basic science from the university is translated into a big-selling drug, with the unusual feature of an actual compound from the academic labs going all the way. Since I’ve spent a good amount of time here claiming that Big Pharma doesn’t just rip off NIH-funded research, you’d think that this would be a good counterexample.
But, as Silverman makes clear, there’s a lot more to the story. As it turned out, the drug’s efficacy had nothing to do with its GABA-AT substrate behavior. But further investigation showed that it’s not even correlated with its activation of the other enzyme, GAD. None of the reasons behind the compound’s sale to Parke-Davis held up, except the biggest one: it worked well in the company’s animal models.
The biologists at P-D eventually figured out what was going on, up to a point. The compound also binds to a particular site on voltage-gated calcium channels. That turns out to block the release of glutamate, whose actions would be opposed to those of GABA. So they ended up in the same place (potentiation of GABA effects) but through a mechanism that no one suspected until after the compound had been recommended for human trials! There were more lucky surprises: Lyrica has excellent blood levels and penetration into the brain, while none of the other analogs came close. As it happened, and as the Parke-Davis folks figured out, the compound was taken up by active transport into the brain (via the System L transporter), which also helps account for its activity.
And Silverman goes on to show that while the compound was originally designed as a GABA analog, it doesn’t even perform that function. It has no binding to any GABA receptor, and doesn’t affect GABA levels in any way. As far as I can see, a really thorough, careful pharmacological analysis before going into animals would probably have killed the compound before it was even tested, which goes to show how easy it is to overthink a black-box area like CNS.
So on one level, this is indeed an academic compound that went to industry and became a drug. But looked at from another perspective, it was an extremely lucky shot indeed, for several unrelated reasons, and the underlying biology was only worked out once the compound went into industrial development. And from any angle, it’s an object lesson in how little we know, and how many surprises are waiting for us. (Silverman himself, among other things, is still in there pitching, looking for a good inhibitor of GABA aminotransferase. One such drug, a compound going back to 1977 called vigabatrin, has made it to market for epilepsy in a few countries, but has never been approved in the US because of retinal toxicity).
There is a pecking order in chemistry. That’s because there’s one everywhere. If it’s a human endeavor, staffed by humans, you’re going to have hierarchies, real and perceived - who you did a post-doc with, what huge company you're a big wheel in. But that doesn’t mean that we have to bow down to them, and it doesn’t excuse this sort of thing, from The Chem Blog:
“ Waaaaaayyy back at the ACS in San Fran at the poster session, we were walking around and introduced ourselves to this guy standing in front of his poster. Now… old boy (a graduate student) engaged us in some dialog about his poster and we were getting along famously, my friend asking most of the intelligent questions (I was still recovering from giving blood a few hours before and drinking multiple beers immediately after.) As conversations normally flow, he asked us where we were from. I told him my fine institution and my buddy told him his. I assume he wasn’t put off my by school, but the look on his face when my buddy told him where he was from was at first a “are you serious” chuckle, which melted into one of those “do they have a department” and finally to a resound(ing), “I’m done with you.”
I stood there and watched it the whole time. So, my buddy being naive to the ways of the world, kept asking questions but the answers weren’t forthcoming any more. In fact, in the midst of a question my buddy was asking, the guy actually walked away from his poster and started talking to his friends. . .”
Read the rest of the post for the rest of the story, which goes off in a different (and still interesting) direction. But as for this behavior, there’s just no call for it. As far as I’m concerned, if a person is asking intelligent questions, they’ve already provided all the credentials they need to show. Likewise, I reserve the right to discriminate against time-wasting bozos (just as I reserve the right to define that class, although I’ll bet that most of my picks would easily pass a show of hands). But if you’re presenting a poster, you have, whether you realize it or not, entered into an agreement to take on the broad unwashed masses.
Tactfully dealing with the clueless is a learned skill, but no such skill seems to have been called on here. This is tactfully dealing with the intelligent and informed, and if you can’t do that, you have some serious problems. It takes an awful lot of red-hot results to make up for a really obnoxious attitude, and a degree from Big Name U is only partially going to offset one as thick as this. Now, it's true that there are certainly some pretty abrasive folks from BNU, but the ones with the proven big-time track records can at least get away with it. Too many other morons take the shortcut, deciding that the nasty attitude is some sort of essential first step – in some cases, deciding that it and the Big Name is all they need.
Out here in the real world, where Poster Boy has yet to tread, it becomes clear that the wonderfulness of a marquee school background eventually goes stale. There are places in the drug industry where working for particular academic bosses will give you a leg up – for a while. It’s a real advantage to be able to get in the door that way, no doubt, but once you’re through the door you generally have to produce something. (And it’s good to keep in mind that even these advantages don’t necessarily last forever. A rollicking management purge can destabilize an old-boy network very quickly).
No, doing lots of work and doing it really well is a better long-term strategy. (Another part of that strategy is to make sure that people know who’s doing it, but that's a topic for another day). And having a personality that makes people grit their teeth and wait for you to leave is not such a good long-term plan. I wish Poster Boy well, but I hope that he has a lot to talk about. This isn't one of those businesses where you can get by on looks.
I also mentioned recently that I’d come across a good example of an academic compound with interesting activity but no chance of being a drug. Try this one out, from Organic Letters. Yes, there aren’t many other compounds that do what this one does (inhibit the production of TNF-alpha). And no, it’s not going to be a drug – well, at least the odds are very, very long against it.
Why so negative? Several reasons. For one thing, this molecule is extremely greasy. This is not a killer in and of itself, but it’s inviting trouble, for the reasons noted here. The second problem is that this thing looks like it’s going to have some trouble dissolving. That’s trouble from both the thermodynamic (eventual amount in solution) and kinetic (speed of dissolution) senses. That greasiness will be the problem with the former, since a lot of this molecule’s surface area gives water molecules no incentive to join in on anything. And all those aryl rings (along with the symmetric structure) are asking for trouble with the latter. Those features make the structure look like it’ll form a very good, very happy crystal, with its aromatic rings stacked onto each other like ornamental bricks. “Brick” is the very word that comes to mind, actually.
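The thermodynamic/kinetic distinction can be made concrete with the classic Noyes-Whitney rate law for dissolution, dC/dt = k(Cs - C). This is an illustrative sketch only - the function and its numbers are assumptions for demonstration, not measurements on this compound:

```python
import math

def dissolution_profile(c_sat: float, k: float, t_end: float, dt: float = 1e-3) -> float:
    """Integrate the Noyes-Whitney rate law dC/dt = k*(Cs - C) with forward Euler.
    c_sat is the thermodynamic solubility ceiling; k lumps together the kinetic
    factors (surface area, diffusion-layer thickness). Returns C at t_end."""
    c = 0.0
    t = 0.0
    while t < t_end:
        c += k * (c_sat - c) * dt  # concentration rises toward c_sat
        t += dt
    return c
```

On this toy model, greasiness caps Cs - the curve plateaus at a uselessly low concentration no matter how long you wait - while a happy brick-like crystal shrinks k, so the curve takes forever to get anywhere near Cs. Two different failure modes, and this compound looks set up for both.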
But solubility is only the beginning. The real problem is that catechol functionality in the center of the molecule, which is just waiting to turn into a quinone. In medicinal chemistry, no one wants quinones; no one likes them. They’re just too reactive. It would not surprise me for a minute to learn that this group, though, is the reason for the compound’s activity. It’s probably reacting with some functional group on the surface of the target protein and gumming up the works that way. It’ll do that to others, too, if it gets the chance. There are all sorts of weird little quinones in the literature that hit proteins that nothing else will touch, but none of them are going anywhere.
No, it’s safe to say that any experienced drug-company chemist would draw a red X through this one on sight. Plenty of reasonable-looking compounds turn up with unanticipated problems, so we don’t need to go looking for trouble. That’s not to say that it can’t be a research tool (although I’d be careful interpreting the data from complex systems – there’s no telling how many other things that quinone is going to react with).
But all this brings up another thing that we were talking about around here – how much do drug companies owe academia for working out fundamental biochemistry and molecular biology? What if someone uses this very compound, for example, as a research tool and discovers something about its target that could be used to develop an actual drug? What do we call that?
Well, we call that “science”, as far as I can see. Everything is built on top of something else. In a case like this, the discoverers of this current compound, even if they’ve patented it, do not have a claim on what discoveries might come from it later on. An even stronger case was decided in that direction – the University of Rochester’s discovery of the COX-2 enzyme, the patent for which led to their attempt to claim revenue from Celebrex. The judge ruled, absolutely correctly in my opinion, that the discovery of a drug target is not the discovery of a drug, and that the effort and inventiveness needed for that second step is more than enough for it to stand on its own.
There’s a “research exemption” for patents, giving legal room to use the disclosed inventions and compounds to make further inventions. I think that’s an extremely important concept. It lets academic labs study patented industrial compounds for their own purposes, and it even lets companies do that to each other. How would we compare our internal compounds to the competing ones if we couldn’t use them? (There’s more than one research exemption, though, and the traditional common-law one took a big hit a few years ago in Madey v. Duke, which worries me).
I strongly oppose broad patent claims for uses and pathways, because I think that these cut into legitimate research. Patents should cover things that are novel and useful. They should completely disclose the substance of their invention. And in return for the period of exclusive rights, anyone else who wants to should be able to get to work on what will replace them. A patent is not a license to kick back; it’s a reminder to keep moving.
The mention of tropical diseases here the other day turns out to be timely, since the latest Nature has several articles on various ways for industry and academia to partner on attacking these. Some adjustments are needed every time you try this sort of thing, naturally. I particularly enjoyed this article. Here’s a sample:
“. . .translational research requires skills and a culture that universities typically lack, says Victoria Hale, chief executive of the non-profit drug company the Institute for OneWorld Health in San Francisco, California, which is developing drugs for visceral leishmaniasis, malaria and Chagas' disease. Academic institutions are often naive about what it takes to develop a drug, she says, and much basic research is therefore unusable. That's because few universities are willing to support the medicinal chemistry research needed to verify from the outset that a compound will not be a dead end in terms of drug development.
Academics will currently publish, say, a chemical scaffold, which they bill as a potential new target for parasites. "But had a medicinal chemist looked at it, he might immediately see that it will never work as a drug, because it has an inappropriate solubility or toxicological profile," says Els Torreele, a product manager at the DNDi. "Having a chemical structure that kills your parasite is only one of many aspects of what makes a drug a drug”.
Ted Bianco, director of technology transfer at the Wellcome Trust in London, agrees. "It's fine if a researcher is just using a compound as a ligand to probe a biological process," he says, "but don't kid yourself it's a drug unless you ask whether it has druggable properties." What's needed, says Hale, is a 'target product profile', which sets out the appropriate drug chemistry properties. "Getting a drug through regulatory processes is not just about how good your science is and how great your trials are; it is much more complex," says Hale. "And academics don't have the experience — they need to hire people from the drug industry."
This would make particularly interesting reading for the NIH-funding-discovers-all-the-new-drugs crowd. That idea seems pretty indestructible, although you’d think it would at least be dented by talking to the people who actually try to develop drugs (like me, or many readers of this blog), or to the people who are actually partnering with academia (see above).
I first came across this whole debate a few years ago, not having even realized that it was a debate at all. Even now, when I tell co-workers in the industry that there are people who believe that pretty much all drugs come right out of publicly funded research, the usual result is an incredulous stare and a burst of laughter. That’s often followed by a question like “So what is it that I’m doing all day, then?”
Unfortunately, there really are occasional examples of companies scooping things up and making a killing on them – an example will follow in a coming blog post. And on the flip side, I have a recent example coming up of an academic compound which may well do exciting things in a dish, but has as much chance of becoming a drug as I do of becoming an Olympic pole-vault champion. And it’s not that I’m not reasonably aerodynamic – it’s just that there’s more to the pole vault than that, and there’s more to making a drug than working in vitro.
The doctorate-or-not discussion is roaring along in the comments to the last post, and they're well worth reading. I have a few more thoughts on the subject myself, but I'm going to turn off comments to this post and ask people to continue to add to the previous ones.
One thing that seems clear to a lot of people is that too many chemists get PhD degrees. I'm not talking about the effect of this on the job market (more on that in a bit) so much as its effect on what a PhD is supposed to represent. So, here's my take on what a PhD scientist is supposed to be, and what it actually is in the real world. I'm going to be speaking from an industrial perspective here, rather than an academic one, although many of the points are the same.
Ideally, someone with a doctorate in chemistry is supposed to be able to do competent independent research, with enough discipline, motivation, and creativity to see such projects through. In an industrial applied-research setting, a PhD may initiate fewer projects strictly from their own ideas, but they should (1) always be on the lookout for the chance to do so, (2) be willing and able to when the opportunity arises, and (3) add substantial value even to those projects that they themselves didn't start.
That value is both creative and managerial - they're supposed to provide ideas and insights, and they're supposed to be able to use and build on those of others. They should be able to converse productively with their colleagues from other disciplines, which means both understanding what they're talking about and being able to communicate their own issues to them. Many of these qualities are shared with higher-performing associate researchers, who will typically have a more limited scope of action but can (and should) be creative in their own areas. Every research program is full of problems, and every scientist involved should take on the ones appropriate to their abilities.
So much for the ideal. In reality, many PhD degrees are (as a comment to the previous post said) a reward for perseverance. If you hang around most chemistry departments long enough as a graduate student, you will eventually be given a PhD and moved out the door. I've seen this happen in front of my eyes, and I've seen (and worked with) some of the end results of the system. The quality of the people that emerge is highly variable, consistent with the variation in the quality of the departments and the professors. Unfortunately, it's also consistent with the quality of the students. But it shouldn't be. The range of that variation shouldn't be as wide as it is.
There are huge numbers of chemistry PhDs who really don't meet the qualifications of the degree. Everyone with any experience in the field knows this, from personal observation. You will, I think, find proportionally more of these people coming out of the lower-quality departments, but a degree from a big-name one is still far from a guarantee. The lesser PhD candidates should have been encouraged to go forth and get a Master's, or simply to go forth and do something else with their lives. They aren't, though. They're turned loose on the job market, where many of them gradually and painfully find that they've been swindled.
Over time, the lowest end of the PhD cohort tends to wash out of the field entirely. There are, to be sure, many holders of doctoral degrees in chemistry who go into other areas because of their own interests and abilities. But there are also many jobs that make an outside observer wonder why someone with a PhD is doing them, and that's where many people end up who shouldn't have a doctorate in the first place. Others, somewhat more competent, hold on to positions because they're able to do enough to survive in them, if no more. While there are plenty of bad or irrelevant reasons for people not to be promoted over the years, some cases aren't so hard to figure out.
Those, then, are my thoughts on the doctoral degree. What can be done about this situation, if anything, will be the subject of a future post. I have another set of opinions on the Master's degree and its holders, which I'll unburden myself of a bit later on. Comments, as mentioned, should go into the discussion here.
There's an unusual article in Nature that several folks have e-mailed me about. It's unusual for several reasons. For one thing, it's synthetic organic chemistry, and there's not much of that in Nature at all - it's an interesting choice of journal on the part of the authors, Phil Baran of Scripps and two of his students, Thomas Maimone and Jeremy Richter. The title also gives away the other odd feature (as a title should): "Total Synthesis of Marine Natural Products Without Using Protecting Groups".
I was talking about protecting groups here just a couple of months ago. In synthesizing complex molecules, they're often necessary, because there will often be several similarly reactive groups exposed at the same time, and you need to be able to distinguish them. Or you'll need to do something severe to another end of the molecule-in-progress, which an amine or alcohol somewhere either won't let you do or won't survive if you try.
The trouble, as any synthetic chemist can tell you, is that protecting groups introduce their own complexities. Ideally, you want to be able to put them on and remove them with no loss of material, but that's impossible. Ideally, you'd want each one to be removable under conditions that won't disturb any of the others, or anything else in your molecule, but that can be a tall order too as they start to add up. And ideally, you'd want all of them to be able to stand up to anything else you'd like to do, until it's time for them to leave, but that's not available in the real world, either. Sometimes a big part of the work (mental and physical) that goes into a total synthesis is figuring out how to manage all the protecting groups.
Baran makes the case that this has gone too far. He's made several complex molecules without protecting anything at all. There's a price to be paid, of course - some of the steps along the way have not-so-impressive yields because of the bareback conditions. But the counterargument is that the overall yield of the synthesis is often higher in spite of this, because there are so many fewer steps, and the cost and complexity are cut similarly.
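The arithmetic behind that counterargument is worth seeing. Overall yield is the product of the per-step yields, so cutting the step count can beat improving the individual steps. A quick sketch (both routes and all the yields here are invented purely for illustration):

```python
# Overall yield is the product of the per-step yields, so step count
# matters enormously. These two routes and their yields are made up
# for illustration, not taken from the paper.

def overall_yield(step_yields):
    total = 1.0
    for y in step_yields:
        total *= y
    return total

protected_route = [0.80] * 14   # 14 steps at 80% each, protections included
bareback_route  = [0.65] * 6    # 6 rougher steps at 65% each

print(round(overall_yield(protected_route), 3))  # → 0.044
print(round(overall_yield(bareback_route), 3))   # → 0.075
```

Fourteen clean steps at 80% apiece come out around 4% overall, while six rougher steps at 65% come out around 7.5% - the short route wins despite the uglier chemistry.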
Of course, you can't do this by just plowing ahead with the same reactions that a protecting-group-laden synthesis would use. They're on there for a reason, and that method would send you right into the ditch. Baran tries instead to mimic the biochemical synthesis of these molecules as much as possible, since after all, cells don't use protecting group chemistry, either.
This is an idea with a long and honorable history in organic chemistry, starting with Sir Robert Robinson's startling one-pot synthesis of tropinone back in 1917. That one is usually taken as the father of all biomimetic syntheses, although it's been pointed out (by no less an authority than Arthur Birch) that this is partly a legend. But it's a legend that has performed the function of reality, leading to a whole series of biologically-inspired syntheses. This latest paper is a call to make biomimetic synthesis the centerpiece of the field again.
I'm sympathetic to that view, but it's not going to be easy. Read closely, the paper shows that this kind of work can be very difficult indeed, even when the biogenic pathways to your target molecules have been studied (which isn't always the case). There are a lot of steps here that required careful coaxing to work in reasonable yields, or at all - no one should confuse the lack of protecting groups with a savings in time. And these difficulties also undermine the claim of reduced cost and complexity a bit, since they represent plenty of time and effort - and if they aren't synonymous with cost and complexity, I don't know what is. Academia may obscure this a bit, since we're only talking graduate student labor here, but it's a real issue.
Where I see this making an impact industrially is in process chemistry. Many times companies work out several parallel routes to an important drug substance, looking for the lowest overall cost. That's where attention to no-protecting-group methods could pay off. Process groups already try to avoid these steps anyway, for the same reasons.
But for the most part, drug substances aren't so complex that they need lots of protecting group manipulation. We could always try to get into more complicated structures through these routes, but this leads to a chicken-and-egg problem. The medicinal chemists generally don't have the time to investigate the picky conditions needed to make no-protection chemistry work, so they're not going to have access to the shorter, higher-yielding syntheses needed to do analoging work. (And there's the real problem that these analogs might need complete re-optimization of the trickier steps each time, which would be a real nightmare). The process chemists would have the time and mandate to work out the no-protection stuff, on the other hand, but if med-chem can't deliver a good drug candidate, then they have nothing to optimize.
The Nature link above is subscriber-only, but you can read the supporting information with all its synthetic details here if you like. It's a pretty big PDF file, though, so be warned. I'd be interested to hear what readers, both academic and industrial, think about this one.
So, as reader CalProf asked in a comment the other day, what should academic scientists who want to help discover drugs be doing?
As a first approximation, I'd say not drug discovery. That sounds a bit strange, I know, but there are some good reasons behind it. Modern drug discovery takes a lot of resources, from several rather widely separated fields, and it's not easy to bring all the necessary people together in an academic environment. You need med-chemists to make the compounds, pharmacologists and molecular biologists to develop and run the primary and secondary assays, in vivo people to dose and evaluate effects in the animal models (which they'll also need to develop, in many cases), toxicologists, formulation chemists, computer modelers, scale-up chemists. . .and it's a great help to have people in each of these departments who've done this kind of thing many times and know all the obvious pitfalls. It's a lot easier to organize this as a company where everyone is hired to do their specialty, rather than try to run it with whatever post-docs and grad students you have handy.
But that doesn't mean that academia can't play a big role. They already do, of course, in doing much of the basic biochemical research that leads to new drug targets. Unraveling which protein interacts with which in some important cellular process is as basic as it comes, and most of the time that won't lead to drug one - but once in a while those experiments will set the entire industry off on a chase.
Another place where some academic thinking could come in very useful would be in attacking the important pharmaceutical processes that we don't understand: things like pharmacokinetics, oral availability, human versus animal toxicology, and (lots of) better disease models. The inefficiencies in these areas caused by our lack of knowledge are costing everyone billions of dollars - any improvement at all would be good news. Of course, it's not like the industry hasn't taken a crack at them, too (after all, there are those billions of dollars out there to be rescued from the bonfire). But we really need every approach we can get, and some fresh thinking would be welcome.
Want some more in that vein? Better ways to dose large proteins. New formulations, so that insoluble gunk like Taxol could be given in a dosing vehicle that doesn't occasionally give people anaphylactic shock. Some hope of predicting blood-brain barrier penetration. More understanding of active transport of drug-size molecules, and how it varies between species and among different cell types. Make no mistake, these are hard problems. But whoever can make real progress on them will get plenty of recognition, plenty of funding, and will be a flat-out benefactor of humanity to boot.
I had some e-mail from a graduate student in a good lab the other day, and I thought the questions raised were worth a blog post. He wrote:
"One thing which stands out to me is your enthusiasm for chemistry, after having been in pharma for a while. This is something which I am afraid I might lose getting out of academics. I actually was strongly leaning academically until recently. It just seems the chemical problems you would be presented in industry are very vanilla. . .the problem is I really don't have a good grasp on what these are (especially in drug discovery). Then I imagined in drug discovery, you can use any chemistry you want, so the "cutting edge" (i.e. new organometallic transformations with way too much expensive catalyst) is still very relevant. I guess I'm just curious how you stay as passionate about the science as you are. Do you see this/has this changed since you started in industry? As you move up the ranks and further from the bench does chemistry get less and less. . ."
These are definitely worth asking. My reply was:
"As for the enthusiasm part, I may be a little bit odd, but not all that much. There are still plenty of people who enjoy what they're doing.
But part of it is realizing that chemistry is a means to an end in the drug business, not an end in itself. People are enthusiastic about finding something that works as a drug - that's why we don't mind mundane reactions as much, because those give you a lot more shots at making a drug than something that needs 2 days to set up. Of course, if you do nothing but (say) make sulfonamides all day, every day, you'll go nuts. But things vary too much for that to be a problem (most of the time). There's always another new structure idea that you have to figure out how to realize, another new core to work on, etc.
And the chemistry problems are just as knotty as you'd get in academia - how do I set these stereocenters, how do I do this reaction selectively so I can avoid a protecting group, etc. Sometimes they're on a different wavelength as well: How can I make this stuff in fewer steps? How can I avoid that evil mercury reagent? How do I get this stuff to form the right polymorph? How can I get to an intermediate that'll let me sit back and crank out a few analogs, instead of making everything from the ground up?
But, as I said, chemistry is a means to an end. And the non-chemical problems are a lot harder: how do I get these compounds to have higher blood levels? (Next question - why are they so low now? Do they not get in through the gut, or are they getting whacked by the liver, or are they partitioning into some other tissue, or getting hosed out extra fast by the kidneys?) Why does this compound work, but the one without a methyl group kill the rats? (I've had that exact situation - truth be told, we never did completely figure out what was going on. . .) Why does this thing work so much better in mice than rats, and which one is going to be more predictive of humans - if either? And so on.
So, in a way, the chemistry problems take up less of your time the further on you go. Biology and development problems pick up the slack, and then some."
I'd be interested in hearing other takes on these, and I'm sure my correspondent would, too. Any industrial readers care to add some details?
There's an article in Wednesday's Wall Street Journal (subscriber-only link here) (Update: also available freely here - thanks to Kyle of The Chemblog for finding this) on Merck's head of research, Peter Kim. It's well-written, in the sense that depending on how you come to it, you could come away with very different conclusions. If you're a fan of Kim and his approach since he took his current job, then you may well see a portrait of a driven, hard-working scientist struggling to change an insular, arrogant research culture and drag it into the real world. But if you're not so sure about Kim's managerial virtues, you can find evidence that he's in well over his head.
As the article notes, one of the big changes he's made is the number of deals that Merck has been signing. To be fair, the company was probably going to pick up the pace on outside collaborations anyway when its late-stage pipeline took so many hits, but maybe not to this extent. Much is made of a "charm school" operation where Merck's people were supposedly told not to be so haughty with potential small-company partners. I find it hard to imagine that this made a huge difference, though. Merck most certainly does have an attitude, even now, but I have to think that small company pitchmen are used to getting the same stuff everywhere they go.
Everyone knows the score at these presentations. The people from the smaller outfit are saying "We have something that you don't. Even though you're big and have more money than we do, believe us, you want this." And their counterparts on the other side of the table are saying "Prove it. We know that you think we're a big piggy bank to be turned over and shaken, but no nickels are coming out until you show us something more than snappy PowerPoints". The glad-handing approach that the article portrays Kim as using sounds to me like a recipe for overpaying for deals.
But my favorite part is on the various departures that have taken place:
"Soon after he arrived, he angered Emilio Emini, Merck's senior vice president of vaccine research. During his 20 years at the company, Dr. Emini had done some seminal AIDS work. Dr. Kim wanted to hire another accomplished but controversial AIDS researcher, David Ho, to oversee him. Dr. Emini strongly objected. . .(and) left Merck in early 2004. He now works for rival Wyeth. . .
Veteran Merck research managers such as Kathrin Jansen, who was instrumental in the development of (cervical cancer vaccine) Gardasil, and Scott Reines, a top researcher in psychiatric diseases, also took jobs at other pharmaceutical companies. . .Dr. Kim hired other academic scientists who enjoyed good reputations but, like him, had never developed a drug. . ."
Not having developed a drug is no particular shame - all of us in the industry start out never having done that. The thing is, we also start out knowing that everyone else in the place knows more than we do about it. High-level academia transplants have a poor track record in the drug industry - if you'd like some more evidence, you can ask some people with a few years of experience at Bristol-Myers Squibb. Kim is probably correct when he says that Merck had too much of a "That's not how we do things here" attitude, but people sometimes forget that academia has no immunity to that disease, either.
Update: I also recommend checking out the take at Health Care Renewal, from an ex-Merck employee.
I mentioned phosphatase inhibitors while talking about okadaic acid the other day, and that brings me to a paper from the journal ChemBioChem (6, 1749) that I was recently reading. It's a collaboration from six German academic groups, led by one at the Max Planck Institute for Molecular Physiology in Dortmund. And there are some things about it that just don't seem to make much sense.
On the surface, everything's fine. They're investigating some cyclic peptide derivatives called stevastelins, which are microbial natural products known to show some phosphatase inhibitor activity. They produced some synthetic analogs of the natural products and ran them against several phosphatases of interest. They then turned around and did the same thing with some analogs of two more phosphatase-inhibiting natural products, roseophilin and prodigiosin. (For those of you who've done some bacteriology, prodigiosin is the compound responsible for the red color of Serratia marcescens colonies).
Then the paper makes a sharp turn, as they move on to a 20,000 compound library that's been assembled by a German academic team. They screened this against their panel of phosphatase enzymes, and came up with 8 or 10 pyrrolobenzoic acid structures that showed some inhibitory activity. End of paper.
Well, the way I've presented this, it sounds like a fairly reasonable paper, if a bit of a hodgepodge. But it's the way everything's presented that makes me wonder. For example, their first group of stevastelin analogs is (for the most part) inactive against the five phosphatases they assayed. One of them hits the Cdc25a enzyme, one of them hits PTP1B, and one of them is active against MptpA, all of which are legitimate drug targets. But these compounds are all around 10 to 15 micromolar, which potency doesn't exactly make me leap up out of my chair.
But the authors refer to this as "pronounced selectivity for individual phosphatases". If you read the fine print, the "not active" values are compounds that were 30 micromolar and worse, so we could easily be looking at just two- or three-fold selectivity here. That is not my definition of "pronounced". Add that to the very weak potency, and you have results that I would toss if I saw them come out of a screening run. As a medicinal chemist, I'd start to get really interested at about a hundred times the potency of these compounds, and I'd be willing to bet that by that time the selectivity, if it's really there, would be long gone.
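To put numbers on that complaint: when "not active" just means "worse than the 30 micromolar assay ceiling", the provable selectivity of a 10 micromolar hit is tiny. A quick sketch (the IC50 values are hypothetical, in the same ballpark as the paper's figures):

```python
# Fold selectivity is just the ratio of the off-target IC50 to the
# on-target IC50. When "inactive" only guarantees the off-target IC50
# is at or above the assay ceiling, the selectivity you can actually
# claim is bounded below by ceiling / on-target IC50.

def min_fold_selectivity(on_target_ic50_um, assay_ceiling_um=30.0):
    """Worst-case (minimum provable) fold selectivity when the
    off-target value is only known to be above the assay ceiling."""
    return assay_ceiling_um / on_target_ic50_um

# A 10 micromolar hit against a 30 micromolar ceiling:
print(min_fold_selectivity(10.0))  # → 3.0
```

Three-fold is about as far from "pronounced selectivity" as a number can get and still be on the right side of 1.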
Their other natural product analogs are similar - one's as good as 3 micromolar against PTP1B, but others start to hit the 30 micromolar ceiling of the assay again. Even the active compound has a very unappealing chemical structure, which would only be developed by a desperate drug company indeed. (I particularly enjoy one of them that's reported against MptpA as "28.7 +/- 9.7" micromolar).
What's also irritating is the statement the authors make to justify all this: "We have previously forwarded the notion that biologically active natural products should be regarded as evolutionarily selected and biologically prevalidated starting points for inhibitor development." I'm glad they brought that up, since drug development from natural products has only been a popular technique for a century or so. The problem, as they're demonstrating here, is that if these compounds really are evolutionarily selected as phosphatase inhibitors, and the last hundred million years have only given you micromolar potency, then the odds of being able to push that lower by making half a dozen analogs are rather slim.
And that brings us to their screening efforts. Their compound library is "selected due to their diverse representation of reportedly bioactive scaffold elements". But 20,000 small molecules, however carefully selected, is not a very large collection. And when you get down to it, our compound collections in the drug industry are also supposed to represent a lot of reportedly bioactive scaffolds, and most of them are a couple of orders of magnitude larger.
The compounds from the screen are all micromolar. One of them looks a bit interesting, and possibly selective between the two phosphatases they ran it against. (What happened to the other enzymes by this point in the paper, I wonder?) I wouldn't want to try to develop these guys, but with the application of a lot of time, money, and effort, you might be able to get somewhere. Or you might wipe out within six months, which is how a lot of projects go, even the ones with better starting points than this, which is most of them.
Ah, but the authors are more optimistic than I am, because (I suspect) they haven't actually tried to do any drug development. "Further application of medicinal chemistry methodologies should allow for the development of more potent inhibitors for subsequent biological investigations in iterative cycles", they say. Oh, yes. Shouldn't it always?
Why am I going on at this length? Because I think that this paper illustrates a general problem: many academic labs do not understand what drug discovery entails, and (worse) they don't realize that they don't understand. The attitude shown here - presenting a few micromolar compounds as fine lead compounds and saying that med-chem should be able to sort things out - would actually be a good way to get fired at most companies. If this paper's data were somehow presented to me as a rationale for starting a project, I would create a distraction and dive for the door. No, there's still a long way to go.
There's an interesting letter to Science in the latest issue (Nov. 4, #5749, p. 777), in response to their special section on drug discovery in the July 29th issue. Adrian Ivinson, a former editor of Nature Medicine and now head of a new research center at Harvard Medical School, writes that the section:
". . .did not recognize an increasingly relevant but underappreciated and underutilized role for academic research in drug discovery.
Universities invest many millions in basic research that exposes disease mechanisms and therefore unearths new targets. Yet few have invested in the relatively modest infrastructure required to put their discoveries to the test. As a result, many promising targets gather dust on the university shelf. . ."
Really? Send 'em over here. I've spent a lot of time defending the way the drug industry takes basic research from academia and turns it into applications. (See the September 9th, 2004 post here and work up from there if you're interested). The usual complaint is that that's all we ever do, so it's refreshing, in a weird way, to hear a complaint that we're not taking enough. But if these targets are being published somewhere even semi-reputable, believe me, we're seeing them.
And as for the "relatively modest infrastructure", that depends on what you mean by modest. For example, the research site I work at does no manufacturing, no human trials, no large animal toxicology studies, and very little scale-up chemistry (just enough to get through two-week rodent runs). But we have hundreds of people working here, in several rather large and expensive buildings crammed full of expensive stuff. Now, it's true that we're working on a number of projects simultaneously - just how many, I'm most certainly not going to say. But you'd need a lot of this stuff around no matter how few projects you were developing.
Dr. Ivinson goes on to say that assay development and validation, compound screening, medicinal chemistry and preliminary animal tests are functions "well suited" to academia. Perhaps, perhaps. But it should be noted that there are some well-known people (such as Stuart Schreiber) with experience in both academic and industrial research who worry about academia's ability to do this sort of thing. He also says:
"Demonstrating a credible mechanism and target, proprietary lead compounds, and preliminary in vivo efficacy will be enough to bring some of our industry colleagues back to the table."
That it will! Be prepared, though, to drop more than a good-sized grant application's worth of money to do that. It's harder than it looks to get that far. And those proprietary compounds might scare away as many companies as they attract, by the way. Proprietary means, of course, that you guys own them, and that means that we have to buy them. We'd naturally much rather have our own compounds. That would mean demonstrating proof of concept with something that's not patentable, but there are worse things. We can always screen, and believe me, we have a lot more things in our screening files than you do.
As I've said, I think that Dr. Ivinson is underestimating the difficulty of drug discovery, but at least he realizes that it's worth doing. The letter finishes with a sentiment that I can only applaud:
"But this will only happen when academics stop treating drug discovery as the intellectually inferior domain of the commercial sector and start seeing it as the natural development of their research."
A reader at a large research university sends this along for comment:
"My advisor is a staunch skeptic of the value of "big pharma". He recently made a comment in a group meeting that "Merck has not discovered anything in 25 years. They don't do research, they acquire it. In fact, I don't know why they even have chemists and biologists, maybe they feel they have to..."
Well. I realize that there's a lot of good-natured sniping between industry and academia, but that kind of crosses the line, doesn't it? The first thing that I feel like saying to this professor is that Merck, which is indeed one of those Big Companies That Makes Money, presumably doesn't employ an army of expensive chemists and biologists for cosmetic reasons. So if you can't figure out why they've kept such people around for decades, perhaps there could be valid reasons that you haven't fully appreciated. It's a hypothesis worth considering, and that would be a higher-percentage move than assuming that the company must be so thickheaded that it hasn't yet figured out that it could fire everyone. That's an interesting approach to the data, sort of like trash-canning any experiment that didn't fit your original assumptions. You don't do that, though, right?
This is an especially rich comment when applied to Merck, which does as much (or more) fundamental research as anyone in the industry. If you want to talk about just going out and buying your stuff, snipe at Pfizer. But Merck is famous for digging into its own projects for years and years until they get them to work. Perhaps a look at a search for "Merck" in PubMed would illustrate the point?
Maybe the problem is that phrase "discovered anything." I've found that some university-based scientists actually take that to mean "discovered anything that would make a neat article in Cell". In the drug industry, our definition is more like "discovered something useful that no one else has done before". And "something useful" means "something that improves a person's health enough that they're willing to pay us money for it". I realize that I've introduced the monetary snake into the Garden of Pure Research, but ah, what choice do we have? They don't give out grants big enough to pay for what we do.
I'm willing to bet that you're thinking about the case of the COX-2 inhibitors. As many people have heard, Merck made quite a bit of money until recently selling one of those. The University of Rochester had a patent on the enzyme and its use as a screening tool, and sued Searle (now Pfizer). But they were trying to reach through and claim a share of the profits for the drugs found through this method (while not, last I heard, offering to soak up any of the recent losses). This suit failed, and it's worth remembering why:
As one of my readers put it, Rochester discovered a new shovel, and laid claim to any gold that might be dug up with it. That's an excellent metaphor, and I'd extend it to say that they were laying claim not just to the raw gold, but to the finished jewelry. The gap between a basic discovery and a drug is much, much wider than even well-educated people seem to realize.
I could go on, and have. But I think I'll close with an item from this morning's news wires. Merck has announced that they have successfully tested a vaccine that will likely prevent the vast majority of cervical cancers. That must have been accomplished by their idle scientists in those brief intervals between cackling with glee while they threw stacks of hundred dollar bills into the air, but I'm glad they took the time to do it. Does this, I'd very much like to know, count as a discovery? After all, vaccines have been known for a long time. Heck, cervical cancer isn't a new disease either, nor is its association with the HPV viruses. I'll bet Merck couldn't get this study published in Cell, or even PNAS. They'll have to settle for the front pages of virtually every newspaper in the world. Time to kick back for another twenty-five years!
I'm not saying these are all true, or true all the time. But here are three things that industrial pharma researchers tend to believe about academic ones:
1. They talk too darn much. Don't even think about sharing any proprietary material with them, because it'll show up in a PowerPoint show at their next Gordon conference. How'd that get in there?
2. They wouldn't know a real deadline if it crawled up their trouser legs. Just a few weeks, just a few months, just a couple of years more and they'll have it all figured out. Trust 'em.
3. They have no idea of how hard it is to develop a new compound. First compound they make that's under a micromolar IC50, and they think they've just discovered Wonder Drug.
And (fair's fair), here are three things that academic researchers tend to believe about industrial ones:
1. They have so much money that they don't know what to do with it. They waste it in every direction, because they've never had to fight for funding. If they had to write grant applications, they'd faint.
2. They wouldn't know basic research if it bonked them on the head. They think everything has to have a payoff in (at most) six months, so they only discover things that are in front of their noses.
3. They're obsessed with secrecy, which is a convenient way to avoid ever having to write up anything for publication. They seem to think patent applications count for something, when any fool can send one in. Try telling Nature that you're sending in a "provisional publication", details to come later, and see how far that gets you.
You hear an awful lot about teamwork when you're in industry. (Personally, my fist clenches up whenever I hear the phrase "team player", but perhaps that's just me.) But there's a bit of truth in all this talk, and it's something that you generally don't encounter during graduate training.
As a chemistry grad student, you're embedded in a chemistry department, and most outside groups will either be irrelevant or there to service things for you. Getting along with people outside your immediate sphere is useful, but not so useful that everyone makes the effort. But pharmaceutical companies have a lot of different departments, and they're all pretty much equal, and they are all supposed to get along. You've got your med-chem, your pharmacology, the in vivo group (or groups, who may be stepping on each other's toes), metabolism, PK, toxicology, formulations. . .as a project matures, everybody gets dragged in.
These other folks do not see themselves, to put it mildly, as being put on earth to service the medicinal chemistry group. They are very good at detecting the scent of that attitude, and will adjust theirs accordingly. (Some of them already have filed chemists in the "necessary evil" category.) For the most part, no one is supposed to be able to pull rank on anyone else, so in order to get things done, you'll have to play nicely with others.
Not everyone figures this out. I watched someone once whose technique of speeding up the assay results for his compounds was to march down to the screening lab and demand to know where his procreating numbers were, already. No doubt he thought of himself as a hard-hitting, take-charge kind of guy, but the biologists thought of him, unsurprisingly, as a self-propelled cloaca. His assay submissions automatically got moved to the "think about it until next Tuesday" pile, naturally.
There was a good question asked in the comments to the previous post on first job interviews: what do you talk about when you work at one company and you're interviewing at another?
Well, I've done that myself, more than once (note to my current co-workers: not in the last few years, folks.) And it can be tricky. But there are some rules that people follow, and if you stay within their bounds you won't cause any trouble. That's not to say that my managers wouldn't have had a cow if they'd seen my old interview slides at the time, but I was at least in the clear legally. Here's how you make sure of that:
First off, it would be best if you could confine your interview talk to work that's been published in the open literature. That stuff is, by definition, completely sterilized from an intellectual property standpoint, and you can yammer on about it all day if you want. The downside is that published work tends to be pretty ancient stuff by the time it shows up in a journal, and you may have done a lot more interesting things since then. (The other downside is that published projects are almost always failed projects.) Work that's appeared in issued patents is also bulletproof, of course, but it suffers from the same time-lag disadvantages.
Second best is work that's appeared in patent applications. This stuff hasn't been blessed by the patent office yet, so things could always change, but it's at least been disclosed. When you talk about it, you're not giving away anything that couldn't have already been downloaded and read. (Of course, you do have to resist the temptation to add lots of interesting details that don't appear in the application.)
If you've at least filed the applications, then you can still be sort of OK, since they're going to publish in a few months, anyway. This is a case-by-case thing. If the company you're interviewing at is competing with you in that very field, you'd better not give them a head start. But if you're talking antivirals at a company that does nothing but cardiovascular and cancer, you should be able to get away with it. It would be best if you didn't disclose full structures - leave parts of the molecules cut off as big "R" groups and just talk about the parts that make you look like the dynamic medicinal chemist you are.
The worst case is "none of the above." No published work worth talking about, no patent applications, no nothing. I actually did go out and give an interview seminar under those conditions once, and it was an unpleasant experience. I had to talk about ancient stuff from my post-doc, and it was a real challenge convincing people that I knew what was going on in a drug company. I don't recommend trying it.
But I don't recommend spilling the beans in that situation, either. I've seen a job interview talk where it became clear that the speaker was telling us more than he really should have, and we all thought the same thing: he'll do the same thing to us if he gets a job here. No offer.
I've been seeing quite a few candidate seminars recently, so allow me to pass on some advice to those of you out on the first-job-in-the-drug-industry trail.
First off, some presentation tips: Speak up, if possible. I hear ten too-soft seminars for every too-loud one. Don't give your talk to the screen - either the one on your laptop or the one on the wall. Give it to the people in the room. Look up, turn around, do what you need to do to give them the sense that you're passing information on to them. Find a way to sound somewhere between the extremes of here-is-my-script and gosh-I-don't-remember-this-slide.
As for that information, slides in a scientific presentation should have a medium amount of information on them. A whole slide with one big reaction on it is OK during the introduction, but you'd better fill things out a bit as you move on in the talk. Your audience can tell if you're padding things out.
But don't make the opposite error, putting all your information on one slide in One Big Table. You might think it looks more impressive that way, but it's just irritatingly illegible and uninterpretable. Spread those big data heaps out a bit into coherent piles - put all the aliphatic examples on a slide, followed by the aromatic ones, and so on. You'll find more things to talk about that way, too.
Be honest. If you have to come in with a thin talk, for whatever reason, admit it to yourself and be prepared to admit it in some fashion to your audience. Find some ways to show them that you know more than your slides can illustrate. And don't try to pretend that your results are groundbreaking and exciting, unless they really, really are. Exciting results usually speak for themselves, and your audience will know 'em when they see 'em.
Be prepared for the obvious. If you put a weird reaction up on the screen, someone is going to ask you about the mechanism. If you have some unusual results in a series, someone's going to ask you why you think they came out that way. Be ready with some ideas - it can be fine to not know the answer yet, as long as you've shown that you've thought about what the answer might be. Looking unprepared for down-the-middle pitches like these will get you crossed off the list very quickly.
And look as if you can learn. No one comes into the drug industry knowing what they really need to know. It comes with experience, and you need to make it clear that you're the sort of person that experience is not wasted on.
That should help. I'll settle for a fee of 10% of your first year's salary, OK?
I thought I'd briefly explain one of my "Ten Questions" from the other day. The old-fashioned qualitative organic tests that I mentioned in #4 are things that were used in the 1960s and before to identify classes of compounds. Various brews can give you color indicators for the presence of double bonds, methyl ketones, aldehydes and the like. Some of them are quite dramatic - Tollens reagent, for example, suddenly deposits a silver mirror layer (scroll down on that link to see it) on the inside of the flask when it goes right.
But no one uses this stuff any more. No one at all, at least not if they can help it. Modern methods like NMR and routine HPLC/mass spectrometry have completely destroyed the usefulness of the old chemical tests, because you can now find out far more about your compound with little or no destruction of the sample.
Some undergraduate courses apparently still have these reactions in their curricula, and the only reason I can see is inertia. I've heard rationalizations about using them to teach reaction mechanisms and so on, but you can do that just as easily with reactions that real chemists actually run in the real world. And why wouldn't you? If you're a student that's been asked to run a battery of qualitative organic tests, you should ask for a refund of your tuition. You're being had.
Being a harmless science blogger, I've stayed out of the whole Harvard/Summers/women-versus-men tar pit. (Proof that I don't spend all my time fishing for traffic, as if posts on patent law weren't enough evidence already.) If you want, you can find more discussion of that controversy than you could want on any of the current-affairs blogs. But, still, I was struck by a comment from Virginia Postrel. She's discussing what might be done to increase the female presence in the sciences, given that biological clocks for reproduction work very differently for women and men (i.e., fathering a child at 45 is a lot easier than getting pregnant at that age. Neither Virginia nor I make any claims about the wisdom of doing either one; we're just talking biological feasibility):
"If, however, you spend six years in grad school and another two as a postdoc, you'll be 30 when you get your first tenure-track post--and that's assuming you don't work between college and grad school. I don't have the numbers, but science training is notorious for stretching out the doctoral/postdoc process, in part because the researchers heading labs benefit from having all that cheap, talented help. Female scientists who want kids are in trouble, even assuming they have husbands who'll take on the bulk of family responsibilities."
Fortunately, that long a stint in academia is unusual by chemistry standards, but molecular biology is notorious in just the way she's talking about. I've seen biology postdoctoral positions break up marriages, because the other partner eventually just wanted to finally, finally move on with life. Her suggested remedies?
"So, if a university like Harvard wants to foster the careers of female scientists, this is my advice: Speed up the training process so people get their first professorial jobs as early as possible--ideally, by 25 or 26. Accelerate undergraduate and graduate education; summer breaks are great for students who want to travel or take professional internships, but maybe science students should spend them in school. Penalize senior researchers whose grad students take forever to finish their Ph.D.s. Spend more of those huge endowments on reducing (or eliminating) teaching assistant loads and other distractions from a grad student's own research and training."
I got my first real PhD-level job at 27, after a year's post-doc, but that's a year or so younger than average for organic chemistry. I spent my undergraduate summer breaks doing research internships (of greater and lesser value), but I should make clear to those outside the field that graduate students in the sciences already work all through the summer. When I was in grad school, we watched the law students across the street pack up and leave in the spring while we cranked away in the lab days, nights, weekends, and holidays. I treasure a memo in my files from the chemistry department head, pointing out that the university vacation calendar did not apply to grad students - and he wasn't just talking about summers, of course. Do not, the memo warned, attempt to take all these holidays, things with names like "spring break", even though you may hear people talking about them.
As for Virginia's other prescriptions, I think penalizing slowpoke professors is a great idea. I know that some schools talk about doing this, but I've never seen any of them follow through. I think that the inverse idea, rewarding those research groups with a high percentage of students finishing on time, would be worth looking into as well. There are plenty of groups that could use a better work ethic - not in terms of the number of hours put in, but in terms of making sure that everything the students do is devoted to the great and holy cause of getting the hell out of graduate school. That's something you should do on general principles, man or woman, whether you plan to start a family or not. Grad school is for getting through, not for lingering.
Reducing TA assignments would also help. I know that many professors, if they have enough grant money, try to get their students out of teaching assistant positions as early as the university will let them (I did one year of it, the minimum.) But if you work for someone without as much of the ready cash, you can be TA-ing until your last year, and in an increasingly bitter mood about it, too.
Speeding up graduate education can be done. You don't want to turn out a bunch of unprepared losers, but as far as I can see, the system we have now turns those out anyway, just more slowly. It's true that real research projects take time - you're never going to get well-trained chemistry PhDs out the door in two and a half years. But you shouldn't be expecting five and six years out of people as the norm.
These two posts (here and here) over at Uncertain Principles are well worth reading if you like discussions of the divide between people who understand science and people who don't. Chad Orzel, being a physicist, instantly translates "doesn't understand science" to "doesn't understand math", which is fair enough, especially for physics. His analogy to the language of critical theory, as found in English literature classes and the like, has threatened to turn the comments threads for both posts into debates about that instead, but Chad's doing a good job of trying to keep things on topic.
What he's wondering about, from his academic perspective, is how to teach people about science if they're not scientists. Can it really be done without math? He's right that a fear of mathematics isn't seen as nearly as much of a handicap as it really is, and he's also right that physics (especially) can't truly be taught without it. But I have to say that I think that a lot of biology (and a good swath of chemistry) can.
Or can they? Perhaps I'm not thinking this through. It's true that subjects like organic chemistry and molecular biology are notably non-mathematical. You can go through entire advanced courses in either field without seeing a single equation on a blackboard. But note that I said "advanced". I can go for months in my work without overtly using mathematics, but my understanding of what I'm doing is built on an understanding of math and its uses. It's just become such a part of my thinking that I don't notice it any more.
Here are some examples from the past couple of weeks: a colleague of mine spoke about a reaction that goes through a reactive intermediate, an electrically charged species which is in equilibrium with a far less reactive one (which doesn't do much at all.) That equilibrium is hugely shifted toward the inert one, but pretty much all the product is found to have gone through the path that involves the minor species. That might seem odd, but it's not surprising at all to someone who knows organic chemistry well. A less reactive species is, other things being equal, usually more energetically stable than a more reactive one, and the more stable one is (fittingly) present in greater amount. But since the two can interconvert, when the more reactive one goes on to the product, it drains off the less reactive one like opening a tap. There's a good way to sketch this out on a napkin, where the energy of the system is the Y coordinate of a graph - anyone who's taken physical chemistry will have done just that, and plenty of times.
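The "draining off the less reactive species" picture above can be sketched numerically. This is a toy kinetic model, not anything from the talk in question: a stable species A in fast equilibrium with a minor, reactive species B, where only B goes on to product P. All rate constants are made-up illustrative numbers.

```python
# Toy kinetics for A <-> B -> P, where B is the minor but reactive species.
# Rate constants are arbitrary illustrative values, chosen so that the
# equilibrium lies ~100:1 toward A.

def simulate(k_ab=0.1, k_ba=10.0, k_bp=1.0, a0=1.0, dt=0.01, t_end=1000.0):
    """Simple Euler integration; returns final concentrations (A, B, P)."""
    a, b, p = a0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        flux_eq = k_ab * a - k_ba * b   # net interconversion A -> B
        flux_prod = k_bp * b            # the productive step, B -> P
        a -= flux_eq * dt
        b += (flux_eq - flux_prod) * dt
        p += flux_prod * dt
        t += dt
    return a, b, p

a, b, p = simulate()
# B never rises above ~1% of A, yet essentially all the material ends up
# as P: the reactive minor species is continuously drained off and then
# replenished from the stable one, exactly the "open tap" picture.
print(f"A = {a:.4f}, B = {b:.6f}, P = {p:.4f}")
```

Run it and the product pool ends up holding nearly all the starting material, even though the species it came through was never more than a trace at any instant.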
Here's another: a fellow down the hall was telling us about a reaction that gave a wide range of products. Every time he ran one of these, he'd get a mix, and very minor changes in the structure of the starting material would give you very different ratios of the final compounds. That's not too uncommon, but it only happens in a particular situation, when the energetic pathways a reaction can take are all pretty close to each other. The picture that came to my mind instantly was of the energy surface of the reaction system. Now, that's not a real object, but in my mental picture it was a kind of lumpy, rubbery sheet with gentle hills and curving valleys running between them. Rolling a ball across this landscape could send it down any of several paths, many of them taking it to a completely different resting place. Small adjustments from underneath the sheet (changing the height and position of the starting point, or the curvature of the hills) would alter the landscape completely. Those are your changes in the starting material structure, altering the energy profile of all the chemical species. A handful of balls, dropped one after the other, would pile up in completely different patterns at the end after such changes - and there are your product ratios.
Well, as you can see, I can explain these things in words, but it takes a few paragraphs. But there's a level of mathematical facility that makes it much easier to work with. For example, without a grounding in basic mathematics, I don't think that that picture of an energy surface would even occur to a person. I believe that a good grasp of the graphical representation of data is essential even for seemingly nonmathematical sciences like mine. If you have that, you've also earned a familiarity with things like exponential growth and decay, asymptotes, superposition of curves, comparison of the areas under curves and other foundations of basic mathematical understanding. These are constant themes in the natural world, and unless they're your old friends, you're going to have a hard time doing science.
That said, I can also see the point of one of his commentators that for many people, it would be a step up to be told that mathematics really is the underpinning of the natural world, even if some of the details have to be glossed over. Even if some of them don't hit you completely without the math, a quick exposure to, say, atomic theory, Newtonian mechanics, the laws of thermodynamics, simple molecular biology and the evidence for evolution would do a lot of folks good, particularly those who would style themselves well-educated.
Over at Sean Carroll's "Preposterous Universe", there's a post on a physicist's advice to students who want to become scientists. Don't even try, he tells them. No jobs, no money, no thrill, no hope. It's depressing stuff. Carroll is a physicist himself, so he has quite a bit to say on the topic. (Link found via yet another physicist.)
Reading the whole thing, though, I was struck by how far from my own experience it is. The drug industry's going through a rough patch, for sure, but there are companies still hiring. And although we've had some layoffs, and more are in the offing, there are still thousands upon thousands of us out here. We're gainfully employed, working on very difficult and challenging problems with large real-world implications. (And hey, we're getting paid an honest wage while we're doing it, too.)
That's when it hit me: the article that Carroll's referring to isn't warning people away from becoming scientists. It's warning them away from becoming physics professors. Very different! Those categories intersect, all right, but they're not identical. There are other sciences besides physics (no matter what Rutherford said), and in many of them, there's this other world called industry. (The original article doesn't even mention it, and Carroll disposes of it in his first paragraph.)
Some of this is (doubtless unconscious) snobbery - academic science is pure science, after all, while industry is mostly full of projects on how to keep cat litter from clumping up in the bag or finding new preservatives for canned ravioli. Right? And some of it reflects the real differences between physics and chemistry. To pick a big one, research (and funding) in physics has been dominated for a long time by some Really Big Problems. The situation's exacerbated by the way that many of these big problems are of intense theoretical but hazy practical interest.
I am not knocking them for that, either, and I'll enter my recent effusions about the weather on Titan as evidence. I'd love to hear that, say, an empirically testable theory of quantum gravity has made the cut. But that kind of work is going to be the domain of academia. I think that it's a sign of an advanced civilization to work on problems like that, but advanced civilization or not, it's not likely to be a profit center. Meanwhile, chemistry doesn't have any Huge Questions at the moment, but what it has are many more immediately applicable areas of research. Naturally, there are a lot more chemists employed in industry (working on a much wider range of applications.)
Many of the other differences between the fields stem from that basic one. Chemistry has a larger cohort of the industrially employed, so the academic end of the business, while not a jolly sight, isn't the war of all against all that you find in physics, astronomy, or (the worst possible example) the humanities. The American Chemical Society's idea of worrisome unemployment among its members would be clear evidence of divine intervention in many other fields. So those of us who get paid, get paid pretty well. And we don't do three, four, five-year post-docs, either, which is something you find more of in fields where there aren't enough places for everyone to sit down. Two years, maximum, or people will think that there's something wrong with you.
All of this places us, on the average, in a sunnier mood than the physics prof who started this whole discussion (whose article, to be sure, was written four or five years ago.) I was rather surly during grad school, but for the most part I'm happy as the proverbial clam. As I've said, if someone had come to me when I was seven years old and shown me the work I do now, I would have been overjoyed. Who can complain?
You know what I don't miss about chemistry after years in the drug industry? Big, long, multi-step syntheses. Oh, we'll gear up to do eight- and ten- and thirteen-steppers here, even though some of those steps are just things like hydrolyzing methyl esters, stuff that blindfolded grannies should be able to do. But I'm happy to leave the mighty academic natural product synthetic schemes behind, the ones where step fourteen finds you just getting warmed up.
As I've mentioned here before, I did that kind of thing in graduate school, and I swear it's scarred me for life. I pulled the plug on my total synthesis at step 27, about six steps short of the end (that is, if everything had worked perfectly, which was a fat chance.) I've never regretted it. The benefits of getting out of grad school are huge, spacious, and well-appointed compared to the benefits of being able to say that I finished my natural product. Any of my readers in grad school, take note.
Long linear sequences are a slog. You have to start them in the largest buckets you can find, because you're never, ever going to have enough material. Now, we do large scale work in the drug industry, yes indeed, but that's because we intend to finish on large scale. If you're going to do six-week toxicity testing, you'd better have a fine keg of material on hand before you start. But those academic syntheses need huge amounts at the beginning in order to have anything at all by the time they finish. You work until you can't handle or characterize the stuff any more, then you trudge back down the mountain and start porting the loads back up the trail.
An example: I got to the point where I needed to take an optical rotation on the material from about step 25 or so. For those outside the field, this is an analytical technique that involves shining polarized light through a solution of your compound. If it's not an even mix of left-handed and right-handed isomers, that is to say, if there's some chiral character to the sample, the light will rotate. The degree of rotation can be used as an indicator of compound purity - I'm tempted to add "if you're a fool." They're not the most reliable numbers in the world, because some things just don't make the light twist much. And in those cases, a small amount of an impurity that rotates light like crazy will throw everything off. It's happened more than once.
Well, in my case, I loaded a half milligram or so of my precious stuff into the smallest polarimeter tube we had and jammed it into the machine. Hmm, I thought, a rotation of 0.00 degrees. A singular result, since I knew for certain that the molecule had six pure chiral centers. So I went back upstairs and loaded the whole batch into the tube, walking very carefully down the hall with this investment of several months of my life held in both sweaty hands. This time I got a specific rotation of about 1.2 degrees, which means that all those chiral carbons were roughly canceling each other out. Did I believe that number? Not at all! Did I put it in my dissertation? You bet! Gotta have a number, you know.
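For readers outside the field who want the arithmetic: specific rotation is just the observed rotation normalized by path length and sample concentration, which is why a half-milligram sample can read as a flat 0.00 degrees while the pooled batch gives a real number. Here is a minimal sketch of that calculation, using the standard polarimetry convention but with purely illustrative numbers (not the actual values from my long-ago experiment):

```python
def specific_rotation(observed_deg, path_dm, conc_g_per_ml):
    """Specific rotation [a] = observed rotation (degrees) divided by
    (path length in decimeters x concentration in g/mL).
    This is the standard polarimetry convention."""
    return observed_deg / (path_dm * conc_g_per_ml)

# Hypothetical sample: 10 mg of compound in 1 mL of solvent (0.010 g/mL),
# in a 1 dm polarimeter tube, reading 0.012 degrees.
weak_rotator = specific_rotation(0.012, 1.0, 0.010)

# The same compound at a twentieth of that concentration: the observed
# rotation shrinks 20-fold, easily dropping below the instrument's
# resolution, and the readout shows 0.00 even for a chiral sample.
print(weak_rotator)
```

The point the anecdote illustrates falls straight out of the formula: for a weakly rotating compound, the observed rotation scales with how much material you can cram into the tube, so a tiny sample tells you nothing.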
And that's how you work - purifying things through ever-tinier columns, collecting them in slowly shrinking vials, running all the instruments for longer and longer with the gain turned up higher and higher, trying to prove that it's really still in there and really still what it's supposed to be. Then it's back to the buckets. Never again!
The October 29th issue of Science has an interesting article from a team at Stanford on a possible approach for Alzheimer's therapy. The dominant Alzheimer's hypothesis, as everyone will probably have heard, is that the aggregation of amyloid protein into plaques in the brain is the driving force of the disease. There's some well-thought-out dissent from that view, but there's a lot of evidence on its side, too.
So you'd figure that keeping the amyloid from clumping up would be a good way to treat Alzheimer's, and in theory you'd be correct. In practice, though, amyloid is extremely prone to aggregation - you could pick a lot of easier protein-protein interactions to try to disrupt, for sure. And protein-protein targets are tough ones to work on in general, because it's so hard to find a reasonable-sized molecule that can disrupt them. It's been done, in a few well-publicized cases, but it's still a long shot. Proteins are just too big, and in most cases so are the surfaces that they're interacting with.
The Stanford team tried a useful bounce-shot approach. Instead of keeping the amyloid strands off each other directly, they found a molecule that will cause another unrelated protein to stick to them. This damps down the tendency of the amyloid to self-aggregate. The way they did this was, by medicinal chemistry standards, simplicity itself. There's a well-known dye, the exotically named Congo Red, that stains amyloid very powerfully - which must mean that it has a strong molecular interaction with the protein. They took the dye structure and attached a spacer group coming off one end of it, and at the other end they put a synthetic ligand which is known to have high affinity for the FK506 binding protein (FKBP). That one is expressed in just about all cell types, and there are a number of small molecules that are known to bind to it.
The hybrid molecule does just what you'd expect: the Congo Red end of it sticks to amyloid, and the other end sticks to FKBP, which brings the two proteins together. And this does indeed seem to inhibit amyloid's powerful tendency for self-aggregation. And what's more, the aggregates that do form appear to be less toxic when cells are exposed to them. It's a fine result, although I'd caution the folks involved not to expect things to make this much sense very often. That stitch-em-together technique works sometimes, but it's not a sure thing.
So. . .(and you knew that there was going to be a paragraph like this one coming). . .do we have a drug here? The authors suggest that "Analogs based on (this) model may have potential as therapeutics for Alzheimer's disease." I hate to say it, but I'd be very surprised if that were true. All the work in this paper was done in vitro, and it's a big leap into an animal. For one thing, I'm about ready to eat my own socks if this hybrid compound can cross the blood-brain barrier. Actually, I'm about ready to sit down for a plateful of hosiery if the compound even shows reasonable blood levels after oral dosing.
It's just too huge. Congo Red isn't a particularly small molecule, and by the time you add the linking group and the FKBP ligand end, the hybrid is a real whopper - two or three times the size of a reasonable drug candidate. The dye part of the structure has some very polar sulfonate groups on it, as many dyes do, and they're vital to the amyloid binding. But they're just the sort of thing you want to avoid when you need to get a compound into the brain. No, if this structure came up in a random screen in the drug industry, we'd have to be pretty desperate to use it as a starting point.
Science's commentary on the paper quotes a molecular biologist as saying that this approach shows how ". . .a small drug becomes a large drug that can push away the protein. . ." But that's wrong. You can tell he's from a university, just by that statement. I'm not trying to be offensive about it, but neither Congo Red nor the new hybrid molecule are drugs. Drugs are effective against a disease, and this molecule isn't going to work against Alzheimer's unless it's administered with a drill press. If that's a drug, then I must have single-handedly made a thousand of them. The distance between this thing and a drug is a good illustration of the distance between academia and industry.
To be fair, this general approach could have value against other protein-protein interaction targets. I think that it's worth pursuing. But I'd attack something other than a CNS disease, and I'd pick some other molecule than Congo Red as a starting point.
OK, I couldn't resist. Let me reiterate that I completely admire the NIH's commitment to basic research; it's one of the real drivers of science in this country. But they're not a huge factor in clinical trials. Academia does more basic research than pharma; pharma does more clinical work than academia. Here are some statistics from a reader e-mail:
"As a person who was an NIH staffer (funding clinical trials, no less) and is now on the pharma side (mostly spending on manufacturing development; we will spend more on clinical trials as we get bigger), I have seen both sides.
Most of NIH spending is very far from clinical utility. Last time I checked (and it has been a while), more than 90% of NIH funds went to what most people would consider non-clinical research, e.g., studies of animals and cells, etc. (If the NIH was named by its major function, it would probably be called the National Institutes of Molecular Biology ;-) The reason NIH is able to claim that half of its money goes to 'clinical research' is that any study that involves a human or *human tissues* counts. So a bench study looking at receptors on human renal cells counts as 'clinical research.' The number of studies examining 'whole' humans is in the 5% range.
On the other hand, pharma, as you know, spends a lot of money on research with legal (protecting patent claims), manufacturing (cGMP issues, etc.) and marketing goals that don't necessarily help anyone's health.
Regarding the clinicaltrials.gov numbers, by my reckoning the 8000 NIH studies and the 2400 'industry' studies probably represent about the same investment in *therapeutic* clinical trials. If you break down the NIH trials, about 1800 (22%) are Phase I, 3000 (37%) are Phase II, 1100 (14%) are Phase III, and the rest (2150, 27%) are observational and other. (If you want to check, I did a search within the results for the appropriate phrases and subtracted from the total for the remainder). Figures for industry are 460 (19%) Phase I, 1060 (44%) Phase II, 770 (32%) Phase III, and 133 (5%) other.
In my experience each phase of clinical trials multiplies costs by about 10 times (e.g., Phase I = X; Phase II = 10X, Phase III = 100X), so the clinicaltrials.gov figures imply that the costs of Phase I, II, and III trials funded by industry are over 80% of those funded by NIH (costs are overwhelmingly driven by Phase III trials). And this is despite the close to 100% capture of NIH trials versus the unknown percentage capture of industry trials that you noted in your post."
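The reader's reckoning can be sketched out directly: take the phase-by-phase trial counts quoted above and apply the 1x/10x/100x per-phase cost multipliers. Note that the raw weighted ratio this produces comes out lower than the "over 80%" figure quoted, which suggests the full reckoning also folded in the larger average size of industry Phase III trials and the under-capture of industry trials mentioned in the email, so treat this as an illustration of the method rather than a confirmation of the exact percentage:

```python
# Phase trial counts from the clinicaltrials.gov tallies quoted above
# (observational/"other" studies are left out of the cost weighting).
nih = {"phase1": 1800, "phase2": 3000, "phase3": 1100}
industry = {"phase1": 460, "phase2": 1060, "phase3": 770}

# The reader's rule of thumb: each phase costs roughly 10x the one before.
cost_multiplier = {"phase1": 1, "phase2": 10, "phase3": 100}

def weighted_cost(counts):
    """Sum of (trial count x relative cost) across phases."""
    return sum(counts[p] * cost_multiplier[p] for p in counts)

nih_total = weighted_cost(nih)            # dominated by the Phase III term
industry_total = weighted_cost(industry)  # likewise
print(f"industry weighted cost as a share of NIH's: "
      f"{industry_total / nih_total:.0%}")
```

As the email notes, the Phase III term swamps everything else in both columns, which is why the relative Phase III counts, not the total trial counts, drive the comparison.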
OK, one more on this topic before moving on to other things for a while. The Bedside Matters medblog has a better roundup of the reactions to my post than I could have done myself. And "Encephalon" there also has one of the longer replies I've seen to my initial post, worth reading in full.
I wanted to address a few of the issues that it raises. Encephalon says:
"Dr. Lowe makes his point with the sort of persuasive skill one suspects is borne of practice - I shouldn't be surprised if he has had to make his case to the unbelieving on a very regular basis. And that case is this: that pharmaceutical companies do in fact spend enormous sums of money in developing the basic science breakthroughs first made in academic labs to the point where meaningful therapeutic products (ie, '$800 mil' pills) can be held in the palms of our doctors' hands, ready to be dispensed to the next ailing patient.
So far as that claim goes, I don't think any reasonably informed individual would dispute it. . ."
It tickles me to be called "Doctor" by someone with a medical degree. On the flip side, though, it's a nearly infallible sign of personality problems when a PhD insists on the honorific. And I appreciate the compliment, but it's only fairly recently that I've had to defend this point at all; I didn't even know it was a matter of debate. The thing is, you'd expect that a former editor of the New England Journal of Medicine would be a "reasonably informed individual", wouldn't you? I don't think we can take anything for granted here. . .
He then spends a lot of time on the next point:
"It is a myth, and I would argue a more prevalent one than the myth that Big Pharma simply leaches off government-funded research, that the NIH does little to bring scientific breakthroughs to the bedside (once they have made them at the bench). . .Using arguably one of the best (databases) we've got (the NIH's ClinicalTrials.gov) we get the following figures: of the 15,466 trials currently in the database, 8008 are registered as sponsored by NIH, 380 by 'other federal agency', 4656 by 'University/Organization', and 2422 by Industry. While I am suspicious that the designation 'university/organization' is not wholly accurate, and may represent funding from diverse sources, and while the clinical trials in the registry are by no stretch of the imagination only pharmaceutical studies, the 8388 recent trials sponsored by Federal agencies are no negligible matter. I think Dr. Lowe will agree."
I agree that NIH has a real role in clinical trials, but I don't think it's as large as these figures would make you think. Clinicaltrials.gov, since it's an NIH initiative, is sure to include everything with NIH funding, but there are many industry studies that have never shown up there. (And I share the scepticism about the "University" designation.) When the Grand Clinical Trial Registry finally gets going, in whatever form it takes, we can get a better idea of what's going on. I also think that if we could somehow compare the size and expense of these various trials, the Pharma share would loom larger than the absolute number of trials would indicate.
Encephalon goes on to worry that I'm denigrating basic research: "The impression a lay person would get reading Dr. Lowe's 'How it really works' is that basic science work done by the NIH is really quite trivial. I don't think he meant this. . ."
Believe me, I certainly didn't. Without basic biological studies, there would be nothing for us to get our teeth into in the drug industry. If we had to do them all ourselves, the cost of the drugs we make would be vastly greater than it is now. It's like the joking arguments that chemists and pharmacologists have in industry: "Hey, you guys wouldn't have anything to work on if it weren't for us chemists!" "Well, you'd never know if anything worked if it weren't for us, y'know!" Academia and industry are like that: we need each other.
Here's another example of academia and industry, and how it can be hard to divide out the credit. There's a family of nuclear receptor proteins known as PPARs, a very important (and difficult to unravel) group. The whole field got started years ago, when it was noticed that some compounds had a very particular effect on the livers of rats and mice: they made the cells in them produce a huge number of organelles called peroxisomes.
Eventually, a protein was found that seemed to mediate this effect, and it was called the Peroxisome Proliferator-Activated Receptor, thus PPAR. It was thought that there might be some other similar proteins. At this point, their functions were completely unknown.
Meanwhile, off at a Japanese drug company, a class of compounds (thiazolidinediones) had been found to lower glucose in diabetic animal models. The original plan, if I recall correctly, had been to stitch together a dione compound with a Vitamin E structure, and as it turns out the reasoning behind this idea was faulty in every way. But the Japanese group had hit on a whole series of interesting structures that lowered glucose in a way that had never been seen before. No one had a clue about how they worked, but all sorts of theories were proposed, tested, and discarded.
The activity was unusual enough that many other drug companies jumped into the thiazolidinedione game. It turned out, as various companies sought out patentable chemical space, that the Vitamin-E-like side chain wasn't essential, but the thiazolidinedione head group was a good thing to have. (It's since been superseded.) The Japanese group was in the lead, with a compound that was eventually named troglitazone, but SmithKline Beecham (as it was then) and Eli Lilly weren't far behind, with rosiglitazone and pioglitazone. A number of contenders from other companies fell out of the race for various reasons. The three left standing went all the way into human trials, and no one still had any idea of how they worked.
We're up to the early 1990s now. Off in another part of the scientific world, a number of research groups were digging into PPAR biology. It looked like there were three PPARs, designated alpha, gamma, and delta (known as PPAR beta in Europe.) They all had binding sites that looked as though small molecules in the cell should fit into them, but no one had really established what those might be. All three seemed as if they might be important in pathways dealing with fatty acids, not that that narrows it down very much.
As best I can reconstruct things, in a very short period in the mid-1990s, it became clear that PPAR gamma was a big player in fat cells (adipocytes). Many labs were working on this, but two academic groups that were very much in the thick of things (and still are) were those of Bruce Spiegelman from Harvard and Ron Evans from the Salk Institute. Then a group at Glaxo Wellcome (as it was then), also doing research in the field, found out that the glitazone drugs were actually ligands for PPAR-gamma, and immediately hypothesized that it was the mechanism by which they lowered glucose. From what I've been told, Glaxo's management didn't immediately believe this, but it turned out to be right on the money. Glaxo is still a major player in the PPAR world, turning out a huge volume of both basic and applied research.
All three PPAR-gamma drugs made it to market. So, who gets the credit? It's hard enough to figure out even inside the academic sphere - the two groups I mentioned had plenty of competition here and abroad, and insights came from all over. But (as far as I can tell) none of them were the first to make the connection between PPAR-gamma and diabetes therapy. So does Glaxo get the credit? (They do have a few key patents to show for it all.)
And if we're doling out credit, who's going to line up for blame? As it happened, the very first PPAR-gamma compound to market, troglitazone, showed some unexpected liver toxicity once it found a broader audience. It was eventually pulled from the market in a hail of lawsuits. Rosiglitazone and pioglitazone (Avandia and Actos, by brand) are still out there, having survived the loss of the first compound, but not without a period of suspicion and breath-holding.
Any more troubles to share? Later PPAR drugs have shown all kinds of weird effects, including some massive clinical failures late in human trials. The money that's been made from the two on the market probably hasn't made up yet for all the cash that the industry has spent trying to figure out what's going on, and the story takes on more complexity every year. (Glaxo, for their trouble, has never made a dime off one of their own PPAR compounds.)
It's to the point now that some companies are, it seems, throwing up their hands about the whole field, while others continue to plow ahead. And by now, the number of research papers from academia will make your head hurt. PPARs seem to be involved in everything you can imagine, from diabetes to cancer to wound healing, and who knows what else. The whole thing is going to keep a lot of people busy for a long time yet. And anyone who thinks they can clearly and fairly apportion the credit, the spoils, the blame and the Bronx cheers is dreaming.
My long cri de coeur last week continues to bring in a number of comments, which I appreciate. Matthew Holt of the Health Care Blog asks:
"How much money does the NIH spend on basic research and how much does the pharma business spend on it (and you can include development if you like)? I don't have these numbers but I suspect they are closer to each other than it would appear to a reader of your article, who might think that it's about 90-10 on pharma's side."
Well, I hope that's not how I came across. I'm sure that more basic research goes on in academia, of course. That's what they're funded for, and what they're equipped for. Some basic work goes on in the drug industry, too, but most of our time and effort is spent on applied research. It's confusion about the differences between those two (or an assumption that the basic kind is the only kind that counts) that leads to the whole "NIH-ripoff" idea.
It's easy to get NIH's budget figures, but it's next to impossible to get the drug industry's. One good reason is that companies don't release the numbers, but there's a more fundamental problem. It would be hard to figure out even from inside a given company, with access to all the numbers, because you can easily slip back and forth between working on something that applies only to the drug candidate at hand and working on something that would be of broader use.
Some years ago, several companies (particularly some European ones) had "blue-sky" basic research arms that cranked away more or less independently of what went on in the drug development labs. I can think of Ciba-Geigy (pre-Novartis) and Bayer as examples, and I know that Roche funded a lot of this sort of thing, too. In the US, DuPont's old pharma division had a section doing this kind of thing as well. I'm not sure if anyone does this any more, though. In many cases, the research that went on tended to either be too far from something useful, or so close that it might as well be part of the rest of the company.
So without a separate budget item marked "basic research", what happens is that it gets done here and there, as necessary. I can give a fairly trivial example: at my previous company, I spent a lot of time making amine compounds through a reaction called reductive amination. I used a procedure that had been published in the Journal of Organic Chemistry, a general method to improve these reactions using titanium isopropoxide. It worked well for me, too, giving better yields of reactions that otherwise could be hard to force to completion.
The original paper on it came from a research group at Bristol-Myers Squibb. They had been looking for a way to get some of these recalcitrant aminations to go, and worked this one out. That is a small example of basic research - not on the most exalted scale, but still on a useful one. It's not like BMS had a group that did nothing but search for new chemical reactions, though. They were trying to make specific new compounds, applied research if there ever was some, but they had to invent a better way to do it.
Meanwhile, I needed some branched amines that this reaction wouldn't give me, and there wasn't a good way to make them. I thought about the proposed mechanism of the BMS reaction and realized that it could be modified as well. Adding an organometallic reagent at the end of the process might form a new carbon-carbon bond right where I needed it. I tried it out, and after a few tweaks and variations I got it to work. As far as I could see from searching the chemical literature, no one had ever done this in this way before, and we got a lot of use out of this variation, making a list of compounds that probably went into the low thousands.
When I was messing around with the conditions of my new reaction, trying to get it to work, I was doing it with intermediate compounds from our drug discovery program, and when the reactions produced compounds I submitted them for testing against the Alzheimer's disease target we were working on. Basic research or applied? Even though there are clear differences between the two, taken as classes, the border can be fuzzy. One's blue and one's yellow, but there's green in between.
Tomorrow I'll go over a more important example - it's pretty much basic research all the way, but untangling who figured out what isn't easy. My readers who work in science will be familiar with that problem. . .
One other thing, in response to another comment: I didn't go wild about the NIH argument because I'm trying to prove that drug companies are blameless servants of the public good or something. We're businesses, and we do all kinds of things for all kinds of reasons, which vary from the altruistic to the purely venal. You know, like they do in all other businesses. Nor is it, frankly, the largest or most pressing argument about the drug industry right now.
No, the reason I took off after it is that it's so clearly mistaken. Anyone who seriously holds this view is not, in my opinion, demonstrating any qualification to be taken seriously. (And that goes for former editors of the New England Journal of Medicine, too, a position that otherwise would argue for being taken quite seriously indeed.) The "all-they-do-is-rip-off-academia" argument is so mistaken, and in so many ways, that it calls into question all the other arguments that a person advocating it might make. They are talking about the pharmaceutical industry, seriously and perhaps with great passion, but they do not understand what it does or how it works at the most basic level. Isn't that a bit of a problem? What other defects of knowledge or reasoning are waiting to emerge, if that one has found a home?
So is this the attitude we're up against? Here's a thread on Slashdot on the clinical trial disclosure issue - titled, I note in light of yesterday's post, "Medical Journals Fight Burying of Inconvenient Research". My favorite verb again! The comments range from the insightful to the insipid (for another good reaction to the clinical trial controversy, go here.)
A comment to the original Slashdot item disparages the idea that NIH is the immediate source of all drugs, and recommends reading my site, both of which actions I appreciate. But the first response to that was:
"No, (NIH-funded labs) just do the basic research that results in the drug leads. The companies then do the expensive but scientifically easy trials and rake in all the money (and now it seems, the credit as well)."
Wrong as can be, and in several directions at once. In a comment below, blogger Sebastian Holsclaw urges that we take this kind of talk seriously because it's more widespread than we think. I'm afraid that he might be right. The problem is that many people don't seem to understand what it is that people like me do for a living. I think that there must be plenty who don't even grasp how science works in general. Allow me to go on for a while to explain the process - I'd appreciate any help readers can provide in herding the sceptics over to read it.
Try this: If Lab C discovers that the DooDah kinase (a name whose actual use I expect any day now) is important in the cell cycle, and Lab D then profiles its over-expression in various cancer cell lines, you can expect that drug companies will take a look at it as a target. Now, the first thing we'll do is try to replicate some of the data to see if we believe it. I hope that I'm not going to shock anyone by noting that not all of these literature reports pan out.
But let's assume that they do this time, making DooDah a possible cancer target. What then? If we decide that the heavy lifting has been done by the NIH-funded labs C and D, then what do we have so far? We have a couple of papers in the Journal of Biological Chemistry (or, if the authors are really lucky, Cell) that, put together, say that DooDah kinase is a possible cancer target. How many terminally ill patients will be helped by this, would you say? Perhaps they can read about these interesting in vitro results on their deathbeds?
What will happen from this point? Labs C or D may go on to try to see what else the kinase interacts with and how it might be regulated. What they will not do is try to provide a drug lead, by which I mean a lead compound, a chemical starting point for something that might one day be a drug. That's not the business these labs are in. They're not equipped to do it and they don't know how.
(Note added after original post): This is where the drug industry comes in. We will try to find such a lead and see if we can turn it into a drug. If you believe that all of what follows still belongs to the NIH because they funded the original work on the kinase, then ask yourself this: who funded the work that led to the tools that Labs C and D used? What about Lab B, who refined the way to look at the tumor cell lines for kinase activity and expression? Or Lab A, the folks that discovered DooDah kinase in the first place twenty-five years ago, but didn't know what it could possibly be doing? These things end up scattered across countries and companies. And all of these built on still earlier work, as all the work that comes after what I describe will build on it in turn. That's science, and it's all connected.
Here in a drug company, we will express the kinase protein - and likely as not we'll have to figure out on our own how to produce active enzyme in a reasonably pure form - and we'll screen it against millions of our own compounds in our files. We'll develop the assay for doing that, and as you can imagine, it's usually quite different than what you'd do by hand on the benchtop. Then we'll evaluate the chemical structures that seemed to inhibit the kinase and see what we can make of them.
Sometimes nothing hits. Sometimes a host of unrelated garbage hits. For kinases, these days, neither is usually the case - owing to medicinal chemistry breakthroughs achieved by various drug companies, let me add. So if we get some usable chemical matter, then I and my fellow med-chemists take over, modifying the initial lead to make it more potent, to increase its blood levels and plasma half-life when dosed in animal models, to optimize its clearance (metabolism by the liver, etc.), and make it selective for only the target (or targets) we want it to hit. Often there are toxic effects for reasons we don't understand, so we have to feel our way out of those with new structures, while preserving all the other good qualities. It would help a great deal if the compounds exist in a form that's suitable for making into a tablet, and if they're stable to heat, air, and light. They need to be something that can be produced by the ton, if need be. And at the same time, these all have to be structures that no one else has ever described in the history of organic chemistry. To put it very delicately, not all of these goals are necessarily compatible.
I would love to be told how any of this comes from the NIH.
Now the real work begins. If we manage to produce a compound that does everything we want, which is something we can only be sure of after trying it in every model of the disease that you trust, then we put it into two-week toxicity testing in animals. Then we test in more (and larger) animals. Then we dose them for about three months. Large whopping batches of the compound have to be prepared for all this, and every one of them has to be exactly the same, which is no small feat. If we still haven't found toxicity problems, which is a decision based on gross observations, blood chemistry, and careful microscopic examination of every tissue we can think of, then the compound gets considered for human trials. We're a year or two past the time we've picked the compound by now, depending on how difficult the synthesis was and how tricky the animal work turned out to be. No sign of the NIH.
The regulatory filing for an Investigational New Drug needs to be seen to be appreciated. It's nothing compared to the final filing (NDA) for approval to market (we're still years and years away from that at this point), but it's substantial. The clinical trials start, cautiously, in normal volunteers at low doses, just to see if the blood levels of the compound are what we think, and to make sure that there's no crazy effect that only shows up in humans. Then we move up in dose, bit by bit, hoping that nothing really bad happens. If we make it through that, then it's time to spend some real time and money in Phase II.
Sick patients now take the drug, in small groups at first, then larger ones. Designing a study like this is not easy, because you want to be damn sure that you're going to be able to answer the question you set out to. (And you'd better be asking the right question, too!) Rounding up the patients isn't trivial, either - at the moment, for example, there are not enough breast cancer patients in the entire country to fill out all the clinical trials for the cancer drugs in development to treat it. Phase II goes on for years.
If we make it through that, then we go on to Phase III: much, much larger trials under much more real-world conditions (different kinds of patients who may be undergoing other therapy, etc.) The amount of money spent here outclasses everything that came before. You can lose a few years here and never feel them go by - the money that you're spending, though, you can feel. And then, finally, there's regulatory approval and its truckload of paperwork and months/years of further wrangling and waiting. The NIH does not assist us here, either.
None of this is the province of academic labs. None of it is easy, none of it is obvious, none of it is trivial, and not one bit of it comes cheap. We're spending our own money on the whole thing, betting that we can make it through. And if the idea doesn't work? If the drug dies in Phase II, or, God help us all, in Phase III? What do we do? We eat the expense, is what we do. That's our cost of doing business. We do not bill the NIH for our time.
Yochai Benkler of the Yale Law School has an interesting policy article in a recent issue of Science. It's on the "Problems of Patents", and he's wondering about the application of open-source methods to scientific research. He has two proposals, one of which I'll talk about today.
In some sort of ideal world (which for some folks also means Back In The Good Old Days of (X) Years Ago), science would be pretty much open-source already. Everyone would be able to find out what everyone else was working on, and comment on it or contribute to it as they saw fit. In chemistry and biology, the closest things we have now, as Benkler notes, are things like the Public Library of Science (open-source publishing) and the genomics tool Ensembl. Moving over to physics and math, you have the ArXiv preprint server, which is further down this path than anything that exists in this end of the world.
Note, of course, that these are all academic projects. Benkler points out that university research departments, for all the fuss about Bayh-Dole patenting, still get the huge majority of their money from granting agencies. He proposes, then, that universities adopt some sort of Open Research License for their technologies, which would let a university use and sublicense them (with no exclusivity) for research and education. (Commercial use would be another thing entirely.) This would take us back, in a way, to the environment of the "research exemption" that was widely thought to be part of patent law until recently (a subject that I keep intending to write about, but am always turned away from by pounding headaches.)
As Benkler correctly notes, though, this would mean that universities would lose their chance for the big payoff should they discover some sort of key research tool. A good example of this would be the Cohen/Boyer recombinant DNA patent, licensed out 467 times by Stanford for hundreds of millions of dollars. And an example of a failed attempt to go for the golden gusto would be the University of Rochester's reach for a chunk of the revenues from COX-2 inhibitors, despite never having made one. (That's a slightly unfair summary of the case, I know, but not as far from reality as Rochester would wish it to be.)
That's another one I should talk about in detail some time, because the decision didn't rule out future claims of that sort - it just said that you have to be slicker about it than the University of Rochester was. As long as there's a chance to hit the winning patent lottery ticket, it's going to be hard to persuade universities to forgo their chance at it. Benkler's take is that the offsetting gains for universities, under the Open Research License, would be "reduced research impediments and improved public perception of universities as public interest organizations, not private businesses." To compensate them for the loss of the chance at the big payoff, he suggests "minor increases in public funding of university science."
Laudable. But will that really do it? As far as I can tell, most universities are pretty convinced already that they're just about the finest public interest organizations going. I'm not sure they feel much need for better publicity, rightly or wrongly. And Benkler's right that a relatively small increase in funding would give universities, on average, what they would make, on average, from chasing patent licensing money. But show me a university that's willing to admit that it's just "average."
The problem gets even tougher as you get to the research departments that really aren't average, because they're simultaneously the ones with technologies that would be most useful to the broader research community and the ones with the best chance of hitting on something big. I'll be surprised - pleasantly, but still very surprised - if the big heavy research lifters of the world agree to any such thing.
It's been a while since I returned to this topic. Many differences remain for me to talk about, but I thought that it was time to address the biggest one, which is psychological. Some of you probably thought that the biggest difference was money. Can't ignore that one - it probably contributes to some of the effects I'll be talking about. But there's a separate mental component to graduate school that never really recurs, which should be good news to my readers who are working on their degrees.
Some of this is due to age, naturally enough. The research cohort out in industry ranges from fresh-out-of-school to greybeards in their fifties and sixties. (I can say that, since I'm in my early forties, the color changes in my own short beard notwithstanding.) Everyone in graduate school is a transient of one sort or another, usually someone whose life is still just getting going. But in the workplace, most people are more settled in their lives and careers. There are still some unsettling waves that move through industry, mergers and layoffs and reorganizations. But people respond to them differently than they would in their 20s - often better, sometimes worse, but differently.
And not all your co-workers in grad school are actually stable individuals, either. Some of these people wash out of the field for very good reasons, and you don't see as many of the outer fringes later on in your career. It's not that we don't have some odd people in the industrial labs, believe me. But the variance isn't as high as it is in school. Some of those folks are off by so many standard deviations that they fall right off the edge of the table.
Another factor is something I've already spoken about, the way that most graduate careers come down to one make-or-break research project. The only industrial equivalents are in the most grad-school atmospheric edge of the field, small startup companies that have one shot to make it with an important project. But in most companies, no matter how big a project gets, there's always another one coming along. Clinical candidate went down in flames? Terrible news, but you're working on another one by then. There's a flow to the research environment that gives things more stability.
The finish-the-project-or-die environment of graduate study leads to the well-known working hours in many departments. Those will derange you after a while: days, nights, weekends, holidays, Saturday nights and Sunday mornings. I worked 'em all myself when I was trying to finish my PhD, but I don't now. If a project is very interesting or important, I'll stay late, or once in a while work during a weekend. But otherwise, I arrange my work so that I go home at night. For one thing, I have a wife and two small children who'd much rather have me there, but even when I was single I found many more things to do than work grad-school hours. It took me some months after defending my dissertation before I could decompress, but I did. Having a life outside the lab is valuable, but it's a net that graduate students often have to work without.
But beyond all these, there's one great big reason for why grad school feels so strange in retrospect, and I've saved it for last: your research advisor. There's no other time when you're so dependent on one person's opinion of your work. (At least, there had better not be!) If your advisor is competent and even-tempered, your graduate studies are going to be a lot smoother. If you pick one who turns out to have some psychological sinkholes, though, then you're in for a rough ride and there's not much that can be done about it. Everyone has a fund of horror stories and cautionary tales, and there's a reason for that: there are too damn many of these people around.
Naturally, there are bad bosses in the industrial world. But, for the most part, they don't get quite as crazy as the academic ones can (there's that variance at work again). And they generally aren't the only thing running (or ruining) your life, either. There's the much-maligned HR department, which can in fact help bail you out if things get really bad. Moving from group to group is a lot easier at most companies than it can ever be in graduate school, and it's not like you lose time off the big ticking clock when you do it.
I can see in retrospect that I was a lot harder to get along with when I was in grad school. I responded to the pressure by getting more ornery, and I think that many other personalities deformed similarly. When I've met up with my fellow grad students in the years since, we seem to be different people, and with good reason. It isn't just the years.
The April issue of Drug Discovery Today has an intriguing interview (PDF file) with Stuart Schreiber of Harvard. Schreiber is an only partially human presence in the field, as a listing of his academic appointments will make clear: chairman, with an endowed professorship, of the Department of Chemistry at Harvard, investigator at the Howard Hughes Medical Institute, director of the NIH's Initiative for Chemical Genetics, faculty member of the joint Harvard/MIT Broad Institute (a genomic medicine effort), affiliate of Harvard's Department of Molecular and Cellular Biology and Harvard Medical School's Department of Cell Biology, member of Harvard's graduate program in Biophysics and the medical school's Immunology Department, a player in the early years of Vertex, founder of ARIAD Pharmaceuticals and Infinity Pharmaceuticals, and founding editor of Chemistry and Biology. (What other name would the journal have?)
Schreiber is extremely accomplished and intelligent, but he can also be quite hard to take. A powerful pointer to this tendency comes when the interviewer asks him about who's been his greatest inspiration - he leads off with Muhammad Ali and Neal Cassady, and for better or worse, that's just about the size of it. Mix those two together, give the resulting hybrid a burning interest in chemical biology and a chair at Harvard, and there you are.
I've not met him personally, but I've heard him lecture more than once. The first time I saw him, he was speaking on one of his big stories from past years, the immunomodulator FK-506. He hit the afterburners during the first slide and ascended into the stratosphere, leaving us ground-based observers with only a persistent vapor trail. Slide after slide came up, densely packed with years of data in a punishing, torrential rush - after a while, people in the audience were clutching their heads as their pens clattered to the floor. Some of my readers will, I think, have had similar Schreiberian experiences.
And the guy has no problem with saying just what's on his mind, although if I had those faculty positions, I wouldn't be feeling too many restraints myself. It's a mixed blessing. Some of what he's got to say is very sensible, even if no one else feels like saying it in so many words, but he can also come across as divorced from reality and impossibly arrogant. I would have to think that a post-doctoral position with him would be a rather stimulating experience, which would doubtless take place during days, nights, weekends, major and minor holidays, and probably during periodic flashback dreams in the years to come.
The interview starts out by asking Schreiber what he thinks of the new NIH Roadmap initiative. He sounds the alarm, correctly, about one thing it seems to emphasize:
". . .what is perhaps surprising to some people is how much emphasis the NIH has placed on small molecules and screening in an academic environment. A meeting with some senior pharma industry executives made me realize that there are many people who are unhappy with this activity. When I went back and read what is being proposed, some of the language suggests that the plan is to fund early drug discovery and development in an academic environment.
Yet some of the language also suggests that the Roadmap is about a parallel process of using chemistry and small-molecule synthesis and screening to interrogate biology. In this model, a parallel set of techniques is involved but the overall goals are very different. I am equally concerned as the pharma industry if the Roadmap were to place too much emphasis on the first model, because I think that a focus on drug discovery in academia would represent a missed opportunity. Sending the message to groups of industry-naïve biologists and chemists that they should now try to discover drugs in their labs could be problematic for a variety of reasons."
He's right on target there, I have to say. And what does this do to the arguments some people make that just about all the research the drug industry does is ripped off from NIH-funded work? (I've mentioned this topic before; as we get the archives working again I'll group those posts together with this one.) Schreiber goes on to point out that drug development works completely differently from academic research, and that mixing the two might well end up compromising the strengths of each.
Academia should do what it does best: exploration, discovering new islands and continents of knowledge that no one even knew were there. We in industry can do some of that, but our strong suit is finding concrete uses for such discoveries. We're good at doing the detail work of developing them into something that works feasibly, reproducibly, safely, and (dare I mention) profitably. Getting all those to happen at the same time is no mean feat, as any engineer or applied-research type will tell you at length.
I'll have more to blog on the Schreiber interview; not everything he says in it is quite so sensible. But this point was worth some craziness. I'd like to take some of the folks who try to tell me that the whole pharma industry is some sort of profit-seeking leech on the NIH-funded world and lock them in a room with the guy and a couple of projectors. As long as I could be around as his audience staggered out, groping for painkillers and rubbing their eyes. . .
One of the main things I noticed when I joined the pharmaceutical industry (other than the way my black robe itched and the way the rooster blood stained my shoes, of course) was how quickly one moved from project to project. That's in contrast to most chemistry grad-school experiences, where you end up on your Big PhD Project, and you stay on that sucker until you finish it (or until it finishes you.)
My B.PhDP. was a natural product synthesis, and I had plenty of time to become sick of it. My project seemed to be rather tired of me, too, judging by the way it bucked like a mad horse at crucial stages. Month after month it ground on, and the time stretched into years. And I was still making starting material, grinding it out just the way I had two years before, the same reactions to make the same intermediates, which maybe I could get to fly in the right direction this time. Or maybe not. . .time to make another bucket of starting material, back to the well we go. . .
Contrast drug discovery: reaction not working? Do another one. There's always another product you can be making - maybe this one will be good. Project not going well? Toxicity, formulation problems? Everyone will give it the hearty try, but after a while, everyone will join in to give it the hearty heave-ho, because something else will come along that's a better use of the time. Time's money.
It keeps you on your toes. You have to learn the behavior of completely new classes of molecules each time - no telling what they'll be like. You dig through the literature, try some reactions, and get your bearings quickly, because you don't have weeks or months to become familiar with things. The important thing is to get some chemistry going. If it doesn't make the product you expected, then maybe it'll make something else interesting. Send that in, too. You never know.
A reader's e-mail got me thinking about this topic. It's worth a number of posts, as you'd guess, since there are many substantial differences. Some are merely of degree (funding!), while others are of kind.
But the funding makes for larger changes than you'd think, so I'll get that one out of the way first. When I was in graduate school, my advisor's research group was actually pretty well-heeled. We had substantial grant money, and none of us had to be teaching assistants past our first year. But even so, we had to watch the expenditures. For example, we didn't order dry solvents, in their individual syringable bottles, from the chemical companies because those were too expensive. Instead, we had our solvent stills, which (to be fair) produced extremely good quality reagents at the price of the occasional fire.
Grad student labor is so cheap it's nearly free, so making expensive reagents was more cost-effective than buying them. (At least, it was if you weren't the person making them.) I had a starting material that's produced from pyrolysis of corn starch (levoglucosan, it's called, and I'd be happy to hear from anyone who's worked with the stuff.) At the time, it sold for $27 per 100 milligrams, and since I used it in fifty-gram batches, that was out of our price range for sure.
So I pyrolyzed away, producing tarry sludge that had to be laboriously cleaned up over about a week to give something that would crystallize. (I saved the first small batch that did that for me back in the summer of 1984, and it's sitting in the same vial right next to me as I write. The label looks rather distressingly yellowed around the edges, I have to say.) A kilo of corn starch would net you about fifty grams of starting material, if everything worked perfectly. And if it didn't, well, I just started burning up another batch, because it's not like I had anything to do that Sunday night, anyway.
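The economics there are worth spelling out. A rough back-of-the-envelope sketch, using only the figures mentioned above (catalog price of $27 per 100 milligrams, fifty-gram batches), shows why buying the stuff was never an option:

```python
# Back-of-the-envelope cost of buying levoglucosan at catalog price,
# using the figures from the post above.
price_per_gram = 27.00 * 10   # $27 per 100 mg works out to $270 per gram
batch_grams = 50              # the synthesis consumed it in 50 g batches

catalog_cost = price_per_gram * batch_grams
print(f"Buying one 50 g batch: ${catalog_cost:,.0f}")  # $13,500

# Versus pyrolysis: one kilo of corn starch netted ~50 g at best,
# paid for with a week of nearly-free graduate student labor.
```

At five figures per batch, the grad-student pyrolysis route wins every time, at least on the balance sheet of whoever isn't doing the pyrolyzing.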
When I got my first industrial job, it took me a while to get all this out of my system. I needed an expensive iron complex at one point, about six months into my work, and sat down to order the things I needed to make it. My boss came by and asked what I was up to, and when I told him, asked me how much the reagent itself would cost. "About 900 dollars", I told him, whereupon he told me to forget it and just order the darn stuff. He pointed out that the company would spend a good part of that price just on my salary in the time it would take me to make it, and he was right, even at 1989 rates.
So we throw the money around, by most academic standards. But there can be too much of a good thing. There's a famous research institute in Europe, which I'm not quite going to name, that was famously well-funded for many years. They had a very large, very steady stream of income, and it bought the finest facilities anyone could want. Year after year, only the best. And what was discovered there, in the palatial labs? Well, now and then something would emerge. But nothing particularly startling, frankly - and from some of the labs, nothing much at all. You'd have to have a generous and forgiving spirit to think that the results justified the expenditure. There are other examples, over which for now I will draw the veil of discretion.
Greg Hlatky over at A Dog's Life is right on target in his post of Tuesday the 24th. And that's not just because he said that my posts always make him think - of course, he could always be thinking "What's with this maniac, anyway?"
No, he's completely correct about the uses of time and money in academia versus industry. He points out that:
Industry and academia each have major constraints. At colleges and universities, it's money. Money is always in short supply and grants have to be used to cover the administration's greed in charging overhead, tuition and stipend for the students, purchase of laboratory chemicals and equipment, and so on. The money never seems enough and professors are always rattling their begging cups with funding agencies to continue their research.
What graduate programs have lots of is time and people. Research groups have hordes of post-docs and graduate students who can be kept working 16 hours a day, seven days a week, since graduate school is the last bastion of feudalism. The product of these two factors is a maniacal stinginess about chemicals and equipment - acetone and deuterated solvents are recycled, broken glassware is patched up over and over, syntheses start from calcium carbide and water - combined with a total lack of concern as to whether these rigors are time-efficient.
Oh, yeah. And it gets perpetuated as well by the feeling that if you're in the lab all day and all night, you must be productive - no matter how worthless and time-wasting the stuff you're doing. I've seen a number of people fall into that trap; I've fallen into it myself.
For a good example of the attitude Greg's talking about, see the recent long article by K. C. Nicolaou in Angewandte Chemie. It's an interesting synthetic story, that's for sure (Nicolaou and his group don't work on any boring molecules.) But it's marred by mentions of how this reaction was done at 2 AM, and how this sample was obtained on Christmas Eve, and how when I walked into the lab at 6 AM on Sunday, my people rushed up with the latest spectrum. . .there's just no need for this sort of thing. Of course, Nicolaou's people work hard - they couldn't make the things they make, as quickly as they make them, without working hard.
I recall during my first months in industry when it finally dawned on me that it was a lot better idea to order expensive reagents rather than make them, considering what I got paid and what delays would cost the projects I worked on. A liberating feeling, I can tell you. I've never looked back. Since then, I can spend a departmental budget with the best of 'em.