Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
A query from a reader prompts me to ask this question, in preparation for a rather long post in the near future. What do you think is the most worthwhile new pharmaceutical brought to market since 1990? That's an arbitrary cutoff, but twenty years is a reasonable sample size. And I'll let everyone define "worthwhile" as they see fit - improvement over existing drugs, opening new therapeutic areas, cost-effectiveness, what have you. Just be sure to make your case, briefly, when you nominate a candidate. Let's see, first off, if it's a topic that can be agreed on at all.
Past results, they tell you, are no guarantee of future performance. Sanofi-Aventis is ready to tell you all about that after the results of a Phase III trial of their recently acquired oncology drug, iniparib (BSI-201). This had shown very strong results in Phase II against "triple-negative" breast cancer, but it appears to have missed two survival endpoints in a larger trial. Sanofi bought BiPar, the company that had been developing the drug, a little less than two years ago.
Iniparib's a small molecule indeed - small enough that its systematic name can be immediately parsed by any sophomore chemistry student. It's 4-iodo-3-nitrobenzamide; it's the sort of thing you can order out of a catalog. But it's also an inhibitor of poly(ADP-ribose) polymerase 1 (PARP1), and it's the first compound of that class to get this far in the clinic. PARP1 is part of a DNA repair pathway, although it's not on the front line. That would be homologous recombination, which is the pathway that needs the well-known BRCA proteins to function. The idea has been that since so many aggressive breast cancers are deficient in BRCA, they'd be especially sensitive to something that targeted PARP as well - they should accumulate so many DNA breaks that they'd be unable to replicate.
That's a perfectly reasonable theory. But it doesn't seem to have yielded perfectly reasonable results in this case. Problem is, PARP1 has a lot of functions in the cell, and inhibiting them all at once may not be such a good idea. One possibility is that effects on the Akt pathway might boomerang and reduce the effectiveness of therapy.
More broadly, this is yet another illustration of the perils of Phase II data. And it does make a person think about the idea of tightening up the endpoints of such trials even more. Problem is, you often don't get good survival numbers until Phase III, anyway, by which time you've spent the money. Like Sanofi-Aventis is spending it now. Let's hope that one of the other indications for the drug works out better.
Update: here's a rundown on competition in this field. The next round of clinical data will be quite interesting. . .
One of the authors (Mostafa Fekry) of the paper mentioned in my last post is at Cairo University. Which means that things must be rather uncertain for him right now, as they are for everyone in Egypt.
Readers will recall the mentions here of the 2009 unrest in Iran (behind-the-scenes note: my wife is Iranian), and this seems to have moved rapidly to an even more extreme stage. I have to say, I don't mind seeing autocrats and dictators (and their security forces) chased through the streets. I do wonder, though, what might replace them (which speculation seems to be helping tank the stock market today). Let's hope for the best.
I advised readers during the most recent Iran unrest (there will be more, I'm sure) to pitch in by helping to run Tor relays. This time, though, since the Egyptian government seems to have pulled the internet plug completely out of the wall, in what must (economically and socially) be a shower of plaster fragments, that may not do as much good. But events are young.
Here's the first response in the chemical literature to the arsenic-in-DNA controversy, from three authors in ACS Chemical Biology. They detail the argument, familiar to readers of the comment section here, that arsenate esters just would not be expected to have the hydrolytic stability needed for arseno-DNA to function usefully.
How far off is it? By, well, about 13 (make that 17) orders of magnitude, which is much worse than I'd thought. As the authors put it, "Overcoming such dramatic kinetic instability in its genetic material would present serious challenges to Halomonadacea GFAJ-1." Indeed it would.
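To get a rough feel for what a gap like that means, here's a back-of-the-envelope sketch. The half-lives below are illustrative order-of-magnitude figures of my own, not numbers from the ACS Chemical Biology paper: uncatalyzed hydrolysis of a DNA phosphodiester bond is often estimated to take tens of millions of years, while an analogous arsenate diester in water might not survive a second.

```python
from math import log10

# Illustrative, assumed figures (not from the paper):
SECONDS_PER_YEAR = 3.156e7
T_HALF_PHOSPHATE_S = 3e7 * SECONDS_PER_YEAR  # ~30 million years
T_HALF_ARSENATE_S = 0.06                     # well under a second

def stability_gap(t_half_stable_s, t_half_labile_s):
    """Orders of magnitude separating the two hydrolysis half-lives."""
    return log10(t_half_stable_s / t_half_labile_s)

gap = stability_gap(T_HALF_PHOSPHATE_S, T_HALF_ARSENATE_S)
print(round(gap))  # roughly 16 - in the ballpark of the paper's figure
```

With numbers like these, "dramatic kinetic instability" starts to look like an understatement: the backbone would fall apart long before anything could be copied from it.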
A reader sent this along to me, and I figured that many folks who are in (or have been through) academia can relate. This is the Hui Zheng lab at Baylor, with their Gaga-esque production of. . .Bad Project:
Congratulations to them. It's a good thing that there was no YouTube back when I was in that position, or I might have gotten myself in a lot of trouble. . .
I wish that more journals did this! Environmental Microbiology, which I have never looked at before, has published its favorite reviewer comments from the year just passed. They're not tied to the papers that generated them, naturally, but then, many of these manuscripts didn't quite make the cut:
"The biggest problem with this manuscript, which has nearly sucked the will to live out of me, is its terrible writing style."
"I usually try to be nice, but this paper has got to be one of the worst I have read in a long time."
"I suppose that I should be happy that I don't have to spend a lot of time reviewing this dreadful paper, however, I am depressed that people are performing such bad science."
"It is sad to see so much enthusiasm and effort go into analyzing a dataset that is just not big enough."
There are plenty more, including many from people who are actually happy about what they had to read (and yes, there are some). Check 'em out!
Well, the snow is now well up over my knees here at headquarters, which accounts for the lack of posting today. There's only so much the snowplow guy can do out there; it's hard to find places to shove the stuff to. I've been out moving piles of it around, off the driveway and off the roof, which leaves little time for Science. More tomorrow!
Um, I mean more science. Not more snow. At least, I sure hope not.
Abbott is announcing 1,900 layoffs, about 2% of the company's work force. That's on top of the 3,000 that had been announced last fall, and this is not getting 2011 off to a very good start, is it? I'm told by primary sources that there have been cuts in research as a part of this latest round, but I don't have any firm numbers yet. . .
So, me-too drugs, knock-offs, copycats: what say you? If you're a critic of the industry, you generally say quite a bit, and it's about lack of innovation, seeking easy profits and playing it safe, putting marketing over science, and so on. But what if that's not true?
We've talked about this here before, but now we can put some numbers on the topic, thanks to this article in Nature Reviews Drug Discovery. The authors have covered a lot of ground, looking at first-in-class drugs approved from the early 1960s up to 2003, with later entrants in the same areas accepted up to 2007. There are 94 of those different therapeutic classes over that period, with a total of 287 follow-on drugs coming after the pioneer compounds in each. So there you have it - case closed, eh?
Not so fast. Look at the timing. For one thing, over that nearly 50-year period, the time it takes for a second entry into a therapeutic area has declined steeply. Back in the 1970s, it took over nine years (on average) for another drug to come in and compete, but that's gone down to 1.7 years. (The same sort of speed-up has taken place for third and later entries as well). Here's what that implies:
Implicit in some of the criticism of the development of me-too drugs has been the assumption that their development occurs following the demonstration of clinical and commercial success by the first-in-class drug. However, given assessments of the length of time that is typically required for drug development — estimated at between 10 to 15 years — the data on the timing of entry of follow-on drugs in a particular class, in this study and in our previous study, suggest that much of the development of what turn out to be follow-on drugs must occur before the approval of the breakthrough drug.
That it does, and the overlap has been increasing. I've been in the drug industry since 1989, and for every drug class that's been introduced during my career, at least one of the eventual follow-on drugs has already been synthesized before the first one's been approved by the FDA. In fact, since the early 1990s, it's been the case 90% of the time that a second drug has already filed to go into clinical trials before the first one has been approved, and 64% of the time another compound has, in fact, already started Phase III testing. Patent filings tell the story even more graphically, as is often the case in this industry. For new drug classes approved since the 1970s, 90% have had at least one of the eventual follow-on drugs showing its first worldwide patent filing before the first-in-class compound was approved.
So the mental picture you'd get from some quarters, of drug companies sitting around and thinking "Hmmm. . .that's a big seller. Let's hang a methyl off it now that those guys have done the work and rake in the cash" is. . .inaccurate. As this paper shows (and as has been the case in my own experience), what happens is that a new therapeutic idea becomes possible or plausible, and everyone takes off at roughly the same time. At most, the later entrants jump in when they've heard that Company X is working in the same area, but that's a long time before Company X's drug (or anyone's) has shown that it's going to really work.
If you wait that long, you'd be better off waiting even longer to see what shortcomings the first drug has out in the real marketplace, and seeing if you can overcome them. Otherwise, you're too late to go in blind (like the first wave does). And blind it is - I can't count the number of times I've been working on a project where we know that some other company is in the same area, and wondering just how good their compound is versus ours. If you know what the structure is (and you don't always), then you'll make it yourself and check your lead structure out head-to-head in all the preclinical models you care about. But when it comes to the clinical trials, well, you just have to hold your breath and cross your fingers.
I'll let the authors sum things up:
Overall, these results indicate that new drug development is better characterized as a race to market among drugs in a new therapeutic class, rather than a lower risk imitation of a proven breakthrough. . .a race in which several firms pursue investigational drugs with similar chemical structures or with the same mechanism of action before any drug in the class obtains regulatory marketing approval. So, the distinctions that are often drawn between the relative innovative value of the development of the first-in-class and the me-too drugs in the same class may be misguided. . .
Several people have asked me about this recent press conference, where two Italian researchers (Andrea Rossi and Sergio Focardi) say that they have demonstrated anomalous nuclear reactions with nickel and copper, on a scale sufficient to produce electrical power. (To be technical, it's probably not fusion per se, but is it anything, and if so, what?)
I hope that they're right, naturally. But there are a lot of things to wonder about. They chose to announce this at a press conference, and to "publish" in a journal that actually doesn't exist. Rossi himself seems to have had some criminal problems with the Italian authorities in the past. All this does not inspire confidence (says the blogger in a scrupulously neutral tone of voice). And this whole area is absolutely saturated with cranks, sharp operators, self-deceivers, paranoids, and loose cannons of every description. I continue to think that these phenomena (if there are phenomena there at all) are worthy of study, but man, the signal-to-noise ratio in this field just could not be worse. The legitimate scientists working in it (and there are some) have my sympathy.
For what it's worth, this latest work seems to follow up on some earlier reports from another Italian physicist, Francesco Piantelli. That link, a blog written by a skeptical enthusiast, will probably tell you more than you want to know about the story, and a look through its other posts will tell you plenty about the state of the whole field. I'm going to take the same course of action that I have with all purported new energy breakthroughs in the last twenty years: wish the participants good luck, hope that they've actually found something worthwhile, and sit back to watch. If anyone does make a breakthrough, it's going to be abundantly clear. If, on the other hand, the people involved are still flopping around and issuing press releases year after year, then they're probably still having to pay their own electric bills.
Well, no sooner do I speculate about whether Luc Montagnier has lost it than he makes headlines with a "water memory" story about teleporting DNA. There are, of course, umpteen reasons for this not to be a real result. We'll start with contamination of vials, which in a system like PCR can be disastrous, and work from there. The other major problem I have with this is one of the major problems I have with homeopathy: if incredibly small dilutions of things have such an effect, then why aren't we seeing it happen all the time? There are tiny amounts of DNA everywhere: how come all our experiments aren't turning into fuzzy blurs of results from all the small but oh-so-powerful fragments and traces in every sample?
Well, Montagnier himself says that he thinks that this experiment will be replicated by others, so I'll hold my fire until that's tried out. Until then, I note that this experiment has apparently made Deepak Chopra's day. It's hard for me to imagine that anything that has inspired such a fuzzy-brained column from such a fuzzy-brained man could lead to any good. But perhaps we'll all be surprised.
Here's a topic that's come up here before: for a new cancer drug, how much benefit is worthwhile? As it stands, we approve things when they show a statistically meaningful difference versus standard of care (with consideration of toxicology and side effects). But should our standards be higher?
That's what this paper in the Journal of the National Cancer Institute is proposing. The authors look at a number of recent Phase III trials for metastatic solid tumors. It's a tricky business:
When designing a randomized phase III clinical trial, the investigators must specify in the protocol the difference (δ) in the primary endpoint between experimental and control groups that they aim to detect or exclude (24). The number of patients to be recruited and the duration of the study will depend on the value of δ; increasing the sample size will allow the detection or exclusion of smaller values of δ. Ideally, trials should be designed such that δ represents the minimum clinically important difference, taking into account the tolerability and toxicity of the new treatment, that would persuade oncologists to adopt the new treatment in place of the standard treatment. Of course, the opinions of oncologists as to what constitutes a minimal important value of δ will vary, but a reasonable consensus can be reached by seeking the opinions of oncologists who manage a given type of cancer. For example, an increase in median survival by less than 1 month for patients with advanced-stage cancer would not be regarded by most as clinically important, unless the new agent had less toxicity than standard treatment, whereas an improvement of median survival by greater than 3 months for a drug that was reasonably well tolerated would usually be accepted as clinically important.
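To see why the choice of δ drives trial size so strongly, here's a quick sketch using Schoenfeld's standard approximation for survival trials (the function name and the round example numbers are mine, not from the paper):

```python
from math import log
from statistics import NormalDist

def required_events(hr, alpha=0.05, power=0.80):
    """Schoenfeld's approximation: number of events (deaths) needed in a
    1:1 randomized survival trial to detect a hazard ratio of `hr`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_power = NormalDist().inv_cdf(power)
    return 4 * (z_alpha + z_power) ** 2 / log(hr) ** 2

# A "clinically important" HR of 0.75 versus a marginal HR of 0.90:
print(round(required_events(0.75)))  # ~380 events
print(round(required_events(0.90)))  # ~2830 events
```

Shrinking the difference you're trying to detect blows up the required trial roughly with the square of the effect size, which is why trials powered for marginal benefits get so large and so expensive.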
And the problem is, given the costs of some of these drugs versus their benefits, you run the risk of, finally, paying too much for too little. I know that people say that you can't put a cost on a human life, but that's probably not true, when you're talking about an entire economy. As the article points out, the rough estimate is that the developed world can support expenditures of up to roughly US $100,000 per year of life gained, but past that, we're into arguable territory. (If someone wants to spend more out of their own pocket, that's another matter, naturally, but at these levels, we're usually talking public and private insurance).
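That threshold logic is just an incremental cost-effectiveness calculation. Here's a minimal sketch, with entirely hypothetical cost and survival numbers of my own:

```python
def cost_per_life_year(cost_new, cost_std, years_new, years_std):
    """Incremental cost-effectiveness ratio (ICER):
    extra dollars spent per extra year of life gained."""
    return (cost_new - cost_std) / (years_new - years_std)

# Hypothetical: a new regimen costs $40,000 more and adds one month.
icer = cost_per_life_year(60_000, 20_000, 1 / 12, 0)
print(f"${icer:,.0f} per life-year")  # $480,000 - well past that ~$100,000 mark
```

A month of benefit at that price point fails the rough test by nearly a factor of five, which is exactly the kind of arithmetic driving this debate.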
The benefits can indeed be marginal, and you have to look at the statistics carefully so as not to be misled:
. . .several trials showed a statistically significant difference in a major outcome measure between the experimental and control groups, but the difference in outcome was of lower magnitude (eg, hazard ratio was closer to one) than that specified in the protocol. For example, the clinical trial that led to approval of erlotinib for treatment of pancreatic cancer was designed to detect a relative risk reduction of 25% (HR ≤ 0.75), but the best estimate of hazard ratio from the trial showed a relative risk reduction of 18% (HR = 0.82, 95% confidence interval = 0.69 to 0.99). The difference was statistically significant (P = .038), but the median survival differed by only 10 days.
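One reason a statistically significant hazard ratio can coexist with a tiny median gain is that the HR is a relative measure. Even under the friendliest idealization - exponential survival curves, where the median scales as 1/HR - an HR of 0.82 on a short control median buys very little, and the actual curves in that trial delivered less still. A sketch (the ~5.9-month control median is an assumed round figure, not taken from the trial report):

```python
def median_gain_months(control_median, hr):
    """Absolute gain in median survival implied by hazard ratio `hr`,
    assuming exponential (constant-hazard) survival in both arms."""
    return control_median / hr - control_median

gain = median_gain_months(5.9, 0.82)
print(round(gain * 30.4))  # ~39 days even in this idealized model
```

Real survival curves don't have to obey that model, of course - which is how an 18% relative risk reduction can shrink to a 10-day difference in observed medians.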
What happens is that the trials are (understandably enough) designed to detect the minimum difference that regulatory authorities are likely to find convincing enough for approval of the drug. And the FDA has generally set the bar at "anything that's statistically significant for overall survival". These authors (and others) would like to see that raised. They're calling for trials not to go for a statistically significant P value, so much as to show some sort of meaningful clinical benefit - because it's become clear that you can have the first without really achieving the second.
I think that might be a good idea, whether or not you buy into that cost-per-year-of-life figure. At this point, I think it's fair to say that we can come up with drugs that provide some statistical measure of efficacy, given enough effort in the clinic, for many kinds of cancer (although certainly not all of them). But how many add-a-month-maybe therapies do we need? Not everyone's convinced, though:
Wyndham Wilson, a lymphoma researcher at the National Cancer Institute in Bethesda, Maryland, argues that the proposed clinical endpoints are somewhat arbitrary. “What constitutes a clinically meaningful difference? Six months is obvious, but where do you cut the line?” What's more, he adds, simply focusing on median responses often ignores important outlier effects that could merit approval for an experimental drug. “The difference in overall survival may not be great, but it may be driven by a great benefit to a small group,” he says.
Problem is, it's often quite difficult to figure out who that small group might be, and just treat them, instead of treating everyone and hoping for the best. And there's always the argument that these therapies are stepping stones to more significant improvements, but I wonder about that. My impression of oncology research has always been more like "OK, this looks reasonable. Lots of these tumors have UVW upregulated; let's make an UVW inhibitor. (Years later): Hmm, that's disappointing. Our UVW inhibitor doesn't seem to do as much as you'd think it should. But now it's been found that XYZ looks like it's necessary for tumor growth; let's see if we can inhibit it. (Years later): Hmm, that's not as big an effect as you would have thought, either, is it? Seems to help a few people, but it's hard to say who they'll be up front. How's the JKL antagonist coming along? No one's tried that yet; looks like a good cell-division target. . ."
It's just sort of one thing after another - that one didn't work so well, neither did that one, this other one and these three together seem to be a bit better, but not always, and so on. Would we learn as much, or nearly so, just from the earlier clinical work on such compounds as opposed to taking them to market? And although you can't deny that there's been incremental progress, I'm not sure what form it's taking. It's very likely that the answer isn't to keep turning over mechanistic ideas until we find The One That Really Truly Works - cancer is a tough enough (and varied enough) disease that there probably isn't going to be one of those.
My guess is that meaningful cancer success will come from combinations of therapies that we mostly don't even have yet. I think that we'll need to hit several different mechanisms at the same time, but that some of what we'll need to hit hasn't even been discovered. And on top of that, each patient presents a slightly different problem, and ideally would receive a more customized blend of therapies (not that we know how to do that, either, in most cases).
What I'm saying is that we'll probably need combinations of things that work better than most of what we already have, and that these will stand out enough in clinical trials that we'll know that they're worth developing. As it stands, though, companies see hints here and there in the clinic, enough to run a Phase III trial, and if it's large enough and tightly controlled enough, they see enough efficacy to get things through the FDA and onto the market. Would we be better off to not proceed with the marginal stuff, and put the significant amounts of money into things that stand out more? Or would that choke off the market too much, since we mostly end up making marginal things anyway (damn it all), leaving no one able to keep going long enough to find the good stuff? It's a hard business.
Well, this is a question that (I must admit) had not crossed my mind. Courtesy of Slate, though, we can now ask how we can make pharmaceuticals more environmentally friendly. No, not the manufacturing processes: this article's worried about the drugs that are excreted into the water supply.
It's worth keeping an eye on this issue, but I haven't been able, so far, to get very worked up about it. It's true that there have been many studies that show detectable amounts of prescription drugs in the waste water stream. The possible environmental effects mentioned in the article, though, are seen at much higher concentrations. I think that much of the attention given to this issue comes from the power of modern analytical techniques - if you look for things at the parts-per-billion level (or below), you'll find them. Of course, you'll also find a huge number of naturally occurring substances that are also physiologically active: can the synthetic estrogen ligands out there really compete against the huge number of phytoestrogens? I have to wonder. To me, the sanest paragraph of the article is this one:
Developing "benign-by-design" drugs poses a series of vexing challenges. In general, the qualities that make drugs effective and stable—bioactivity and resistance to degradation—are the same ones that cause them to persist disturbingly after they've done their job. And presumably even hard-core eco-martyrs (the ones who keep the thermostat at 60 all winter and renounce air travel) would hesitate to sacrifice medical efficacy for the sake of aquatic wildlife. What's more, the molecular structures of pharmaceuticals are, in the words of Carnegie Mellon chemist Terry Collins, "exquisitely specific." Typically, you can't just tack on a feature like greenness to a drug without affecting its entire design, including important medical properties.
And even that one has its problems. That "persist disturbingly" phrase makes it sound like pharmaceuticals are like little polyethylene bags fluttering around the landscape and never wearing down. But it's worth remembering that most drugs taken by humans are metabolized on their way out of the body, and most of these metabolites don't maintain the activity of the parent compound. Other organisms have similar metabolic powers - as living creatures, we've evolved a pretty robust ability to deal with constant low levels of unknown chemicals. (Here's a good chance to point out this article by Bruce Ames and Lois Swirsky Gold on that topic as it relates to cancer; many of the same points apply here).
No one can guarantee, though, that pharmaceutical residue will always be benign. As I say, it's worth keeping an eye on the possibility. But it will indeed be hard to do something about it, for just the reasons quoted above. As it is, getting a drug molecule that hits its target, does something useful when that happens, doesn't hit a lot of other things, works in enough patients to be marketable, has blood levels sufficient for a convenient dose, doesn't cause toxic effects on the side, and can be manufactured reproducibly in bulk and formulated into a stable pill. . .well, that's enough of a challenge right there. We don't actually seem to be able to do that well enough as it stands. Making the molecules completely eco-friendly at the same time. . .
A discussion with a colleague the other day brought up a point about drug patents. When you're thinking about the chemical matter you have for your project, one of the things you have to worry about is the patent situation. Are your molecules patentable? For that to be the case, you need novelty and utility. Utility is pretty much a given in this business - you wouldn't be interested in the compounds if they didn't do something - so novelty (prior art) is what we spend time wondering about.
There's usually a way through on that, though. I mean, sure, there are all these generic claims out there in other patent applications that take everything out to the asteroid belt, but you should only get worried about the stuff that the claims are teaching toward and the compounds that have been enabled (that is, actually made). Prior art is crucial, but it's also crucial to only pay attention to what deserves attention. (We last talked about this problem here).
Then there's "freedom to operate". In that case, you're asking not "can I get a patent on this", but "can I do anything with it without infringing someone else's patents". For FTO considerations, you have to look at what IP rights other people have (or are likely to have by the time you'll be ready to go). That can get rather involved, since patents are limited in several dimensions: time (they expire eventually), space (coverage varies country by country), and in "IP space" (what territory the claims stake out). Depending on what comes up, you might decide that you're in the clear. Or you might try to invent yourself out of a tight spot, if you can do that. Or you could pay someone for rights to their IP, or trade them some of yours, if you have something to trade.
But here's where we got to talking: in the drug business, where we're patenting particular chemical matter (and the use of it for particular medical needs), it seems like freedom to operate isn't as big a deal as it is in some other areas. That's partly because it's hard to get sweeping medical claims issued, and it's hard to make them stand up if they do. There have been attempts to stake out whole modes of action ("We claim the method of treating a patient in need of lowering their XYZ levels with any inhibitor of XYZase"), but fortunately, that hasn't taken hold.
So when you're talking chemical matter, does anyone know of drugs (or programs) that have been derailed in development just because of freedom-to-operate concerns? Drug patents get challenged all the time, often by generic companies, but those are patentability issues, trying to overturn the whole filing. But what about FTO? Any examples?
So, as had been suspected, the reason that Merck's thrombin receptor (PAR-1) antagonist vorapaxar ran into clinical trouble was excessive bleeding. This is always the first thing to suspect when an anticoagulant has difficulty in human trials.
It's really a delicate balance, the human clotting cascade, and it's all too easy to end up on the wrong side of it. When you think about it, the whole pathway has to be under very tight regulation - I mean, here's the fluid that transports oxygen and nutrients and removes waste. Absolutely crucial to the life of every cell in the body. And here's an option to have that fluid thicken up and turn to jelly, very quickly, and once it happens it can't be reversed. No, you're going to want a lot of safeguards around that switch. But if you lean over too far the other way, well. . .there's a lot of vascular plumbing in the body, and it gets a lot of stress. Leaks and rips are inevitable. You have to have a method for patching holes, and it has to be ready to go everywhere, at all times. Dial it down just a bit too much, and hemorrhages are inevitable. Thus all the different clotting mechanism steps, and the different drugs targeting them.
As Matthew Herper explains at that link above, the prospects for this drug are completely dependent on which side of the line it ends up on. In this patient population, it's already stepped over - another result like this one, and vorapaxar could be completely sunk.
Here's a problem that I've seen at every company I've worked at, and there are good reasons to believe that it afflicts every company out there. That's because I think it's grounded in human nature: dog-and-pony-itis.
That's the phrase I use for what happens to meetings over time. Many readers will be familiar with the process: a company gradually accumulates regular meetings on its internal calendar - project team meetings, individual chemistry and biology meetings inside that, overall review meetings, resourcing, planning, interdisciplinary meetings. . .everyone who's anyone, in some companies, has to be calling a meeting of their very own.
Eventually, someone says "Enough!" and purges the schedule, replacing the tangle of overlapping meetings with A Brand New Meeting or two. These will actually discuss issues, for once, and people are encouraged to actually say what's really going on with their projects. For once. And who knows, maybe that's the case (for once) - but it doesn't last.
Because every time, in my experience, the Brand New Meeting itself starts to collect barnacles. Over time, it becomes less useful, and more of a show. The music starts up, the Pomeranian dogs start hopping around and barking, and the trained horses make their entrance from the wings. It becomes more expedient to just get up and tell people the broad strokes of a project, especially the broad strokes that are actually working, and leave the messy details out. And gradually, other meetings spring up to try to take up the slack, since nothing ever seems to get done at the Brand New. . .
The thing is, I don't know how to stop this from happening. It comes on like rust. I've lost count of the we've-got-to-get-rid-of-this-stupid-meeting initiatives I've seen over the years, and every time the cycle eventually repeats. So here's a question: has anyone broken out? And if you have, how? Suggestions welcomed in the comments. . .
Ben Goldacre highlights the experience of the blog Retraction Watch (which I hadn't heard of until now), when they tried to find out why a paper had been pulled from the Annals of Thoracic Surgery. The journal's editor responded to their query by informing them that "it's none of your damn business".
Gotta disagree there, chief. I think that this is actually important information, and that it should be disclosed as much as possible. There are all sorts of reasons for papers to be retracted, ranging from benign to evil, and it's in the interest of readers to know what category things have fallen into. I understand that in some cases papers are the subject of ongoing investigations, so these details aren't always available, but in that case, why not say something like: "The data in Table II have not been reliably reproduced by other workers. While some of the co-authors of the original work have stated that they stand by the results as published, an investigation has begun into the methods and data of this paper, and the lead authors have asked that it be retracted until this matter is concluded".
But that's not the sort of thing we get. Goldacre cites another example from Retraction Watch, concerning this paper from JACS. When the bloggers contacted the lead author, he gave them more details than you could get from the journal about what was wrong with the paper. So why doesn't JACS tell us these things?
Thanks to the Retraction Watch people for taking the time and effort to do this sort of thing. I just wish that it weren't necessary for anyone to do it at all.
Some time ago, I took nominations for Least Useful Animal Models. There were a number of good candidates, many of them from the CNS field. A recent report makes me think that these are even stronger contenders than I thought.
The antidepressant reboxetine (not approved in the US, but sold in a number of other countries by Pfizer) was recently characterized by a German meta-analysis of the clinical data as "ineffective and potentially harmful". Its benefits versus placebo (and SSRI drugs) have been overestimated, and its potential for harm underestimated. It was approved in Europe in 1997, and provisionally by the FDA in 1999, although that was later rolled back when more studies came in that showed lack of efficacy.
Much has been made of the fact that Pfizer had not published many of the studies they conducted on the drug. These do seem, however, to have been available to regulatory authorities, and were the basis for the FDA's decision not to grant full approval. As that BMJ link discusses, though, there's often not a clear pathway, especially in the EU, for a regulatory agency to go back and re-examine a previous decision based on efficacy (as opposed to safety).
So the European regulatory agencies can be faulted for not revisiting their decision on this drug in a better (and quicker) fashion, and Pfizer can certainly be faulted for letting things stand (in the face of evidence that the drug was not effective). All this is worrisome, but these are problems that are being dealt with. Since 2007, for example, trials for the FDA have been required to be posted at clinicaltrials.gov, although the nontransparency of older data can make it hard to compare newer and older treatments in the same area.
What's not being dealt with as well is an underlying scientific problem. As this piece over at Scientific American makes plain, reboxetine, although clinically ineffective, works just fine in all the animal models:
And this is a rough moment for scientists studying depression. Why? Because reboxetine works beautifully in our animal models. It’s practically a poster-child antidepressant. It produces acute effects in tests such as forced-swim tests and tail-suspension tests (which use changes in struggle as a measure of antidepressant efficacy). It produces neurogenesis in the hippocampus, which is thought to be correlated with antidepressant effects. When behavioral pharmacologists are doing comparisons between older antidepressants and newer ones, reboxetine is often used as a positive control, a drug known to have an effect in the behavioral test of choice.
But it doesn’t work in patients. And patients are what matters. Now, scientists are stuck with a difficult question: What went wrong?
A very good question, and one without any very good answers. And this certainly isn't the first CNS drug to show animal model efficacy but do little good in people. So, how much is the state of the art advancing? Are we getting anywhere, or just doing the same old thing?
Everyone in this industry wants to have good, predictive biomarkers for human diseases. We've wanted that for a very long time, though, and in most cases, we're still waiting. [For those outside the field, a biomarker is some sort of easy-to-run test for a factor that correlates with the course of the real disease. Viral titer for an infection or cholesterol levels for atherosclerosis are two examples. The hope is to find a simple blood test that will give you advance news of how a slow-progressing disease is responding to treatment]. Sometimes the problem is that we have markers, but that no one can quite agree on how relevant they are (and for which patients), and other times we have nothing to work with at all.
A patient's antibodies might, in theory, be a good place to look for markers in many disease states, but that's some haystack to go rooting around in. Any given person is estimated, very roughly, to produce maybe ten billion different antibodies. And in many cases, we have no idea which ones to look for, since we don't really know what abnormal molecules they've been raised to recognize. (It's a chicken-and-egg problem: if we knew what those antigens were, we'd probably just look for them directly with reagents of our own).
So if you don't have a good starting point, what to do? One approach has been to go straight into tissue samples from patients and look for unusual molecules, in the belief that these might well be associated with the disease. (You can then do just as above to try to use them as a biomarker - look for the molecules themselves, if they're easy to assay, or look for circulating antibodies that bind to them). This direct route has only become feasible in recent years, with advanced mass spec and data handling techniques, but it's still a pretty formidable challenge. (Here's a review of the field).
A new paper in Cell takes another approach. The authors figured that antigen molecules would probably look like rather weirdly modified peptides, so they generated a library of several thousand weirdo "peptoids". (These are basically poly-glycines with anomalous N-substituents). They put these together as a microarray and used them as probes against serum from animal models of disease.
Rather surprisingly, the idea seems to have worked. In a rodent model of multiple sclerosis (the EAE, or experimental autoimmune encephalomyelitis model), they found several peptoids that pulled down antibodies from the model animals and not from the controls. A time course showed that these antibodies came on at just the speed expected for an immune response in the animal model. As a control, another set of mice were immunized with a different (non-disease-causing) protein, and a different set of peptoids pulled down those resulting antibodies, with little or no cross-reactivity.
Finally, the authors turned to a real-world case: Alzheimer's disease. They tried out their array on serum from six Alzheimer's patients, versus six age-matched controls, and six Parkinson's patients as another control, and found three peptoids that seem to have about a 3-fold window for antibodies in the AD group. Further experimentation (passing serum repeatedly over these peptoids before assaying) showed that two of them seem to react with the same antibody, while one of them has a completely different partner. These experiments also showed that they are indeed pulling down the same antibodies in each of the patients, which is an important thing to make sure of.
Using those three peptoids by themselves, they tried a further 16 AD patient samples, 16 negative controls, and 6 samples from patients with lupus, all blinded, and did pretty well: the lupus patients were clearly distinguished as weak binders, the AD patients all showed strong binding, and 14 out of the 16 control patients showed weak binding. Two of the controls, though, showed raised levels of antibody detection, up to the lowest of the AD patients.
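For those keeping score, the counts reported above translate directly into the standard diagnostic figures of merit. Here's a quick sketch (treating "strong binding" as a positive test result, which is my reading of the setup, not the paper's own terminology):

```python
def sensitivity(true_pos, false_neg):
    """Fraction of diseased samples the test correctly flags as positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of healthy samples the test correctly clears as negative."""
    return true_neg / (true_neg + false_pos)

# AD patients: all 16 showed strong binding (test positive)
sens = sensitivity(true_pos=16, false_neg=0)
# Controls: 14 of 16 showed weak binding (test negative), 2 elevated
spec = specificity(true_neg=14, false_pos=2)

print(f"sensitivity = {sens:.0%}")   # 100%
print(f"specificity = {spec:.1%}")   # 87.5%
```

A 100%/87.5% split on 32 blinded samples is nowhere near a validated diagnostic, but as a first pass it's the kind of number that justifies a bigger study.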
So while this isn't good enough for a diagnostic yet, for a blind shot into the wild blue immunological yonder, it's pretty impressive. Although. . .there's always the possibility that this is already good enough, and that the test picked up presymptomatic Alzheimer's in those two control patients. I suppose we're going to have to wait to find that out. As you'd imagine, the authors are extending these studies to wider patient populations, trying to make the assay easier to run, and trying to find out what native antigens these antibodies might be recognizing. I wish them luck, and I hope that it turns out that the technique can be applied to other diseases as well. This should keep a lot of people usefully occupied for quite some time!
Very bad news today for Merck (and the Schering-Plough people therein). Their thrombin receptor antagonist vorapaxar (formerly SCH 530348) has run into trouble.
A review board monitoring the compound's clinical trials has suddenly halted two of them. All we know at the moment is that the drug is "not appropriate for stroke patients", and it's also being pulled from a study in people who have had mild heart attacks. The best guess, as with any drug in the clotting field, is that it may be causing bleeding instead, but we'll have to see. Problem is, those are two of the more important patient populations that a company would be targeting, and if there's trouble in those groups, then it could be waiting to show up in others as well.
Vorapaxar has an unusual history at Schering-Plough (I wrote about it here, with some personal experiences from my own time at the company thrown in). I'm very sorry to see this news - sorry for the patients involved (and those who won't be helped later on), for the researchers involved (several of whom I've worked with in the past), and for Merck's investors, who are taking about a 6% trim today on the NYSE.
This compound wasn't the whole reason for Merck to buy Schering-Plough, but it wasn't a small part of the deal, either. That other stuff had better work out. . .
You'll have noticed that we haven't been hearing a lot about Sanofi-Aventis trying to round up Genzyme shareholders as part of their takeover plan. That's because the two companies seem to have found a way to negotiate with each other, so it doesn't look like we're going to go into full proxy-fight mode. This Bloomberg article gives the impression that a number of issues have been worked out, and that there are just a few figures left to agree on. It's quite possible that Genzyme's executives and board weren't able to find anyone else who agreed with their public assessment of what their takeover price should be, realized that they were probably going to be stuck with this deal, and decided to make the best of it.
So how does that leave that big bet in the options market from last summer? Well, selling October 75 calls worked out just fine; GENZ never made it over that price, so whoever bought the things on the other side of those contracts ended up handing over all their money to the people who wrote them. But using the proceeds to set up a 65-55 put spread for this month, that doesn't look like it's going to work so well. Genzyme's price has hung in the low 70s the whole time, and doesn't look to make the below-65-but-not-below-55 range that those trades need. Oh, well - let that be a lesson to everyone to stay out of the options market unless you're hedging a position somewhere else. I hope that these folks were.
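For anyone rusty on the options mechanics, here's a rough sketch of what that 65-55 put spread pays at expiration (premiums ignored, and assuming the long-65-put/short-55-put structure implied above):

```python
def put_payoff(strike, price):
    """Intrinsic value of a put option at expiration."""
    return max(strike - price, 0.0)

def put_spread_payoff(long_strike, short_strike, price):
    """Payoff of a bear put spread: long the higher-strike put,
    short the lower-strike put. Premiums paid/received not included."""
    return put_payoff(long_strike, price) - put_payoff(short_strike, price)

# Payoff at various GENZ prices at expiration
for price in (72, 65, 60, 55, 50):
    print(f"price {price}: spread pays {put_spread_payoff(65, 55, price):.0f}")
```

With the stock hanging in the low 70s, the spread expires worthless; it only starts paying below 65, and the payout caps out at the 10-point strike difference.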
In case anyone's wondering, the pace of discovery has slowed a bit around these parts today. We've got a walloping pile of snow out there, verging on about half a meter, and I'm blogging from my fortified position at home. No French onion soup today, although I do have a couple of home-made pizzas in the oven as I speak.
Science should be resuming tomorrow, though, thanks to the snow plows. I find, having grown up in a part of the country where there were none (and where everything just shut down a couple of times per winter), that I'm still impressed at the efforts that go into cleaning the roads. My thinking, though, is that people who grow up under these conditions take road-clearing as some sort of natural process - of course the highways will be clear; they always are. A good look at a local street that's had traffic and a foot of unplowed snow on it for a week would be quite a revelation.
Update: for those who've asked, my company was officially closed today, which I very much appreciated, even though I do take the train in to work. Driving in to work, now that would have been a real treat. . .
Now, this is a pretty neat trick. One of the things that drug development people have to worry about a lot is the crystal forms of the new compound. You might imagine (if you haven't had to do this stuff) that if a compound is crystalline, then that's that - you've got the solid form now, and full speed ahead.
But many substances can crystallize in all sorts of forms - here's one with at least seven different solved crystal structures (and it has more that haven't yielded an X-ray structure yet). By the time you bring in solvates, where the molecule crystallizes along with the solvent it was last in, or with water dragged in from the air, or what have you, you can go well up into the double digits, and we haven't even begun talking about salt forms yet. Each one of those starts the whole counter running all over again. These polymorphs have different melting points, different rates of dissolution, and different behavior when they hit the stomach, and these are all things that you have to worry about.
There have been several real holdups in the drug industry, where a compound that had been developed as one form suddenly decided that it would rather be another one when the chemistry was scaled up. That blows out all the blood levels and dosing protocols that were worked out before. Sometimes the new form can be used, once all the data are re-acquired, but sometimes it turns out to be unusably worse than the old form. The challenge then is: how do you get it to be one rather than the other? And how can you be sure that it'll happen every time?
So we're always interested in ways to make molecules take on different crystal forms, and in ways to make them switch from one to another. That's where this latest paper comes in. They've found that you can expose solvated crystals to pressurized carbon dioxide gas and alter the crystalline forms. The gas molecules work their way into the crystal lattice, displace the solvate molecules, and then when the pressure is taken off, they work their way back out again (or can be persuaded to with a little heat). It's an ingenious idea, and you can bet that development scientists all over the industry have saved copies of this paper already. We need all the help we can get!
Angewandte Chemie recently ran a behind-the-scenes article about their journal, with several interesting bits of information. For one thing, they've gotten a lot more selective over the years, as the number of submissions has gone up. They publish many more papers, total, than they used to, but reject a much higher fraction at the same time. (I've added to that total myself a couple of times!).
Mind you, there are times when that rejection rate should have been even a bit higher, but as you might guess, the article doesn't bring up those awkward moments. There's no insight into the vile puns and other pop-culture references that continue to infest their abstracts, either. Can't have everything.
But I found this chart interesting. These are the download statistics for a particular (unspecified) communication in the journal over time. (Note that they've scrubbed the units on the Y-axis, the wimps).
This confirms what most scientists have figured, that your paper has a brief window to be noticed, and then back in the pile it goes. Back to the background rate, with people coming across it in literature searches once in a while.
How's the XMRV / chronic fatigue syndrome connection holding up? Not real well. Science has a roundup of the latest news in the area, and none of it looks encouraging. There are four studies that have come out in the journal Retrovirology that strongly suggest that earlier positive test results for the virus in CFS samples are just artifacts.
For one thing, when you look closely, it turns out that the sequences from cell-cultured XMRV samples are quite a bit more diverse than the ones taken from widely separated patients at different times. And that's just not right for an infectious agent; it's the opposite of what you should see. A number of supposedly XMRV-specific primers that have been used in such assays also appear to amplify other murine viral sequences as well, and samples that show positive for XMRV also appear to have some mouse DNA in them. Finally, there's reason to believe that some common sources of PCR reagents may have murine viral contaminants that blow up this particular assay.
Taken together, these latest results really have to make you cautious in assigning any role at all to XMRV based on the published data. You can't be sure that any of the numbers are what they're supposed to be, and the most parsimonious explanation is that the whole thing has been a mistake. To illustrate the state of things, you may remember an effort to have several labs (on both sides of the issue) test the same set of samples. Well, according to Science. . .
Some had hoped that a project in which several U.S. labs are testing for XMRV in the same samples would clear up the picture. But so far this effort has been inconclusive. Four CFS patients' blood initially tested positive for XMRV at WPI and the U.S. Centers for Disease Control and Prevention but not at an NCI lab. When all three labs tested new samples from the same patients, none found XMRV—for reasons that aren't yet clear, says Coffin. The group now plans to test blood from several dozen CFS patients and controls.
No, this isn't looking good at all. It's pretty typical, though, of how things are out at the frontiers in this business. There are always more variables than you think, and more reasons to be wrong than you've counted. A theory doesn't hold up until everyone who wants to has had a chance to take some big piñata-shattering swings at it, with weapons of their choice. So, to people outside of research: you're not seeing evidence of bad faith, conspiracy, or stupidity here. You're seeing exactly how science gets done. It isn't pretty, but it gets results in the end. Circumspice.
I truly don't know what to make of this one. Virologist Luc Montagnier has announced that he's heading off to Shanghai, to found an institute and investigate. . .mysterious electromagnetic signals from extremely diluted pathogens.
What we have found is that DNA produces structural changes in water, which persist at very high dilutions, and which lead to resonant electromagnetic signals that we can measure. Not all DNA produces signals that we can detect with our device. The high-intensity signals come from bacterial and viral DNA. . .
. . .I can't say that homeopathy is right in everything. What I can say now is that the high dilutions are right. High dilutions of something are not nothing. They are water structures which mimic the original molecules. We find that with DNA, we cannot work at the extremely high dilutions used in homeopathy; we cannot go further than a 10 to the minus 18th dilution, or we lose the signal. But even at 10 to the minus 18th, you can calculate that there is not a single molecule of DNA left. And yet we detect a signal. . .
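Montagnier's back-of-the-envelope arithmetic there does check out, for what it's worth. A quick sketch, assuming (my assumption, not his) a one-micromolar DNA stock as the starting point:

```python
# Rough check of the claim that a 1e-18 dilution leaves no DNA molecules.
AVOGADRO = 6.022e23      # molecules per mole
stock_molar = 1e-6       # mol/L of DNA to start with (assumed)
dilution = 1e-18

molecules_per_liter = stock_molar * AVOGADRO * dilution
print(f"{molecules_per_liter:.2f} molecules per liter")
```

That works out to well under one molecule per liter, so in any realistic sample volume there's essentially nothing left. Whatever the device is detecting, it isn't the DNA itself.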
Well, Montagnier believes that he's chasing something real, and all I can do is wish him luck as he tries to chase it down. I'd be extremely interested to see something reproducible come out of such ideas, not least because it would open up whole new areas of science. But at the same time, I'm not going to hold my breath waiting on success.
That's because this whole homeopathy/high dilution/water signature business isn't just another wild new idea that might or might not pan out. Even if it were that, this would be tricky stuff - any of the edge-of-detection phenomena are. But this area is a known swamp full of quicksand (and inhabited by various strange swamp creatures) which has claimed careers before. There are huge sunken deposits of quackery and self-delusion to be found out there, and before you announce you're digging up something valuable, you'll have to be very sure that you're not just dredging up more of the same swampy stuff.
Montagnier, as a famous researcher past retirement age in his own country, might be (from one perspective) just the sort of person who can investigate such things. But there have been a lot of eccentric dead ends pursued by famous researchers past retirement age, too. Bring us back some numbers, I say, and some reproducible experiments. Then we'll have some serious talks indeed.
Blog housekeeping note - I'm provisionally assigning this to the "Snake Oil" category, since many other discussions of this sort of thing can be found there.
Thanks to Jim Edwards at Bnet, we have an example of some of the worst pharma sales techniques imaginable. A lawsuit alleging that Gilead Sciences had been illegally pushing off-label indications for their angina medication Ranexa (ranolazine) was dropped recently, which brought a lot of court papers into view. According to the whistle-blower who filed the suit, a director of sales force training at the company said that their mission was to, and nothing will do except a direct quote, "sell gobs of dope" and "get those pills into people's mouths any way you can."
The drug's supposed to be used only for refractory angina, but the suit alleges that Gilead's sales people were targeting the larger cardiovascular market. "I do not care what you do to sell the drug," a sales manager is quoted as saying. "I don't see anything and I don't hear anything. Just get those scripts."
Now, I realize that these are papers from only one side of this case. And as it turns out, the Department of Justice actually did not get involved in the suit, which is probably why it's been dropped (for now). Furthermore, even if these allegations are true, they may well reflect the culture of CV Therapeutics, the company selling Ranexa when Gilead bought them in 2009.
But this should really be an alarm bell for the management at Gilead. If they have people in their sales organization with this worldview, then it's only a matter of time before some of them do enough, say enough, present enough, and write down enough evidence to allow a successful whistle-blower case. And if that's what's going on, then such a suit would be richly deserved. This sort of stuff is idiotic, and it's wrong, and it's a big reason why the public opinion of our industry has been relentlessly sliding down over the years.
I wrote here about a Wall Street Journal article covering illegal street-drug labs in Europe. Well, maybe that should be not-quite-illegal, because the people involved were deliberately making compounds that the law hadn't caught up with yet.
The article mentioned David Nichols at Purdue as someone whose published work on CNS compounds had been followed/ripped off/repurposed by the street drug folks. Now Nature News has a follow-up piece by him, and he's not happy at all with the way things have been turning out:
We never test the safety of the molecules we study, because that is not a concern for us. So it really disturbs me that 'laboratory-adept European entrepreneurs' and their ilk appear to have so little regard for human safety and human life that the scant information we publish is used by them to push ahead and market a product designed for human consumption. Although the testing procedure for 'safety' that these people use apparently determines only whether the substance will immediately kill them, there are many different types of toxicity, not all of which are readily detectable. For example, what if a substance that seems innocuous is marketed and becomes wildly popular on the dance scene, but then millions of users develop an unusual type of kidney damage that proves irreversible and difficult to treat, or even life-threatening or fatal? That would be a disaster of immense proportions. This question, which was never part of my research focus, now haunts me.
Well, that's absolutely right, and it's not terribly implausible, either. The MPTP story is as good an example as you could want of what happens when you just dose whoever shows up on the street corner with that cool stuff you made in your basement lab. All we need is a side effect like that, which comes on a bit more slowly, and there you'd have it. That's one of the reasons I have such disgust for the people who are making and selling these things - they show a horrifying and stupid disregard for human life, all for the purpose of making a few bucks.
At the same time, I think that Nichols himself should try not to blame himself. His article comes across rather anguished; I have a lot of sympathy for him. But the actions of other people, especially scum, are outside of his control, and I think he's taking every reasonable precaution on his end while he does some valuable work.
Homo homini lupus: the sorts of people who see basement drugs as a fun business opportunity would likely be doing something equally stupid and destructive otherwise. Dr. Nichols, you have nothing to be ashamed of, nothing to apologize for - and, honestly, nothing to keep you up at night. You're the responsible member of the human race in this story.
Whining PhD students are nothing new, but there seem to be genuine problems with the system that produces research doctorates (the practical “professional doctorates” in fields such as law, business and medicine have a more obvious value). There is an oversupply of PhDs. Although a doctorate is designed as training for a job in academia, the number of PhD positions is unrelated to the number of job openings. Meanwhile, business leaders complain about shortages of high-level skills, suggesting PhDs are not teaching the right things. The fiercest critics compare research doctorates to Ponzi or pyramid schemes.
One thing for those of us in the sciences to keep in mind is that we still have it better than people studying the humanities. Industrial jobs are in short supply right now, that's for sure - but at least the concept of "industrial job" is a valid one. What happens when you take a degree whose main use is teaching other people who are taking degrees?
Proponents of the PhD argue that it is worthwhile even if it does not lead to permanent academic employment. Not every student embarks on a PhD wanting a university career and many move successfully into private-sector jobs in, for instance, industrial research. That is true; but drop-out rates suggest that many students become dispirited. In America only 57% of doctoral students will have a PhD ten years after their first date of enrolment. In the humanities, where most students pay for their own PhDs, the figure is 49%. Worse still, whereas in other subject areas students tend to jump ship in the early years, in the humanities they cling like limpets before eventually falling off.
(See this post for more on that topic. And this inevitably leads to the should-you-get-a-doctorate-at-all discussion, on which more can be found here and here). In the end, what we seem to have is a misalignment of interests and incentives:
Academics tend to regard asking whether a PhD is worthwhile as analogous to wondering whether there is too much art or culture in the world. They believe that knowledge spills from universities into society, making it more productive and healthier. That may well be true; but doing a PhD may still be a bad choice for an individual.
The interests of academics and universities on the one hand and PhD students on the other are not well aligned. The more bright students stay at universities, the better it is for academics. Postgraduate students bring in grants and beef up their supervisors’ publication records. Academics pick bright undergraduate students and groom them as potential graduate students. It isn’t in their interests to turn the smart kids away, at least at the beginning. . .
And I'm not sure how to fix that. Talk of a "higher education bubble" may not be idle chatter. . .
What do you have when a fire starts at a large chemical packing company, handling all sorts of oils, paints, coatings, and various industrial chemicals? Where they have hundreds of thousand-liter containers stored, surrounded by all the crates and packing material used to trans-ship them? You have this, at Chemie-Pack in the Netherlands yesterday:
And you have a black cloud that stretched across a significant part of the whole country:
Images are from Nufoto.nl, taken by people at the scene. A reader who lives 20km downwind writes me that he's been getting a pervasive smell of burnt plastic (which, he says, certainly makes a change). His main reason to be grateful is that this didn't spread to the Shell site nearby, which would have prompted an instant vacation to Germany. And then there are all the refineries 15 km to the west - if those ever go up, he tells me, "it'll look like the ending of Gulf War 1 - lowlands-style - with cows for camels".
The 1998 paper that linked MMR vaccination with autism has had a long way to fall. It made, of course, a huge media sensation, and energized the whole vaccination/autism controversy that still (in spite of evidence) goes on. But it didn't look very robust from the start, scientifically. And over the years it's gone from "Really needs shoring up" to "hasn't been reproduced" to "looks like there's something wrong with it" to "main conclusions retracted" to the final, lowest level: outright fraud.
Here's a good history of the whole affair in the BMJ. And here's the first part of a series of articles by Brian Deer, the journalist who dug into the study and found how fraudulent it really was. Not one of the 12 cases in Wakefield's original study holds up; the data were manipulated in every single one to make them fit his hypothesis. His hypothesis that he was getting grant money for. His hypothesis that he was already planning lawsuits around, before the study even started.
His hypothesis, I might add, that has led to completely unnecessary suffering among the unvaccinated children this scare has produced over the years, and has diverted enormous amounts of time, energy, and money away from useful study of autism. This sort of deliberate action is really hard to contemplate, as a reasonable human being - it's like some sort of massive campaign to persuade people to throw bricks through the windows of ambulances.
In a better world, we'd be getting expressions of sorrow and contrition from all the celebrities and others who've profited from this business. But that's not going to happen, is it?
Here's a business idea for a nonprofit drug company, sent along by reader and entrepreneur Matt Grosso. I don't necessarily think that it would work (see below), but it's worth talking about, since some of its features are worthwhile. Others, though, illustrate what may be some common misperceptions of how drug development works. Here's the key feature:
The idea here is to create a non-profit which would accept contributions for testing and bringing to market specific drugs. . .Members would vote with their contribution dollars for specific drugs. Paid staff would curate a wiki that supported periodic comparisons between various candidates approaching readiness for a specific market, which would ensure that member votes had the benefit of the best available information and expert opinion.
This could create an alternate route for drug startups focused on particular compounds to get their product to market.
I think that the ability to specifically take in contributions is a good one - people and organizations are more likely to fund defined aims that they agree with. One big problem, though, is that there's a limit to which we can define such things in this business. And that might make the whole idea break down.
To be honest, if a nonprofit really took in contributions for the development of specific drugs, they'd run a great risk of disappointing and enraging their donation base. That's because the honking huge majority of specific drugs in development never make it. The success rates in the clinic are pretty well known: roughly 90% of everything that goes into clinical trials never makes it to market. That's a hard sell for contributors! And if you moved the point at which you asked for donations back into the preclinical stage, the situation would get much, much worse. At the "Hey, we just thought of a neat new target" step, you'd be offering your contributors worse odds and payoffs than they could get in the state lottery.
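Those odds are also why portfolios beat single bets in this business. A toy calculation, taking the post's rough 90% failure rate at face value and assuming (unrealistically, but illustratively) independent candidates:

```python
# Illustrative only: the 90% per-candidate failure rate is the post's
# rough figure; independence between candidates is my simplification.
def p_at_least_one_success(n_candidates, p_fail=0.9):
    """Chance that at least one of n independent candidates reaches market."""
    return 1 - p_fail ** n_candidates

for n in (1, 5, 10, 20):
    print(f"{n:>2} candidates: {p_at_least_one_success(n):.1%}")
```

A single candidate gives a donor about a 10% shot; spreading the same faith over twenty independent candidates gets the odds of at least one success close to 90%. That asymmetry drives everything in the risk ladder below.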
For new compounds and new modes of action, the risks decrease in roughly the following order. At the same time, the time it takes to get an answer increases in roughly the same way:
1. Specific single compound with a defined mechanism. Hold your breath, and good luck!
2. Defined chemical class of compounds targeting the same mechanism. Now you've got some fallback, although it might not be enough to help in case of trouble.
3. Specific mechanism, with several chemical series. This gives you several shots, although if your mechanism of action is off, all will still be in vain.
4. Phenotypic readout with a range of compounds (that is, they seem to do the right thing, but you're not sure how). Risk varies according to how realistic your assays are, and how many different compounds you've picked up.
5. Targeting a broad class of related mechanisms - for example, "reduce LDL", "disrupt bacterial membranes", "interrupt inflammatory cascade". Note that we're now getting farther and farther away from individual compounds.
6. Targeting one specific therapeutic area: antivirals, Alzheimer's, osteoporosis, etc.
7. Trying to balance things out with several therapeutic areas, with projects in each one at varying levels of risk.
Note that we've also illustrated the progression from "wing and a prayer startup" to "fully integrated drug company". That follows exactly from the levels of risk involved, which correlates with the amount of money on the table as well, in exactly the way the ranking of poker hands correlates with how likely they are to occur. Note also that even in that final stage, we apparently still have not mitigated the risks enough, given our cost structure. (Look at the state of the industry).
To get back to the nonprofit idea, another thing that might work out less well in practice than it does in principle is that wiki for the potential investors/donors. This is what companies try to do internally: comparing their programs by the same criteria, head to head, then determining how to resource them. 'Taint easy. I don't know of any organization that truly thinks that they do as well at this as they should. Even a bunch of perfectly clear-headed and honest assessments (which, by the way, cannot be universally assumed) are still complicated by unquantifiable risks. I think that people might be alarmed by the number of times you just have to push things ahead to see what's going to happen.
Even after all these qualifications, though, I think that there's merit in the idea of breaking out individual drug development programs. I've long kicked around the idea of whether a company could fund programs by essentially selling shares in its various clinical candidates, with a cut of the profits coming if things work out. It would be an accounting mess, and everyone would have to keep those failure rates in mind, but there are still people who'd be willing to take a crack at it, for a given level of possible return. Those donors/investors might even be less put out than the charitable/nonprofit ones - everyone's had investments go bad, but no one wants to feel like their charitable donation was wasted. Thoughts?
This story on a new diagnostic method in oncology is getting a lot of attention in the press. It's a collaboration between J&J, a small company they've bought called Veridex, and several oncology centers to see if very sensitive monitoring of circulating tumor cells could be a more useful biomarker.
The press coverage has some hype in it - for one thing, all the stuff about detecting one single cancer cell in the whole body isn't too helpful. The cells have to be circulating in the blood, and they have to display the markers you're looking for, to start with. But I can't deny that this is an interesting and potentially exciting field. There's some evidence to suggest that circulating tumor cells could be a strongly predictive marker in several kinds of cancer.
These studies are looking at the sorts of endpoints that clinicians (and patients, and the FDA) all respect: overall survival, and progression-free survival. As discussed around here before, it's widely felt in oncology that these are where the field should really be spending its time, rather than on tumor size and so on. (You'd think that tumor size or number of detectable tumors would correlate with survival, but in many cases it's a strikingly poor predictor - which is a shame, since those are easier and faster numbers to get). A blood test, on the other hand, that strongly correlates with survival would be a real advance.
The value would not just be in telling (some) patients that they're showing better chances for survival, although I'm sure that'll be greatly appreciated. It's the patients whose numbers come back worse that may well be helped out the most, because that indicates that the current therapy isn't doing the job, and that it's time to switch to something else (assuming that there is something else, of course). The more quickly and confidently you can make that call, the better.
And from a drug development perspective, the uses of such assays in clinical trials are immediately obvious. Additionally, I'd think that these would be a real help to rolling-enrollment Bayesian trial designs, since you could assign patients to (and move them between) the different study groups with more confidence.
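To make the Bayesian-trial point concrete, here's a minimal sketch of adaptive arm assignment by Thompson sampling - the kind of mechanism a rolling-enrollment design could use as a circulating-cell readout comes in. Everything in it is hypothetical: two made-up treatment arms with invented response rates, Beta(1,1) priors, and a simple binary outcome standing in for the biomarker signal:

```python
import random

# Illustrative Thompson-sampling assignment for a two-arm adaptive trial.
# The "true" response rates are simulation inputs; in a real trial they
# are unknown and only the posterior updates below would be available.
random.seed(0)

true_response = {"arm_A": 0.30, "arm_B": 0.45}      # hypothetical
posterior = {arm: [1, 1] for arm in true_response}  # Beta(alpha, beta) priors

assignments = {arm: 0 for arm in true_response}
for patient in range(500):
    # Sample a plausible response rate for each arm from its posterior,
    # then assign this patient to whichever arm sampled highest.
    draws = {arm: random.betavariate(a, b) for arm, (a, b) in posterior.items()}
    chosen = max(draws, key=draws.get)
    assignments[chosen] += 1

    # Observe the (simulated) outcome and update that arm's posterior.
    responded = random.random() < true_response[chosen]
    posterior[chosen][0 if responded else 1] += 1

print(assignments)  # enrollment should drift toward the better arm
```

The point of the sketch is just that faster, more confident readouts (which is what these circulating-cell assays promise) would let the posteriors sharpen sooner, so patients get shifted toward the better-performing arm earlier in the trial.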
The Veridex/J&J assay (called CellSearch) uses an ingenious magnetic immunochemical approach. Blood samples are treated with antibody-coated iron nanoparticles that recognize a common adhesion protein. The cells that get bound are separated magnetically on a diagnostic chip for further immunostaining and imaging. There are other techniques out there as well - here's an article from Technology Review on a competing one that's said to be more sensitive, and here's a San Diego company trying to enter the market with an assay that's supposed to be broader-based. The key for all of these things will be bringing the costs down (and the speed of production up, in some cases). These are tests that ideally would be run early and often, so the cheaper and faster the assay can be made, the better.
Now, of course, we just need some more therapies that work, so that when people find out that their current regimen isn't working, then they have something else to try. If these circulating-cell assays help us sort things out faster in the clinic, maybe we'll be able to make better use of our time and money to that end.
2010 wasn't, though, a particularly good year for getting new drugs on the market. But it wasn't an outstandingly bad one, either. The 21 approvals last year are lower than the previous two years (25 and 24), but still better than 2007's 18. It's actually right in the recent range, with the weirdo exception of 2004, which broke into the low 30s for no particular reason that I can see.
Of course, this level of drug development (and the sorts of drugs that we're able to get through) doesn't seem to be enough, considering all the cost-cutting and job-shedding that's been going on. So staying in the range of the past ten years, while it certainly could be worse, is still nothing to celebrate. . .
So, let's get things underway around here: 2010 was, as has been the rule, Not A Good Year for the drug industry. But overall, I think it did break the pattern that had been going since about 2006, of each year being worse than the one before. That's just an impression, mind you, but perhaps some sort of bottom has been reached?
We'll find out. My guess is that 2011 will end up looking more like the prelude to 2012. We have a number of patent expirations coming up (with Lipitor, late this year, as the marquee event), but they'll probably affect next year's earnings more than this year's. (Note that if you're a research-driven drug company, these things are bad news, but if you're a generic company (or a drug store chain), the picture is much rosier.)
Predictions for this year can be entered in the comments section. Which company looks to have the best time of it, and which the worst?