About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
March 31, 2010
Here's an interesting list (PDF) of patent activity for life-science firms in the New England area. What I can't quite work out is how the numbers were generated. For example, it's very strange that Pfizer shows up with 3 patents from last year. A quick look through the databases shows more issued patents than that, although many of the ones I'm seeing are probably continuations-in-part of older patents.
But at any rate, if there's any consistent method of evaluating patents that shows Pfizer with fewer patents over the last few years than the likes of Neurogen and Nitromed - and that's what this one shows - then something's quite odd. I've emailed the people at MassHighTech.com to ask what's up.
Update: an email from the magazine says that the patent count is supposed to represent just the ones from the New England area. That does clear out a lot of Pfizer ones which originated from Sandwich, St. Louis, and other exotic ports. But I still can't get the numbers to come out right. I looked through about a quarter of the 2009 US patents assigned to Pfizer, and found five or six out of Groton/New London just in that group. I've emailed the magazine again about this. . .
Category: Patents and IP
Well, now, this is a disappointment. In a new Angewandte Chemie paper, a French team reports synthesizing trinitropyrazole. And it's. . .well, it's well-behaved. Surprisingly insensitive. Not that touchy. Might actually be useful as a storable high-energy material that could actually be handled.
The fools! Don't they realize that Angewandte is the place to unload the barely-in-our-plane-of-existence compounds, the sweat-starting, nostril-flaring "How could it blow up? It's in liquid nitrogen!" stuff? Surely there's a better home for things with actual utility, the Journal of Not So Horrible Once You've Made Them, Really, or "That Wasn't So Bad Now, Was It" Communications. Sheesh.
Category: The Scientific Literature
March 30, 2010
While we're on the subject of patents, PatentBaristas has a good summing-up of the Ariad decision I mentioned here last week. There is indeed a written description requirement for a patent, and it's separate from enablement, and it had better be good.
Category: Patents and IP
I haven't commented so far on the decision yesterday in the Myriad Genetics case involving their breast cancer assay gene patents. This is surely going to be appealed, and we're not going to really know what's up here until the CAFC has a say. And who knows? This is the sort of case that might go even further than that.
That's what the folks at Patently Obvious think, at any rate. They note that this decision is rather far out of the usual range of case law on patentability, and will likely be reversed on appeal. And then?
Category: Patents and IP
A new paper in PLoS Biology looks at animal model studies reported for the treatment of stroke. The authors use statistical techniques to try to estimate how many have gone unreported. From a database with 525 sources, covering 16 different attempted therapies (which together come to 1,359 experiments and 19,956 animals), they find that only a very small fraction of the publications (about 2%) report no significant effects, which strongly suggests that there is a publication bias at work here. The authors estimate that there may well be around 200 experiments that showed no significant effect and were never reported, whose absence would account for around one-third of the efficacy reported across the field. In case you're wondering, the therapy least affected by publication bias was melatonin, and the one most affected seems to be administering estrogens.
I hadn't seen this sort of study before, and the methods they used to arrive at these results are interesting. If you plot the precision of the studies (Y axis) versus the effect size (X axis), you should (in theory) get a triangular cloud of data. As the precision goes down, the spread of measurements across the X-axis increases, and as the precision goes up, the studies should start to converge on the real effect of the treatment, whatever that might be. (In this study, the authors looked only at reported changes in infarct size as a measure of stroke efficacy). But in many of the reported cases, the inverted-funnel shape isn't symmetrical - and every single time that happens, it turns out that the gaps are in the left-hand side of the triangle, the not-as-precise and negative-effect regions of the plots. This doesn't appear to be just due to less-precise studies tending to show positive effects for some reason - it strongly suggests that there are negative studies that just haven't been reported.
The authors point out that applying their statistical techniques to reported human clinical studies is more problematic, since smaller (and thus less precise) trials may well involve unrepresentative groups of patients. But animal studies are much less prone to this problem.
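The funnel-plot logic described above is easy to see in a toy simulation (my own sketch, not the paper's code - the true effect size, standard errors, and "publish anyway" probability below are all invented numbers). Studies whose results don't reach significance are mostly censored, and the mean published effect drifts above the true one, producing exactly the one-sided gap in the funnel that the authors detect:

```python
import random

random.seed(0)

TRUE_EFFECT = 0.3  # the real (simulated) treatment effect

# Simulate 500 studies of varying precision
studies = []
for _ in range(500):
    se = random.uniform(0.05, 0.5)          # standard error; precision = 1/se
    effect = random.gauss(TRUE_EFFECT, se)  # observed effect for this study
    studies.append((effect, se))

# Publication filter: "significant" positive results always get published,
# everything else only 20% of the time (the bias inferred from the funnel shape)
published = [(e, s) for e, s in studies
             if e / s > 1.96 or random.random() < 0.2]

mean_all = sum(e for e, _ in studies) / len(studies)
mean_pub = sum(e for e, _ in published) / len(published)
print(f"true-ish mean (all studies): {mean_all:.3f}")
print(f"mean of published studies only: {mean_pub:.3f}")
```

Plot the published studies as precision versus effect size and the missing points are all in the low-precision, small-or-negative-effect corner, just as in the asymmetric funnels the authors report.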
The loss of experiments that showed no effect shouldn't surprise anyone - after all, it's long been known that publishing such papers is just plain harder than publishing ones that show something happening. There's an obvious industry bias toward only showing positive data, but there's an academic one, too, which affects basic research results. As the authors put it:
These quantitative data raise substantial concerns that publication bias may have a wider impact in attempts to synthesise and summarise data from animal studies and more broadly. It seems highly unlikely that the animal stroke literature is uniquely susceptible to the factors that drive publication bias. First, there is likely to be more enthusiasm amongst scientists, journal editors, and the funders of research for positive than for neutral studies. Second, the vast majority of animal studies do not report sample size calculations and are substantially underpowered. Neutral studies therefore seldom have the statistical power confidently to exclude an effect that would be considered of biological significance, so they are less likely to be published than are similarly underpowered “positive” studies. However, in this context, the positive predictive value of apparently significant results is likely to be substantially lower than the 95% suggested by conventional statistical testing. A further consideration relating to the internal validity of studies is that of study quality. It is now clear that certain aspects of experimental design (particularly randomisation, allocation concealment, and the blinded assessment of outcome) can have a substantial impact on the reported outcome of experiments. While the importance of these issues has been recognised for some years, they are rarely reported in contemporary reports of animal experiments.
And there's an animal-testing component to these results, too, of course. But lest activists seize on the part of this paper that suggests that some animal testing results are being wasted, they should consider the consequences (emphasis below mine):
The ethical principles that guide animal studies hold that the number of animals used should be the minimum required to demonstrate the outcome of interest with sufficient precision. For some experiments, this number may be larger than those currently employed. For all experiments involving animals, nonpublication of data means those animals cannot contribute to accumulating knowledge and that research syntheses are likely to overstate biological effects, which may in turn lead to further unnecessary animal experiments testing poorly founded hypotheses.
This paper is absolutely right about the obligation to have animal studies mean something to the rest of the scientific community, and it's clear that this can't happen if the results are just sitting on someone's hard drive. But it's also quite possible that for even some of the reported studies to have meant anything, they would have had to use more animals in the first place. Nothing's for free.
Category: Animal Testing | Cardiovascular Disease | Clinical Trials | Drug Assays | The Scientific Literature
Another promising Phase II oncology idea goes into the trench in Phase III: GenVec has been working on a gene-therapy approach ("TNFerade") to induce TNF-alpha expression in tumors. That's not a crazy idea, by any means, although (as with all attempts at gene therapy) getting it to work is extremely tricky.
And so it has proved in this case. It's been a long, hard process finding that out, too. Over the years, the company has looked at TNFerade for metastatic melanoma, soft tissue sarcoma, and other cancers. They announced positive data back in 2001, and had some more encouraging news on pancreatic cancer in 2006 (here's the ASCO abstract on that one). But last night, the company announced that an interim review of the Phase III trial data showed that the therapy was not going to make any endpoint, and the trial was discontinued. Reports are that TNFerade is being abandoned entirely.
This is bad news, of course. I'd very much like gene therapy to turn into a workable mode of treatment, and I'd very much like for people with advanced pancreatic cancer to have something to turn to. (It's truly one of the worst diagnoses in oncology, with a five-year survival rate of around 5%). A lot of new therapeutic ideas have come up short against this disease, and as of yesterday, we can add another one to the list. And we can add another Promising in Phase II / Nothing in Phase III drug to the list, too, the second one this week. . .
Category: Biological News | Cancer | Clinical Trials
March 29, 2010
We get reminded again and again that interesting Phase II results are only that: interesting, and no guarantee of anything. Antisoma (and their partner Novartis) are the latest to illustrate that painful reality - their drug ASA404 (vadimezan) looked in Phase II as if it might be a useful addition to oncology treatments, but has completely missed its endpoints in the bigger, more realistic world of Phase III. The trial was halted after an interim analysis showed basically no hope of its showing benefit if things continued.
There are many reasons why these things happen. Phase II trials are typically smaller, and their patient populations are more carefully selected. And they're quite susceptible to wishful thinking. They're designed to keep things going, to show some reason to proceed, and they often do. If your drug candidate makes it through Phase II, that may say more about how you designed the trial than it says about the compound.
That's not to say that getting past Phase II is meaningless. Compared to having no efficacy data at all, it's a big step. But Phase III, when a compound goes out to a larger and more diverse patient population, is a much bigger one. And plenty of candidates aren't up to it.
Category: Cancer | Clinical Trials
For the medicinal chemists in the audience, I wanted to strongly recommend a new paper from a group at Roche. It's a tour through the various sorts of interactions between proteins and ligands, with copious examples, and it's a very sensible look at the subject. It covers a number of topics that have been discussed here (and throughout the literature in recent years), and looks to be an excellent one-stop reference.
In fact, read the right way, it's a testament to how tricky medicinal chemistry is. Some of the topics are hydrogen bonds (and why they can be excellent keys to binding or, alternatively, of no use whatsoever), water molecules bound to proteins (and why disturbing them can account for large amounts of binding energy, or, alternatively, kill your compound's chances of ever binding at all), halogen bonds (which really do exist, although not everyone realizes that), interactions with aryl rings (some of which can be just as beneficial coming in at 90 degrees to where you might imagine), and so on.
And this is just to get compounds to bind to their targets, which is the absolute first step on the road to a drug. Then you can start worrying about how to have your compounds not bind to things you don't want (many of which you probably don't even realize are out there). And about how to get your compound to decent blood levels, for a decent amount of time, and into the right compartments of the body. And at that point, it's nearly time to see if it does any good for the disease you're trying to target!
Category: Drug Assays | In Silico | Life in the Drug Labs
March 26, 2010
As we slowly attack the major causes of disease, and necessarily pick the low-hanging fruit in doing so, it can get harder and harder to see the effects of the latest advances. Nowhere, I'd say, is that more true than for cardiovascular disease, which is now arguably the most well-served therapeutic area of them all. It's not that there aren't things to do (or do better) - it's that showing the benefit of them is no easy task.
Robert Fortner has a good overview of the problem here. The size of the trials needed in this area is daunting, but they have to be that size to show the incremental improvements that we're down to now. He also talks about oncology, but that one's a bit of a different situation, to my mind. There's plenty of room to show a dramatic effect in a lot of oncology trials, it's just that we don't know how to cause one. In cardiovascular, on the other hand, the space in which to show something amazing has flat-out decreased. This is a feature, by the way, not a bug. . .
Category: Cancer | Cardiovascular Disease | Clinical Trials | Drug Industry History
Technical book author (and occasional commenter here) Robert Bruce Thompson has a channel on YouTube called "The Home Scientist" that's quite interesting. Many of these seem to be companion videos for his book, The Illustrated Guide to Home Chemistry Experiments. This is real, well-done chemistry with reagents that can be easily purchased and manipulated by a competent non-chemist. Well worth sending on to people who would like to get a feel for what the science is like!
Category: General Scientific News
The discussion of "privileged scaffolds" in drugs here the other day got me to thinking. A colleague of mine mentioned that there may well be structures that don't hit nearly as often as you'd think. The example that came to his mind was homopiperazine, and he might have a point; I've never had much luck with those myself. That's not much of a data set, though, so I wanted to throw the question out for discussion.
We'll have to be careful to account for Commercial Availability Bias (which at least for homopiperazines has decreased over the years) and Synthetic Tractability Bias. Some structures don't show up much because they just don't get made much. And we'll also have to be sure that we're talking about the same things: benzo-fused homopiperazines (and other fused seven-membered rings) hit like crazy, as opposed to the monocyclic ones, which seem to be lower down the scale, somehow.
It's not implausible that there should be underprivileged scaffolds. The variety of binding sites is large, but not infinite, and I'm sure that it follows a power-law distribution like so many other things. The usual tricks (donor-acceptor pairs spaced about so wide apart, pi-stacking sandwiches, salt bridges) surely account for much more than their random share of the total amount of binding stabilization out there in the biosphere. And some structures are going to match up with those motifs better than others.
So, any nominations? Have any of you had structural types that seem as if they should be good, but always underperform?
Category: Drug Assays | Drug Industry History | Life in the Drug Labs
March 25, 2010
. . .now's a heck of a time to buy. I just noticed this in my e-mail - thanks to Pfizer, and their appetite for closing down research buildings, you now have the opportunity to buy massive piles of once-useful instruments at auction.
Don't let the fact that a massive drug company has no need for all this equipment put you off. You might be able to find a use for it! It's good stuff: high-field NMRs, LC/mass spec machines of all sorts, liquid handlers, robotics platforms, cell culture apparatus, spectrophotometers, microscopes, centrifuges. . .the list is a long one. Removal of evil spirits is, as far as I can tell, not included. But otherwise, there's everything you'd need to start a productive research company, except the employees. And there are plenty of those on the market, too, you know.
Category: Business and Markets
Nature has a review of a new book on the anti-aging field, Eternity Soup by Greg Critser, and I found this part very instructive. The same things apply to several other therapeutic areas where people see fast money to be made:
Critser's methodical portrayal of a host of anti-ageing practitioners reveals some fascinating people who seek to convince others that they can purchase longer and healthier lives like any other commodity. He makes clear that many anti-ageing treatments are based more on faith healing than on science, and that the industry defends them and presents them to the public with evangelical zeal. Scientific gerontologists who point out the lack of empirical evidence behind the claims are shouted down, sued for libel or made fun of as lab technicians or statisticians with no experience in treating patients.
Critser became aware during his research of why the ridiculed scientific gerontologists find the anti-ageing industry so aggravating. The industry closely monitors the field for any advances, and when it spots something that might be turned into a commercial enterprise, the product is repackaged, branded and sold to the public as the next great breakthrough of its own invention. . .
It's interesting, though, that the cancer-cure quacks tend not to ride so much on the current research. A lot of that stuff seems just to be completely made up, without even a connection to something in the scientific literature. Perhaps that's because there are occasional spontaneous remissions from cancer, but none from old age. . .
Category: Aging and Lifespan | Cancer | Snake Oil
In recent years, readers of the top-tier journals have been bombarded with papers on nanotechnology as a possible means of drug delivery. At the same time, there's been a tremendous amount of time and money put into RNA-derived therapies, trying to realize the promise of RNA interference for human therapies. Now we have what I believe is the first human data combining both approaches.
Nature has a paper from CalTech, UCLA, and several other groups with the first data on a human trial of siRNA delivered through targeted nanoparticles. This is only the second time siRNA has been tried systemically on humans at all. Most of the previous clinical work has involved direct injection of various RNA therapies into the eye (which is a much less hostile environment than the bloodstream), but in 2007, a single Gleevec-resistant leukaemia patient was dosed in a nontargeted fashion.
In this study, metastatic melanoma patients, a population that is understandably often willing to put themselves out at the edge of clinical research, were injected with engineered nanoparticles from Calando Pharmaceuticals, containing siRNA against the ribonucleotide reductase M2 (RRM2) target, which is known to be involved in malignancy. The outside of the particles contained a protein ligand to target the transferrin receptor, an active transport system known to be upregulated in tumor cells. And this was to be the passport to deliver the RNA.
A highly engineered system like this addresses several problems at once: how do you keep the RNA you're dosing from being degraded in vivo? (Wrap it up in a polymer - actually, two different ones in spherical layers). How do you deliver it selectively to the tissue of interest? (Coat the outside with something that tumor cells are more likely to recognize). How do you get the RNA into the cells once it's arrived? (Make that recognition protein something that gets actively imported across the cell membrane, dragging everything else along with it). This system had been tried out in models all the way up to monkeys, and in each case the nanoparticles could be seen inside the targeted cells.
And that was the case here. The authors report biopsies from three patients, pre- and post-dosing, that show uptake into the tumor cells (and not into the surrounding tissue) in two of the three cases. What's more, they show that a tissue sample has decreased amounts of both the targeted messenger RNA and the subsequent RRM2 protein. Messenger RNA fragments showed that this reduction really does seem to be taking place through the desired siRNA pathway (there's been a lot of argument over this point in the eye therapy clinical trials).
It should be noted, though, that this was only shown for one of the patients, in which the pre- and post-dosing samples were collected ten days apart. In the other responding patient, the two samples were separated by many months (making comparison difficult), and the patient that showed no evidence of nanoparticle uptake also showed, as you'd figure, no differences in their RRM2. Why Patient A didn't take up the nanoparticles is as yet unknown, and since we only have these three patients' biopsies, we don't know how widespread this problem is. In the end, the really solid evidence is again down to a single human.
But that brings up another big question: is this therapy doing the patients any good? Unfortunately, the trial results themselves are not out yet, so we don't know. That two-out-of-three uptake rate, although a pretty small sample, could well be a concern. The only between-the-lines inference I can get is this: the best data in this paper is from patient C, who was the only one to do two cycles of nanoparticle therapy. Patient A (who did not show uptake) and patient B (who did) had only one cycle of treatment, and there's probably a very good reason why. These people are, of course, very sick indeed, so any improvement will be an advance. But I very much look forward to seeing the numbers.
Category: Biological News | Cancer | Clinical Trials | Pharmacokinetics
March 24, 2010
Here's a new article on the concept of "privileged scaffolds", the longstanding idea that there seem to be more biologically active compounds built around some structures than others. This doesn't look like it tells me anything I didn't know, but it's a useful compendium of such structures if you're looking for one. Overall, though, I'm unsure of how far to push this idea.
On the one hand, it's certainly true that some structural motifs seem to match up with binding sites more than others (often, I'd say, because of some sort of donor-acceptor pair motif that tends to find a home inside protein binding sites). But in other cases, I think that the appearance of what looks like a hot scaffold is just an artifact of everyone ripping off something that worked - others might have served just as well, but people ran with what had been shown to work. And then there are other cases, where I think that the so-called privileged structure should be avoided for everyone's good: our old friend rhodanine makes an appearance in this latest paper, for example. Recall that this one has been referred to as "polluting the literature", with which judgment I agree.
Category: Drug Assays | Drug Industry History
I've spoken about fragment-based drug design and ligand efficiency here a few times. There's a new paper in J. Med. Chem. that puts some numbers on that latter concept. (Full disclosure - I've worked with its author, although I had nothing to do with this particular paper).
For the non-chemists in the crowd who want to know what I'm talking about, fragment-based methods are an attempt to start with smaller, weaker-binding chemical structures than we usually work with. But if you look at how much affinity you're getting for the size of the molecules, you find that some of these seemingly weaker compounds are actually doing a great job for their size. Starting from these and building out, with an eye along the way toward keeping that efficiency up, could be a way of making better final compounds than you'd get by starting from something larger.
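To put a number on "affinity you're getting for the size", the usual metric is ligand efficiency: binding free energy divided by the number of non-hydrogen atoms. Here's a minimal sketch of that calculation (the Kd values and atom counts below are invented for illustration, not taken from the paper):

```python
import math

def ligand_efficiency(kd_molar: float, heavy_atoms: int,
                      temp_k: float = 298.15) -> float:
    """Binding free energy per heavy (non-hydrogen) atom, in kcal/mol.

    LE = -dG / N_heavy, with dG = RT * ln(Kd).
    """
    R = 1.987e-3  # gas constant in kcal/(mol*K)
    delta_g = R * temp_k * math.log(kd_molar)  # negative for Kd < 1 M
    return -delta_g / heavy_atoms

# A weak-binding fragment vs. a larger, far more potent lead:
fragment_le = ligand_efficiency(kd_molar=1e-3, heavy_atoms=12)  # 1 mM, 12 atoms
lead_le     = ligand_efficiency(kd_molar=1e-8, heavy_atoms=38)  # 10 nM, 38 atoms
print(f"fragment LE: {fragment_le:.2f} kcal/mol per heavy atom")
print(f"lead LE:     {lead_le:.2f} kcal/mol per heavy atom")
```

The point the sketch makes: the millimolar fragment, despite binding five orders of magnitude more weakly, comes out *more* efficient per atom than the nanomolar lead - which is why starting from it and building out carefully can pay off.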
Looking over a number of examples where the starting compounds can be compared to the final drugs (not a trivial data set to assemble, by the way), this work finds that drugs, compared to their corresponding leads, tend to have similar to slightly higher binding efficiencies, although there's a lot of variability. They also tend to have similar logP values, which is a finding that doesn't square with some previous analyses (which s