About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany as a post-doc on a Humboldt Fellowship. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek, email him directly: email@example.com
August 31, 2012
Eli Lilly has been getting shelled with bad news recently. There was the not-that-encouraging-at-all failure of its Alzheimer's antibody solanezumab to meet any of its clinical endpoints. But that's the good news, since (at least according to the company) it showed some signs of something in some patients.
We can't say that about pomaglumetad methionil (LY2140023), their metabotropic glutamate receptor ligand for schizophrenia, which is being halted. The first large trial of the compound failed to meet its endpoint, and an interim analysis showed that the drug was unlikely to have a chance of making its endpoints in the second trial. It will now disappear, as will the money spent on it so far. (The first drug project I ever worked on was a backup for an antipsychotic with a novel mechanism, which also failed to do a damned thing in the clinic, and which experience perhaps gave me some of the ideas I have now about drug research).
This compound is an oral prodrug of LY404039, which has a rather unusual structure. The New York Times did a story about the drug's development a few years ago, which honestly makes rather sad reading in light of the current news. It was once thought to have great promise. Note the cynical statement in that last link about how it really doesn't matter if the compound works or not - but you know what? It did matter in the end. This was the first compound of its type, an attempt at a real innovation through a new mechanism to treat mental illness, just the sort of thing that some people will tell you that the drug industry never gets around to doing.
And just to round things off, Lilly announced the results of a head-to-head trial of its anticoagulant drug Effient versus (now generic) Plavix in acute coronary syndrome. This is the sort of trial that critics of the drug industry keep saying never gets run, by the way. But this one was, because Plavix is the thing to beat in that field - and Effient didn't beat it, although there might have been an edge in long-term followup.
Anticoagulants are a tough field - there are a lot of patients, a lot of money to be made, and a lot of room (in theory) for improvement over the existing agents. But just beating heparin is hard enough, without the additional challenge of beating cheap Plavix. It's a large enough patient population, though, that more than one drug is needed because of different responses.
There have been a lot of critics of Lilly's research strategy over the years, and a lot of shareholders have been (and are) yelling for the CEO's head. But from where I sit, it looks like the company has been taking a lot of good shots. They've had a big push in Alzheimer's, for example. Their gamma-secretase inhibitor, which failed in terrible fashion, was a first of its kind. Someone had to be the first to try this mechanism out; it's been a goal of Alzheimer's research for over twenty years now. Solanezumab was a tougher call, given the difficulties that Elan (and Wyeth/Pfizer, J&J, and so on) have had with that approach over the years. But immunology is a black box, different antibodies do different things in different people, and Lilly's not the only company trying the same thing. And they've been doggedly pursuing beta-secretase as well. These, like them or not, are still some of the best ideas that anyone has for Alzheimer's therapy. And any kind of win in that area would be a huge event - I think that Lilly deserves credit for having the nerve to go after such a tough area, because I can tell you that I've been avoiding it ever since I worked on it in the 1990s.
But what would I have spent the money on instead? It's not like there are any low-risk ideas crowding each other for attention. Lilly's portfolio is not a crazy or stupid one - it's not all wild ideas, but it's not all full of attempts to play it safe, either. It looks like the sort of thing any big (and highly competent) drug research organization could have ended up with. The odds are still very much against any drug making it through the clinic, which means that having three (or four, or five) in a row go bad on you is not an unusual event at all. Just a horribly unprofitable one.
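That point about consecutive failures can be made concrete with a back-of-the-envelope calculation. Assuming, purely for illustration, a roughly 10% overall clinical success rate per candidate (a commonly cited ballpark, not a figure from this post), a run of several straight failures is actually the expected outcome:

```python
# Back-of-the-envelope: if each clinical candidate succeeds independently
# with probability p, how likely is a run of k straight failures?
# p = 0.10 is an illustrative ballpark, not a figure from the post.

def prob_consecutive_failures(p_success: float, k: int) -> float:
    """Probability that k independent candidates all fail."""
    return (1.0 - p_success) ** k

for k in (3, 4, 5):
    print(f"{k} failures in a row: {prob_consecutive_failures(0.10, k):.0%}")
```

Under those assumptions, even five failures in a row happens more often than not, which is exactly why a string of clinical losses says little by itself about the quality of a portfolio.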
Category: Cardiovascular Disease | Clinical Trials | Drug Development | Drug Industry History | The Central Nervous System
August 30, 2012
Here's yet another chance to play the human biology game that might as well be called "Now what?" That's when we find that what we thought we knew is actually wrong, more complicated, or a sign of something else entirely.
Today's entry is niacin. As many readers know, it looks like it should be a promising therapy for patients whose lipoproteins are out of whack. It lowers LDL, raises HDL, lowers free fatty acids, and lowers triglycerides, and all those things are supposed to be good. (As came up in the comments to yesterday's post, though, the evidence is pretty strong for that first proposition, but not as solid for the others). Still, if you went around to thousands of cardiologists and asked them if they'd be interested in a therapy that did those four things, you'd get a resounding "Yes".
So why hasn't niacin taken over the world? Because of the side effects. It has to be taken in rather stiff doses to show the lipid effects, and those tend to cause a nasty skin flush reaction, which is apparently unpleasant enough that most people won't put up with it. Various attempts have been made to abrogate this, with the most direct assault being Merck's (failed) Cordaptive.
The flushing is thought to be mediated through the receptor GPR109A, via a prostaglandin pathway. Unfortunately, it's also believed that niacin's beneficial effects are mediated through that receptor, too, via some mechanism that starts with the lowering of free fatty acids. If you knock out the receptor in mice, you get no skin flushing, but no FFA lowering, either.
We must now revise that idea. A new paper tests that hypothesis with two non-niacin agonists, MK-1903 (a compound via Arena Pharmaceuticals, I believe) and SCH900271, and their effects in humans. They also report niacin's effects in the receptor knockout mice, claiming that although the FFA lowering does indeed disappear, the downstream lipid effects remain. (That surprises me; I'd thought that had already been studied).
But the human data are especially revealing. The two new agonists do indeed show FFA effects, as you'd expect from compounds hitting GPR109A. But they do not show chronic free fatty acid lowering, nor do they have the desired downstream effects on blood lipids. So it appears inescapable that niacin's effects are going through some other pathway, one that doesn't depend on GPR109A or its (transient) free fatty acid lowering. Back to the drawing board everyone gets to go.
But niacin has been heading there already. Readers may remember that a trial of a niacin-and-statin combination had to be stopped early because the cardiovascular effects were (alarmingly) going in the other direction. Not only was there no benefit, but there seemed to be active harm. Taken together, all this tells us that we have an awful lot to learn about some things that we thought we were starting to understand. . .
Category: Cardiovascular Disease
August 29, 2012
Nature is out today with a paper on the results of a calorie-restriction study that began in 1987. This one took place with rhesus monkeys at the National Institute on Aging, and I'll skip right to the big result: no increase in life span.
That's in contrast to a study from 2009 (also in rhesus) that did see an extension - but as this New York Times article details, there are a number of differences between the two studies that confound interpretation. For one thing, a number of monkeys that died in the Wisconsin study were not included in the results, since it was determined that they did not die of age-related causes. The chow mixtures were slightly different, as were the monkeys' genetic backgrounds. And a big difference is that the Wisconsin control animals were fed ad libitum, while the NIA animals were controlled to a "normal" level of calorie intake (and were smaller than the Wisconsin controls in the end).
Taken together with this study in mice, which found great variation in response to caloric restriction depending on the strain of mouse used, it seems clear that this is not one of those simple stories. It also greatly complicates attempts to link the effect of various small molecules to putative caloric restriction pathways. I used to think that caloric restriction was the bedrock result of the whole aging-and-lifespan research world - so now what? More complications, is what. Some organisms, under some conditions, do seem to show longevity effects. But unraveling what's going on is just getting trickier and trickier as time goes on.
I wanted to take a moment as well to highlight something that caught my eye in the Times article linked above. Here:
. . .Lab test results showed lower levels of cholesterol and blood sugar in the male monkeys that started eating 30 percent fewer calories in old age, but not in the females. Males and females that started dieting when they were old had lower levels of triglycerides, which are linked to heart disease risk. Monkeys put on the diet when they were young or middle-aged did not get the same benefits, though they had less cancer. But the bottom line was that the monkeys that ate less did not live any longer than those that ate normally. . .
Note that line about "benefits". The problem is, as far as I can see (Nature's site is down as I write), the two groups of monkeys appear to have shown the same broad trends in cardiovascular disease. And cardiovascular outcomes are supposed to be the benefits of better triglyceride numbers, aren't they? You don't just lower them to lower them, you lower them to see better health. More on this as I get a chance to see the whole paper. . .
Category: Aging and Lifespan | Cardiovascular Disease | Diabetes and Obesity
Here you go, from IKA. If you can make it up to about 1:52 or so, that's when the traditional hard-sell starts. But up until then, it's pretty painful, and not least because the model playing a chemist is evaporating a bright green solution (sure thing) and the receiving flask is light blue (oh yeah). More unlikely colors are to be seen in the sales-pitch part of the video that follows, but at least there's no acting, or whatever that's supposed to be. Yikes.
Category: Chemical News
Startup biopharma companies: they've gotta raise money, right? And the more money, the better, right? Not so right, according to this post by venture capitalist Bruce Booth. Companies need money, for sure, but above a certain threshold there's no correlation with success, either for the company's research portfolio or its early stage investors. (I might add that the same holds true for larger drug companies as well, for somewhat different reasons. Perhaps Pfizer's strategy over the last twenty years has had one (and maybe only one) net positive effect: it's proven that you cannot humungous your way to success in this business. And yes, since you ask, that's the last time I plan to use "humungous" as a verb for a while).
There's also a fascinating look back at FierceBiotech's 2007 "Top Deals", to see what became of the ten largest financing rounds on the list. Some of them have worked out, and some of them most definitely haven't: four of the ten were near-total losses. One's around break-even, two are "works in progress" but could come through, and three have provided at least 2x returns. (Read his post to attach names to these!) And as Booth shows, that's pretty much what you'd expect from the distribution over the entire biotech industry, including all the wild-eyed stuff and the riskiest small fry. Going with the biggest, most lucratively financed companies bought you, in this case, no extra security at all.
A note about those returns: one of the winners on the list is described as having paid out "modest 2x returns" to the investors. That's the sort of quote that inspires outrage among the clueless, because (of course) a 100% profit is rather above the market returns for the last five years. But the risk/reward ratio has not been repealed. You could have gotten those market returns by doing nothing, just by parking the cash in a couple of index funds and sitting back. Investing in startup companies requires a lot more work, because you're taking on a lot more risk.
It was not clear which of those ten big deals in 2007 would pay out, to put it mildly. In fact, if you take Booth's figures so far, an equal investment in each of the top seven companies on the list in 2007 would leave you looking at a slight net loss to date, and that includes one company that would have paid you back at about 3x to 4x. Number eight was the big winner on the list (5x, if you got out at the perfect peak, and good luck with that), and number nine is the 2x return (while number ten is ongoing, but a likely loss). As any venture investor knows, you're looking at a significant risk of losing your entire investment whenever you back a startup, so you'd better (a) back more than one and (b) do an awful lot of thinking about which ones those are. This is a job for the deeply pocketed.
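The equal-weight arithmetic is worth seeing laid out. The multiples below are hypothetical numbers I've chosen to loosely match the summary above (four near-total losses, one break-even, one work-in-progress, and one 3x-4x winner among the top seven); none of them come from Booth's actual dataset:

```python
# Illustrative only: hypothetical payout multiples loosely matching the
# post's summary of Booth's figures. Four near-total losses, one
# break-even, one work-in-progress, one 3x-4x winner among the top seven.
# None of these numbers are from the real dataset.
top_seven = [0.05, 0.05, 0.05, 0.05, 1.0, 0.6, 3.5]

# Equal-weight portfolio: invest $1 in each deal, sum the payouts.
invested = len(top_seven)
returned = sum(top_seven)
print(f"Invested {invested}, got back {returned:.2f} "
      f"-> overall multiple {returned / invested:.2f}x")
```

Even with one 3.5x winner in the basket, the four wipeouts drag the equal-weight portfolio below 1x, which is the "slight net loss" described above; the whole outcome hinges on whether you also caught the 5x deal further down the list.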
And when you think about it, a very similar situation obtains inside a given drug company. The big difference is that you don't have the option of not playing the game - something always has to be done. There are always projects going, some of which look more promising than others, some of which will cost more to prosecute than others, and some of which are aimed at different markets than others. You might be in a situation where there are several that look like they could be taken on, but your development organization can't handle so many. What to do? Partner something, park something that can wait (if anything can)? Or you might have the reverse problem, of not enough programs that look like they might work. Do you push the best of a bad lot forward and hope for the best? If not, do you still pay your development people even if they have nothing to develop right now, in the hopes that they soon will?
Which of these clinical programs of yours have the most risk? The biggest potential? Have you balanced those properly? You're sure to lose your entire investment on the majority - the great majority - of them, so choose as wisely as you can. The ones that make it through are going to have to pay for all the others, because if they don't, everyone's out of a job.
This whole process, of accumulating capital and risking it on new ventures, is important enough that we've named an entire economic system for it. It's a high-wire act. Too cautious, and you might not keep up enough to survive. Too risky, and you could lose too much. They do focus one's attention, such prospects, and the thought that other companies are out there trying to get a step on you helps keep you moving, too. It's not a pretty system, but it isn't supposed to be. It's supposed to work.
Category: Business and Markets | Drug Development | Drug Industry History
August 28, 2012
There's an odd retraction in the synthetic chemistry literature. A synthesis of the lundurine alkaloid core from the Martin group at Texas was published last year, and its centerpiece was a double-ring-closing olefin metathesis reaction. (Coincidentally, that reaction was one of the "Black Swan" examples in the paper I blogged about yesterday - the initial reports of it from the 1960s weren't appreciated by the synthetic organic community for many years).
Now the notice says that the paper is being retracted because that RCM reaction is "not reproducible". (The cynical among you will already be wondering when that became a criterion for retraction in the literature - if it works once, it's in, right?)
There are more details at The Heterocyclist, a blog by the well-known synthetic chemist Will Pearson that I've been remiss in not highlighting before now. While you're there, fans of the sorts of chemicals I write about in "Things I Won't Work With" might enjoy this post on the high explosive RDX, and the Michigan chemist (Werner Bachmann) who figured out how to synthesize it on scale during World War II.
Category: Chemical News | The Scientific Literature
August 27, 2012
What's a Black Swan Event in chemistry? Longtime industrial chemist Bill Nugent has a very interesting article in Angewandte Chemie with that theme, and it's well worth a look. He details several examples of things that all organic chemists thought they knew that turned out not to be so, and traces the counterexamples back to their first appearances in the literature. For example, the idea that gold (and gold complexes) were uninteresting catalysts:
I completed my graduate studies with Prof. Jay Kochi at Indiana University in 1976. Although research for my thesis focused on organomercury chemistry, there was an active program on organogold chemistry, and our perspective was typical for its time. Gold was regarded as a lethargic and overweight version of catalytically interesting copper. Moreover, in the presence of water, gold(I) complexes have a nasty tendency to disproportionate to gold(III) and colloidal gold(0). Gold, it was thought, could provide insight into the workings of copper catalysis but was simply too inert to serve as a useful catalyst itself. Yet, during the decade after I completed my Ph.D. in 1976 there were tantalizing hints in the literature that this was not the case.
One of these was a high-temperature rearrangement reported in 1976, and there was a 1983 report on gold-catalyzed oxidation of sulfides to sulfoxides. Neither of these got much attention, as Nugent's own chart of the literature on the subject shows. (I don't pay much attention when someone oxidizes a sulfide, myself). Apparently, though, a few people had reason to know that something was going on:
However, analytical chemists in the gold-mining industry have long harnessed the ability of gold to catalyze the oxidation of certain organic dyes as a means of assaying ore samples. At least one of these reports actually predates the (1983) Natile publication. Significantly, it could be shown that other precious metals do not catalyze the same reactions; the assays are specific for gold. It is safe to say that the synthetic community was not familiar with this report.
I'll bet not. It wasn't until 1998 that a paper appeared that really got people interested, and you can see the effect on that chart. Nugent has a number of other similar examples of chemistry that appeared years before its potential was recognized. Pd-catalyzed C-N bond formation, monodentate asymmetric hydrogenation catalysts, the use of olefin metathesis in organic synthesis, non-aqueous enzyme chemistry, and many others.
So where do the black swans come into all this? Those familiar with Nassim Taleb's book will recognize the reference.
The phrase “Black Swan event” comes from the writings of the statistician and philosopher Nassim Nicholas Taleb. The term derives from a Latin metaphor that for many centuries simply meant something that does not exist. But also implicit in the phrase is the vulnerability of any system of thought to conflicting data. The phrase's underlying logic could be undone by the observation of a single black swan.
In 1697, the Dutch explorer Willem de Vlamingh discovered black swans on the Swan River in Western Australia. Not surprisingly, the phrase underwent a metamorphosis and came to mean a perceived impossibility that might later be disproven. It is in this sense that Taleb employs it. In his view: “What we call here a Black Swan (and capitalize it) is an event with the following three attributes. First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in spite of its outlier status, human nature makes us concoct an explanation for its occurrence after the fact, making it explainable and predictable.”
Taleb has documented this last point about human nature through historical and psychological evidence. His ideas remain controversial but seem to make a great deal of sense when one attempts to understand the lengthy interludes between the literature antecedents and the disruptive breakthroughs shown. . .At the very least, his ideas represent a heads up as to how we read and mentally process the chemical literature.
I have no doubt that unwarranted assumptions persist in the conventional wisdom of organic synthesis. (Indeed, to believe otherwise would suggest that disruptive break- throughs will no longer occur in the future.) The goal, it would seem, is to recognize such assumptions for what they are and to minimize the time lag between the appearance of Black Swans and the breakthroughs that follow.
One difference between Nugent's examples and Taleb's is the "extreme impact" part. I think that Taleb has in mind events in the financial industry like the real estate collapse of 2007-2008 (recommended reading here), or the currency events that led to the wipeout of Long-Term Capital Management in 1998. The scientific literature works differently. As this paper shows, big events in organic chemistry don't come on as sudden, unexpected waves that sweep everything before them. Our swans are mute. They slip into the water so quietly that no one notices them for years, and they're often small enough that people mistake them for some other bird entirely. Thus the time lag.
How to shorten that? It'll be hard, because a lot of the dark-colored birds you see in the scientific literature aren't amazing black swans; they're crows and grackles. (And closer inspection shows that some of them are engaged in such unusual swan-like behavior because they're floating inertly on their sides). The sheer size of the literature now is another problem - interesting outliers are carried along in a flood tide of stuff that's not quite so interesting. (This paper mentions that very problem, along with a recommendation to still try to browse the literature - rather than only doing targeted searches - because otherwise you'll never see any oddities at all).
Then there's the way that we deal with such things even when we do encounter them. Nugent's recommendation is to think hard about whether you really know as much as you think you do when you try to rationalize away some odd report. (And rationalizing them away is the usual response). The conventional wisdom may not be as solid as it appears; you can probably put your foot through it in numerous places with a well-aimed kick. As the paper puts it: "Ultimately, the fact that something has never been done is the flimsiest of evidence that it cannot be done."
That's worth thinking about in terms of medicinal chemistry, as well as organic synthesis. Look, for example, at Rule-Of-Five type criteria. We've had a lot of discussions about these around here (those links are just some of the more recent ones), and I'll freely admit that I've been more in the camp that says "Time and money are fleeting, bias your work towards friendly chemical space". But it's for sure that there are compounds that break all kinds of rules and still work. Maybe more time and money should go into figuring out what it is about those drugs, and whether there are any general lessons we can learn about how to break the rules wisely. It's not that work in this area hasn't been done, but we still have a poor understanding of what's going on.
Category: Chemical News | Drug Industry History | The Scientific Literature | Who Discovers and Why
August 24, 2012
Lilly has reported results from its anti-amyloid antibody, solanezumab, and. . .well, it's mixed. And it's either quite good news, or quite bad. You make the call.
The therapy missed its endpoints (both "cognitive and functional", according to the company) in two clinical trials, so that's clearly bad news. Progression of Alzheimer's disease was not slowed. But I'll let the company's press release tell the tale from there:
The EXPEDITION1 study did not meet co-primary cognitive and functional endpoints in the overall mild-to-moderate patient population; however, pre-specified secondary subgroup analyses in patients with mild Alzheimer's disease showed a statistically significant reduction in cognitive decline. Based on those results, Lilly modified the statistical analysis plan (SAP) for EXPEDITION2 prior to database lock to specify a single primary endpoint of cognition in the mild patient population. This revised primary endpoint did not achieve statistical significance.
Now, this news - what you've just read above - actually is sending Lilly's stock up as I write this, which tells you how beaten-down Eli Lilly investors are, or how beaten-down investors in Alzheimer's therapies are. Or both. The headlines are all about how the drug missed in these trials, but that the company sees some hope. But man, is it ever a faint one.
What I'm taking away from the company's statement is that they had a cognition endpoint defined at the beginning of the trial (as well they should). We can assume that it was not a wildly optimistic one; no one is wildly optimistic in this field. And solanezumab missed it in the first Phase III data. But the patients with milder Alzheimer's, when they looked more closely, showed a trend towards efficacy, so they modified the endpoints (that is, lowered the bar and narrowed down to a select population) in the data for the second Phase III before it finished up. And even then, the antibody missed. So what we have are trends, possible trends, but nothing that really gets to the level of statistical significance.
But note, they're talking cognitive efficacy, and there's nothing said about those functional endpoints. If I'm interpreting this right, that means that there was a trend towards efficacy in tests like remembering words and lists of numbers, but not a trend when it came to actually performing better in real-life circumstances. Am I seeing this correctly? Lilly will be presenting more data in October, and we'll know more then. But I'm not getting an optimistic feeling from all this.
I assume that the company is now talking about going back and rounding up a population of the mildest Alzheimer's patients it can find and giving solanezumab another shot. Given Lilly's pipeline and situation, I suppose I'd do the same thing, but this is really a back-to-the-wall move. I think that you'd want to see something in a functional endpoint to really make a case for the drug, for one thing, and out in the real world, diagnosing Alzheimer's that early is not so easy, as far as I know. Good luck to them, but they are really going to need it.
Category: Alzheimer's Disease | Clinical Trials
Over at Chemistry Blog, there's a post by Quintus on the synthesis of a complex natural product, FR-182877. The route is interesting in that it features a key Diels-Alder reaction, and the post mentions that this isn't a reaction that gets used much in industry.
True enough - that one and the Claisen rearrangement are the first reactions I think of in the category of "taught in every organic chemistry course, haven't run one in years". In the case of the Claisen, the number of years is now getting up to. . .hmm, about 26, I think. The Diels-Alder has shown up a bit more often for me, and someone in my lab was running one last year, but it was the first time she'd ever done it (after many years of drug discovery experience).
Why is that? The post I linked to suggested a good reason that one isn't done too often on scale: it can be unpredictably exothermic, and some of the reactants can decide to polymerize instead, which you don't want, either. That can be very exothermic, too, and leaves you with a reactor full of useless plastic gunk which will have to be removed with tools ranging from a scoop to a saw. This is a good time to adduce the benefits of flow chemistry, which has been successfully applied in such cases, and is worth thinking about any time you have a batch reaction that might take off on you.
But to scale something up, you need to have an interest in that structure to start with. There's another reason that you don't see so many Diels-Alders in drug synthesis, and it has to do with the sorts of molecules we tend to make. The cycloaddition gives you a three-dimensional structure with stereocenters, and medicinal chemistry, notoriously, tends to favor flat aromatic rings, sometimes very much to its detriment. Many drug discovery departments have taken the pledge over the years to try to cut back on the flatness and introduce more sp3 carbons, but it doesn't always take. (For one thing, if your leads are coming out of your screening collection, odds are you'll be starting with something on the flat end of the scale, because that's what your past projects filled the files with).
I think that fragment-based drug discovery has a better chance of giving you 3-D leads, but only if you pay attention while you're working on it. Those hits can sometimes be prosecuted in the flat-and-aryl style, too, if you insist. And I think it's fair to say that a lot of fragment hits have an aryl (especially a heteroaryl) ring in them, which might reflect the ease of assembling a fragment-sized library of compounds full of such. Even the fragment folks have been talking over the years about the need to get more three-dimensionality into the collections, and vendors have been pitching this as a feature of their offerings.
The other rap on the classic Diels-Alder reaction is that it gives you substituted cyclohexanes, which aren't always the first place you look for drug leads. But the hetero-Diels-Alder reactions can give you a lot of interesting compounds that look more drug-like, and I think that they deserve more play than they get in this business. I'll go ahead and take a public pledge to run a series of them before the year is out!
Category: Chemical News | Life in the Drug Labs
August 23, 2012
So here's a comment to this morning's post on stock buybacks, referring both to it and my replies to Donald Light et al. last week. I've added links:
Did you not spend two entire posts last week telling readers how only pharma "knows" how to do drug research and that we should "trust" them and their business model? Now you seem to say that they are either incompetent or conmen looking for a quick buck. So what is it? Does pharma (as it exists today) have a good business model or are they conmen/charlatans out for money? Do they "know" what they are doing? Or are they faking competence?
False dichotomy. My posts on the Donald Light business were mostly to demonstrate that his ideas of how the drug industry works are wrong. I was not trying to prove that the industry itself is doing everything right.
That's because it most certainly isn't. But it is the only biopharma industry we have, and before someone comes along with a scheme to completely rework it, one should ask whether that's a good idea. In this very context, the following quote from Chesterton has been brought up, and it's very much worth keeping in mind:
In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, "I don't see the use of this; let us clear it away." To which the more intelligent type of reformer will do well to answer: "If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it."
This paradox rests on the most elementary common sense. The gate or fence did not grow there. It was not set up by somnambulists who built it in their sleep. It is highly improbable that it was put there by escaped lunatics who were for some reason loose in the street. Some person had some reason for thinking it would be a good thing for somebody. And until we know what the reason was, we really cannot judge whether the reason was reasonable. It is extremely probable that we have overlooked some whole aspect of the question, if something set up by human beings like ourselves seems to be entirely meaningless and mysterious. There are reformers who get over this difficulty by assuming that all their fathers were fools; but if that be so, we can only say that folly appears to be a hereditary disease. But the truth is that nobody has any business to destroy a social institution until he has really seen it as an historical institution. If he knows how it arose, and what purposes it was supposed to serve, he may really be able to say that they were bad purposes, that they have since become bad purposes, or that they are purposes which are no longer served. But if he simply stares at the thing as a senseless monstrosity that has somehow sprung up in his path, it is he and not the traditionalist who is suffering from an illusion.
The drug industry did not arise out of random processes; it looks the way it does now because of a long, long series of decisions. Because we live in a capitalist system, many of these decisions were made to answer the question "Which way would make more money?" That is not guaranteed to give you the best outcome. But neither is it, as some people seem to think, a guarantee of the worst one. Insofar as the need for new and effective drugs is coupled to the ability to make money by doing so, I think the engine works about as well as anything could. Where these interests decouple (tropical diseases, for one), we need some other means.
My problem with stock buybacks is that I think that executives are looking at that same question ("Which way would make more money?") and answering it incorrectly. But under current market conditions, there are many values of "wrong". In the long run, I think (as does Bruce Booth) that it would be more profitable, both for individual companies and for the industry as a whole, to invest more in research. In fact, I think that's the only thing that's going to get us out of the problems that we're in. We need to have more reliable, less expensive ways to discover and develop drugs, and if we're not going to find those by doing research on how to make them happen, then we must be waiting for aliens to land and tell us.
But that long run is uncertain, and may well be too long for many investors. Telling the shareholders that Eventually Things Will Be Better, We Think, Although We're Not Sure How Just Yet will not reassure them, especially in this market. Buying back shares, on the other hand, will.
+ TrackBacks (0) | Category: Business and Markets | Drug Development
Bruce Booth has an excellent look at a topic we were discussing around here earlier this year: stock buybacks in biopharma. I didn't have a lot of good things to say about the concept. I understand that corporations have obligations to their shareholders, and I certainly understand that a stock buyback is about the least controversial thing a big company can do with its money. Paying shareholders through dividends has tax consequences. But you can't sit on a big pile of cash forever, and what are you supposed to do if you think that market returns will beat the return on investment in your own business?
That brings up another, larger question: if you truly believe that last part, how long do you think that situation will obtain? And how long are you willing to put up with it? If a business really, truly, can't deliver returns that could be realized through a reasonable investment strategy, then why is it in business to start with? (I've seen discussions among economists about this very point when applied to many small businesses).
Booth wonders about the use of capital, too:
In recent years, plowing it back into internal R&D hasn’t been the preferred option given pipeline productivity questions. Returning capital to shareholders via dividends has certainly been high on the list. Another, albeit indirect, way of paying shareholders is through share repurchases (stock buybacks), and it has also been quite popular. The expectation (or hope) with these indirect stock buybacks is that the stock will move upwards because the shares outstanding go down (or at least the buybacks offset the dilution from the exercise of options).
But buybacks have a more mixed assessment in practice (links at his site - DBL) and are typically only smart if a company is (a) under-valued and (b) has no better uses of capital. This latter point is where they draw my ire, especially given their scale in our industry and the many strategic alternatives.
Totaling up the buybacks gives you some humongous figures. One thing that I'm not quite sure about with these numbers is whether all of these announced buybacks are actually followed through on. You'd think there would be legal consequences if the discrepancy grows too large, but I don't know the law on this topic. But taking the figures as we have them, you get this:
To appreciate the magnitude of these buybacks, it’s worth comparing them to other important financial values in the biopharma ecosystem. It’s bigger than the NIH budgets for 2011 and 2012 by nearly 25%. It’s 4.5x bigger than all of the private venture-backed M&A that occurred in the past 18 months – and that involved over 70 biotech companies. It’s 12x bigger than the sum total of venture dollars invested in biotech in that period. And it’s nearly 80x bigger than all the capital raised by fifteen biotech IPOs during that period. This is a huge amount of capital washing into stock repurchases.
The problem, as Booth goes on to show, is that there's no particular correlation (that anyone can see) between these buybacks and the performance of the stocks themselves. (You could always say that they'd have performed even worse without the buybacks, an unanswerable and untestable point). He's got some other suggestions for the money, and he's not even asking for all of it. Or half of it. Or a tenth. Five per cent of the buyback pool would totally alter the funding universe for early-stage companies and precompetitive consortia - in other words, it could potentially alter the future of the whole industry. But we're not doing that. We're buying our own shares. Tens of billions of dollars of our own shares, because we can't seem to think of anything better to do.
+ TrackBacks (0) | Category: Business and Markets
August 22, 2012
Here's the Suzuki reaction taken down about as far as it can go: two boron groups on one carbon.
I didn't even know that you could make those things - no doubt someone will be inspired to try the three-boron version next. Diarylmethanes aren't the most preferred drug structures in the world (that carbon is just waiting to be oxidized), but I can't say that I've always avoided them on those grounds. I was on a project where we made a whole series of the things, actually - didn't work out so well for the intended target, but the compounds went on to hit in a completely different assay, so the company did probably get its money's worth.
+ TrackBacks (0) | Category: Chemical News
Hang around a bunch of medicinal chemists (no, really, it's more fun than you'd think) and you're bound to hear discussion of cLogP. For the chemists in the crowd, I should warn you that I'm about to say nasty things about it.
For the nonchemists in the crowd, logP is a measure of how greasy (or how polar) a compound is. It's based on a partition experiment: shake up a measured amount of a compound with defined volumes of water and n-octanol, a rather greasy solvent which I've never seen referred to in any other experimental technique. Then measure how much of the compound ends up in each layer, and take the log of the octanol/water ratio. So if a thousand times as much compound goes into the octanol as goes into the water (which for drug substances is quite common, in fact, pretty good), then the logP is 3. The reason we care about this is that really greasy compounds (and one can go up to 4, 5, 6, and possibly beyond), have problems. They tend to dissolve poorly in the gut, have problems crossing membranes in living systems, get metabolized extensively in the liver, and stick to a lot of proteins that you'd rather they didn't stick to. Fewer high-logP compounds are capable of making it as drugs.
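For the truly nonchemist crowd, the arithmetic behind that definition fits in a few lines. Here's a toy sketch - an illustration of the math, not anything anyone actually runs at the bench:

```python
import math

def log_p(conc_octanol, conc_water):
    """Partition coefficient: log10 of the octanol/water
    concentration ratio at equilibrium."""
    return math.log10(conc_octanol / conc_water)

# A thousand-fold preference for the octanol layer gives logP = 3,
# which for a drug substance is quite respectable.
print(log_p(1000.0, 1.0))  # 3.0
```

And note that each additional logP unit is another tenfold shift into the grease, which is why those values of 4, 5, and 6 pile up trouble so quickly.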
So far, so good. But there are complications. For one thing, that description above ignores the pH of the water solution, and for charged compounds that's a big factor. logD is the term for the distribution of all species (ionized or not), and logD at pH 7.4 (physiological) is a valuable measurement if you've got the possibility of a charged species (and plenty of drug molecules do, thanks to basic amines, carboxylic acids, etc.) But there are bigger problems.
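The relationship between logP and logD for a simple ionizable compound follows straight from the Henderson-Hasselbalch equation. Here's a sketch, under the usual simplifying assumption that only the neutral species partitions into the octanol:

```python
import math

def log_d_base(log_p, pka, ph=7.4):
    """logD for a monoprotic base, assuming only the neutral
    species partitions into octanol (Henderson-Hasselbalch)."""
    return log_p - math.log10(1 + 10 ** (pka - ph))

def log_d_acid(log_p, pka, ph=7.4):
    """logD for a monoprotic acid, under the same assumption."""
    return log_p - math.log10(1 + 10 ** (ph - pka))

# A basic amine with pKa 9.4 loses about two log units of
# lipophilicity at physiological pH:
print(round(log_d_base(3.0, 9.4), 2))
```

That two-unit penalty is why a basic amine can rescue an otherwise hopelessly greasy structure, at least on paper.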
You'll notice that the experiment outlined in the second paragraph could fairly be described as tedious. In fact, I have never seen it performed. Not once, and I'll bet that the majority of medicinal chemists never have, either. And it's not like it's just being done out of my sight; there's no roomful of automated octanol/water extraction machines clanking away in the basement. I should note that there are other higher-throughput experimental techniques (such as HPLC retention times) that also correlate with logP and have been used to generate real numbers, but even those don't account for the great majority of the numbers that we talk about all the time. So how do we manage to do that?
It has to do with a sleight of hand I've performed while writing the above sections, which some of you have probably already noticed. Most of the time, when we talk about logP values in early drug discovery, we're talking about cLogP. That "c" stands for calculated. There are several programs that estimate logP based on known values for different rings and functional groups, and with different algorithms for combining and interpolating them. In my experience, almost all logP numbers that get thrown around are from these tools; no octanol is involved.
And sometimes that worries me a bit. Not all of these programs will tell you how solid those estimates are. And even if they will, not all chemists will bother to check. If your structure is quite close to something that's been measured, then fine, the estimate is bound to be pretty good. But what if you feed in a heterocycle that's not in the lookup table? The program will spit out a number, that's what. But it may not be a very good number, even if it goes out to two decimal places. I can't even remember when I might have last seen a cLogP value with a range on it, or any other suggestion that it might be a bit fuzzy.
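To caricature how these fragment-additive estimators work - the fragment values below are completely invented, and the real programs use large fitted tables with all sorts of correction terms - here's a toy version that at least admits when it's looking at something outside its table, instead of silently spitting out two decimal places:

```python
# Toy fragment-additive cLogP estimator. These contributions are
# made up for illustration; real tools use fitted fragment tables
# plus correction factors for interactions between groups.
FRAGMENT_VALUES = {
    "phenyl": 1.9,
    "methyl": 0.5,
    "hydroxyl": -1.1,
    "morpholine": -1.3,
}

def toy_clogp(fragments):
    """Sum the known fragment contributions, and report any
    fragments the table doesn't cover rather than guessing."""
    unknown = [f for f in fragments if f not in FRAGMENT_VALUES]
    estimate = sum(FRAGMENT_VALUES.get(f, 0.0) for f in fragments)
    return estimate, unknown

est, unknown = toy_clogp(["phenyl", "methyl", "exotic_heterocycle"])
print(round(est, 2), unknown)
```

A real program, of course, returns a number for that exotic heterocycle anyway - which is exactly the problem.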
There are more subtle problems, too - I've seen some oddities with substitutions on saturated heterocyclic rings (morpholine, etc.) that didn't quite seem to make sense. Many chemists get these numbers, look at them quizzically, and say "Hmm, I didn't know that those things sorted out like that. Live and learn!" In other words, they take the calculated values as reality. I've even had people defend these numbers by explaining to me patiently that these are, after all, calculated logP values, and the calculated log P values rank-order like so, and what exactly is my problem? And while it's hard to argue with that, we are not putting our compounds into the simulated stomachs of rationalized rodents. Real-world decisions can be made based on numbers that do not come from the real world.
+ TrackBacks (0) | Category: Drug Assays | In Silico | Life in the Drug Labs
August 21, 2012
This paper from GlaxoSmithKline uses a technology that I find very interesting, but it's one that I still have many questions about. It's applied in this case to ADAMTS-5, a metalloprotease enzyme, but I'm not going to talk about the target at all, but rather, the techniques used to screen it. The paper's acronym for it is ELT, Encoded Library Technology, but that "E" could just as well stand for "Enormous".
That's because they screened a four billion member library against the enzyme. That is many times the number of discrete chemical species that have been described in the entire scientific literature, in case you're wondering. This is done, as some of you may have already guessed, by DNA encoding. There's really no other way; no one has a multibillion-member library formatted in screening plates and ready to go.
So what's DNA encoding? What you do, roughly, is produce a combinatorial diversity set of compounds while they're attached to a length of DNA. Each synthetic step along the way is marked by adding another DNA sequence to the tag, so (in theory) every compound in the collection ends up with a unique oligonucleotide "bar code" attached to it. You screen this collection, narrow down on which compound (or compounds) are hits, and then use PCR and sequencing to figure out what their structures must have been.
As you can see, the only way this can work is through the magic of molecular biology. There are so many enzymatic methods for manipulating DNA sequences, and they work so well compared with standard organic chemistry, that ridiculously small amounts of DNA can be detected, amplified, sequenced, and worked with. And that's what lets you make a billion member library; none of the components can be present in very much quantity (!)
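The decoding end of this is conceptually just bookkeeping. Here's a cartoon version - the three-base tags and building-block names are invented, and real schemes use much longer tags with error-correcting designs - of how a sequenced tag gets translated back into a synthetic history:

```python
# Cartoon DNA-encoded library decode: each synthetic cycle appends
# a short DNA tag, so sequencing the concatenated tag recovers the
# building blocks used. Tags and names here are invented.
CYCLE_TAGS = [
    {"ACT": "amine_1", "GGA": "amine_2", "TCC": "amine_3"},   # cycle 1
    {"CAG": "acid_1", "TTG": "acid_2", "GAT": "acid_3"},      # cycle 2
]

TAG_LEN = 3

def decode(tag_read):
    """Split a sequenced tag into per-cycle codes and look up
    which building block each code encodes."""
    blocks = []
    for cycle, table in enumerate(CYCLE_TAGS):
        code = tag_read[cycle * TAG_LEN:(cycle + 1) * TAG_LEN]
        blocks.append(table.get(code, "unknown"))
    return blocks

print(decode("GGATTG"))  # ['amine_2', 'acid_2']
```

With a few hundred tags per cycle and three or four cycles, the combinatorics get you to billions of encoded compounds in a hurry.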
This particular library comes off of a 1,3,5-triazine, which is not exactly the most cutting-edge chemical scaffold out there (I well recall people making collections of such things back in about 1992). But here's where one of the big questions comes up: what if you have four billion of the things? What sort of low hit rate can you not overcome by that kind of brute force? My thought whenever I see these gigantic encoded libraries is that the whole field might as well be called "Return of Combichem: This Time It Works", and that's what I'd like to know: does it?
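The brute-force arithmetic is worth spelling out. Even a miserable hit rate yields a usable pile of hits at this scale:

```python
def expected_hits(library_size, one_in):
    """Expected number of true actives if one compound in every
    `one_in` screened is a hit."""
    return library_size / one_in

# Four billion compounds, with only one active per ten million:
print(expected_hits(4_000_000_000, 10_000_000))  # 400.0
```

Four hundred hits from a one-in-ten-million hit rate is the sort of thing no plated screening collection can touch.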
There are other questions. I've always wondered about the behavior of these tagged molecules in screening assays, since I picture the organic molecule itself as about the size of a window air conditioner poking out from the side of a two-story house of DNA. It seems strange to me that these beasts can interact with protein targets in ways that can be reliably reproduced once the huge wad of DNA is no longer present, but I've been assured by several people that this is indeed the case.
In this example, two particular lineages of compounds stood out as hits, which makes you much happier than a collection of random singletons. When the team prepared a selection of these as off-DNA "real organic compounds", many of them were indeed nanomolar hits, although a few dropped out. Interestingly, none of the compounds had the sorts of zinc-binding groups that you'd expect against the metalloprotease target. The rest of the paper is a more traditional SAR exploration of these, leading to what one has to infer are more tool/target validation compounds rather than drug candidates per se.
I know that GSK has been doing this sort of thing for a while, and from the looks of it, this work itself was done a while ago. For one thing, it's in J. Med. Chem., which is not where anything hot off the lab bench appears. For another, several of the authors of the paper appear with "Present Address" footnotes, so there has been time for a number of people on this project to have moved on completely. And that brings up the last set of questions, for now: has this been a worthwhile effort for GSK? Are they still doing it? Are we just seeing the tip of a large and interesting iceberg, or are we seeing the best that they've been able to do? That's the drug industry for you; you never know how many cards have been turned over, or why.
+ TrackBacks (0) | Category: Chemical Biology | Chemical News | Drug Assays | Drug Industry History
There's no telling if this is true - it's part of a lawsuit. But a former Genentech employee is claiming that the company rushed trials of its PI3K inhibitor. And why? Worries about their partner:
The suit alleges that the Pi3 Kinase team was guilty of "illegal and unethical conduct" by skirting established scientific and ethical standards required of drug researchers. Juliet Kniley claims she complained in 2008 and then was sidelined in 2009 with a demotion after being instructed to push ahead on the study. And she says she was told twice that Roche would "take this molecule away from us" if they saw her proposed timelines.
Genentech denies the allegations. But you have to wonder if there's still a window here into the relationship between the two companies. . .
+ TrackBacks (0) | Category: Drug Development
August 20, 2012
The controversy I wrote about last week, about whether (some) enzymes work by using extremely fast movements (rather than by putting things into their place and letting them do their thing) may remind some folks of the supposed medieval arguments about angels dancing on the heads of pins. But it also reminds me a bit of some other arguments in organic chemistry over the years. The horrible prototype is, of course, the norbornyl cation.
There was a time when people would simply leave the room when that topic came up, because they knew that they were in for another round of fruitless wrangling. Was its structure that of two rapidly interconverting standard carbocations, or a single bridged "non-classical" one that broke the previously accepted rules? George Olah and H. C. Brown, Nobel laureates both, were on opposite sides of that one, and every physical organic chemist from about 1950 to about 1980 probably had to take a stand one way or the other. (It is commonly accepted that Olah's side won, but the arguments got pretty esoteric by the end.) Update: the battle was first joined by Saul Winstein, who did not live to see his proposal vindicated by Olah's spectroscopic studies.
Another one, which came along a few years later, was the "synchronous / asynchronous" mechanism of the Diels-Alder reaction. Do the new bonds in that one form at the same time, or does one form, and then the other? That one involved the physical organic people again, as well as plenty of computational chemists. I stopped following the debate after a while, but I believe that the final reckoning was that most standard Diels-Alder reactions were synchronous, within the limits of detection, but that messing with the electron density of the two reactants could easily push the reaction into asynchronous (or flat-out stepwise) territory.
So why does this level of detail matter? The problem is, chemistry is all about things like bond formation and bond breaking, and about interactions between individual molecules (and parts of molecules) that change the energies of the systems involved. And those things are nothing but picky details, all the way down. Thermodynamics, which runs chemical reactions and runs the rest of the universe, is the most rigorous branch of accounting there is. Totaling up those energies to see which side of the ledger wins out can easily come down to the fate of single water molecules, or even single protons, and you don't get much pickier than that.
This sort of thing is one argument used against the feasibility of molecular nanotechnology. How are we to harness such fine distinctions, at such levels? But it's worth remembering that we ourselves, and every other living creature, are nanotech machines at heart. Our enzymes are constantly breaking bonds, twisting single molecules, altering reaction rates, and generating specific, defined molecular products. If they weren't, we'd fall right over. We eventually fall over anyway, because none of these machines work perfectly. But they work pretty well, and they make our own chemical efforts look like stone axes and deer-bone hammers.
So we may find getting down to this level of things to be a lot of work, and hard to understand, and frustrating to deal with. But that's where we're going to have to be if we're ever going to do real chemistry, the kind that's indistinguishable from magic.
+ TrackBacks (0) | Category: Chemical News
August 17, 2012
All right, there's been another ruling in the Myriad gene patent case, involving genetic testing for the BRCA mutations in breast cancer. There's been a lot of coverage of this, but not all of it gets the details right. And there are a lot of details, so here goes. First off, here is the latest court opinion (PDF), courtesy of the ACLU, which was a party in this case. (That alone should tell you how involved this has become over the years). Allow me to summarize:
Myriad, and others, began offering genetic testing for these mutations in the mid-1990s. The company obtained several patents directed to the gene sequences and methods of assaying them, and informed other players in this field, specifically the University of Pennsylvania's Genetic Diagnostic Laboratory, that they were in violation of Myriad intellectual property. Their cease-and-desist letters did not apply to research uses, only to commercial testing for which money was charged. By 1999, the GDL had stopped testing, and Myriad was now the only company in the US carrying out this diagnostic assay.
The plaintiffs in this case are a wide range of parties, from the Penn lab and others who wanted to offer BRCA testing to patients who claimed that they had been denied the opportunity to have such a test done through Myriad's exercise of its patents. The lawsuit went through a challenge in district court about whether the plaintiffs had standing to bring suit in the first place, which is an issue that came up again in this appeal, but I'm going to skip over that. It's certainly of potential interest to attorneys in the field, but doesn't get at the scientific and technical end of the case.
To that, then. The district court decision went against Myriad:
The district court held for Plaintiffs, concluding that the fifteen challenged claims were drawn to non-patentable subject matter and thus invalid under § 101. SJ Op., 702 F. Supp. 2d at 220-37. Regarding the composition claims, the court held that isolated DNA molecules fall within the judicially created “products of nature” exception to § 101 because such isolated DNAs are not “markedly different” from native DNAs. Id. at 222, 232 (quoting Diamond v. Chakrabarty, 447 U.S. 303 (1980)). The court relied on the fact that, unlike other biological molecules, DNAs are the “physical embodiment of information,” and that this information is not only preserved in the claimed isolated DNA molecules, but also essential to their utility as molecular tools.
Turning to the method claims, the court held them patent ineligible under this court’s then-definitive machine-or-transformation test. . .The court held that the claims covered “analyzing” or “comparing” DNA sequences by any method, and thus covered mental processes independent of any physical transformations. Id. at 233-35. In so holding, the court distinguished Myriad’s claims from those at issue in Mayo based on the “determining” step in the latter being construed to include the extraction and measurement of metabolite levels from a patient sample. SJ Op., 702 F. Supp. 2d at 234-35 (citing Prometheus Labs., Inc. v. Mayo Collaborative Servs., 628 F.3d 1347, 1350 (Fed. Cir. 2010), rev’d, 132 S. Ct. 1289 (2012)). Alternatively, the court continued, even if the claims could be read to include the transformations associated with isolating and sequencing human DNA, these transformations would constitute no more than preparatory data-gathering steps. Id. at 236 (citing In re Grams, 888 F.2d 835, 840 (Fed. Cir. 1989)). Finally, the court held that the one method claim to “comparing” the growth rate of cells claimed a basic scientific principle and that the transformative steps amounted to only preparatory data gathering.
I didn't even bother going into detail on this decision at the time, because I expected it to be appealed immediately, and so it was. As you'll see from the comments to that post, though, opinions varied widely about the chances for a reversal, but my informal reading was that the more someone knew about patent law, the more they expected the appeals court to reverse.
And that's just what has happened. The problem is, this isn't as wide-ranging a decision as some people (and some headlines) seem to think it is. I'll quote from the latest opinion again:
. . .it is important to state what this appeal is not about. It is not about whether individuals suspected of having an increased risk of developing breast cancer are entitled to a second opinion. Nor is it about whether the University of Utah, the owner of the instant patents, or Myriad, the exclusive licensee, has acted improperly in its licensing or enforcement policies with respect to the patents. The question is also not whether it is desirable for one company to hold a patent or license covering a test that may save people’s lives, or for other companies to be excluded from the market encompassed by such a patent—that is the basic right provided by a patent, i.e., to exclude others from practicing the patented subject matter. It is also not whether the claims at issue are novel or nonobvious or too broad. Those questions are not before us. It is solely whether the claims to isolated BRCA DNA, to methods for comparing DNA sequences, and to a process for screening potential cancer therapeutics meet the threshold test for patent-eligible subject matter under 35 U.S.C. § 101 in light of various Supreme Court holdings, particularly including Mayo. The issue is patent eligibility, not patentability.
In other words, this decision is not designed to address the big issues that so many people think that it has. The court goes on to say, as courts at this level often do, that if someone wants to do something about all these things, then the procedure for remedy is clear:
We would further note, in the context of discussing what this case is not about, that patents on life-saving material and processes, involving large amounts of risky investment, would seem to be precisely the types of subject matter that should be subject to the incentives of exclusive rights. But disapproving of patents on medical methods and novel biological molecules are policy questions best left to Congress, and other general questions relating to patentability and use of patents are issues not before us. . .
Still, even if we're talking about patent eligibility and not patentability per se, we still have a tough question here. Myriad says that the isolated DNA molecules that their patent is directed towards are not found in nature as such, that they have to be manipulated and isolated through human ingenuity, and that they (as opposed to native DNA) can be used in their diagnostic applications. They claim that the district court erred in focusing on the informational content of the molecules, and not the actual composition of matter itself. The plaintiffs argue that the isolated DNA molecules have to have a "distinctive name, character, and use", as the law reads, and that they are not "markedly different" enough from the natural substance, especially since (as they hold) the entire point of them is the informational sequence they represent.
The appeals court comes down in favor of Myriad here. A key part of their argument rests on the decision in the Chakrabarty case involving the patenting of genetically engineered bacteria, so if you didn't like that one, you're not going to like this. The court finds that isolated DNA molecules - unwound from their histones, cleaved at both ends, truncated - are "markedly different" enough to be eligible for patents:
. . .Accordingly, BRCA1 and BRCA2 in their isolated states are different molecules from DNA that exists in the body; isolated DNA results from human intervention to cleave or synthesize a discrete portion of a native chromosomal DNA, imparting on that isolated DNA a distinctive chemical identity as compared to native DNA.
As the above description indicates, isolated DNA is not just purified DNA. Purification makes pure what was the same material, but was combined, or contaminated, with other materials. Although isolated DNA is removed from its native cellular and chromosomal environment, it has also been manipulated chemically so as to produce a molecule that is markedly different from that which exists in the body. . .
They go on to say that "an isolated DNA molecule is not a purified form of a natural material, but a distinct chemical entity that is obtained by human intervention". As you might imagine, cDNAs are found under this reasoning to be especially far from nature, and these are already held to be patentable. As to the "informational content" argument that carried the day in the lower court, the appeals court has this to say:
. . .We disagree, as it is the distinctive nature of DNA molecules as isolated compositions of matter that determines their patent eligibility rather than their physiological use or benefit. Uses of chemical substances may be relevant to the nonobviousness of these substances or to method claims embodying those uses, but the patent eligibility of an isolated DNA is not negated because it has similar informational properties to a different, more complex natural material. The claimed isolated DNA molecules are distinct from their natural existence as portions of larger entities, and their informational content is irrelevant to that fact. We recognize that biologists may think of molecules in terms of their uses, but genes are in fact materials having a chemical nature and, as such, are best described in patents by their structures rather than by their functions. . .
In other words, this ruling affirms that molecular biology is, in fact, chemistry, if you want to look at it that way. The court goes on to say that if we as a society want to put DNA in a separate category for purposes of patent law (because of its unique informational content, etc.), then Congress should get to work on revising the US Code. It's not a matter for the courts to write that in by themselves. The opinion also rejects arguments (made in the dissenting opinion) that make analogies to snipping a leaf off a tree or removing an organ from a human body. These, they say, are not specific, defined substances, but an isolated DNA molecule most certainly is.
There, that's the first part of the opinion. There's another section as to the methods of use, but I think this is enough legal matter for one day around here. And there's plenty of arguing room staked out already!
+ TrackBacks (0) | Category: Patents and IP
I wanted to mention that a version of my first post on the Light/Lexchin article is now up over at the Discover magazine site. And if you've been following the comments to that one and to Light's response here, you'll note that readers here have found a number of problems with the original paper's analysis. I've found a few of my own, and I expect there are more.
The British Medical Journal has advised me that they consider a letter to the editor to be the appropriate forum for a response to one of their published articles. I don't think publishing this one did them much credit, but what's done is done. I'm still shopping for a venue for a detailed response on my part - I've had a couple of much-appreciated offers, but I'd like to continue to see what options are out there to get this out to the widest possible audience.
+ TrackBacks (0) | Category: Drug Development | Drug Prices
August 16, 2012
How do enzymes work? People have been trying to answer that, in detail, for decades. There's no point in trying to do it without running down all those details, either, because we already know the broad picture: enzymes work by bringing reactive groups together under extremely favorable conditions so that reaction rates speed up tremendously. Great! But how do they bring those things together, how does their reactivity change, and what kinds of favorable conditions are we talking about here?
And some of this we know, too. You can see, in many enzyme active sites, that the protein is stabilizing the transition state of the reaction, lowering its energy so it's easier to jump over the hump to product. It wouldn't surprise me to see the energies of some starting materials being raised to effect that same barrier-lowering, although I don't know of any examples of that off the top of my head. But even this level of detail raises still more questions: what interactions are these that lower and raise these energies? How much of a price is paid, thermodynamically, to do these things, and how does that break out into entropic and enthalpic terms?
Some of those answers are known, to some degree, in some systems. But still more questions remain. One of the big ones has been the degree to which protein motion contributes to enzyme action. Now, we can see some big conformational changes taking place with some proteins, but what about the normal background motions? Intellectually, it makes sense that enzymes would have learned, over the millennia, to take advantage of this, since it's for sure that their structures are always vibrating. But proving that is another thing entirely.
Modern spectroscopy may have done the trick. This new paper from groups at Manchester and Oxford reports painstaking studies on B-12 dependent ethanolamine ammonia lyase. Not an enzyme I'd ever heard of, that one, but "enzymes I've never heard of" is a rather roomy category. It's an interesting one, though, partly because it goes through a free radical mechanism, and partly because it manages to speed things up by about a trillion-fold over the plain solution rate.
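For a sense of scale, that trillion-fold speedup can be back-calculated into an energy using transition-state theory, where the rate ratio goes as exp(ΔΔG‡/RT). Here's a minimal sketch of that arithmetic, assuming a temperature of 298 K (the numbers are illustrative, not taken from the paper):

```python
import math

R = 8.314  # gas constant, J/(mol*K)
T = 298.0  # temperature, K

# Rate enhancement of the enzymatic reaction over the plain
# solution reaction, as cited for ethanolamine ammonia lyase
enhancement = 1e12

# Transition-state theory: k_cat / k_uncat = exp(ddG / RT),
# so the required barrier lowering is:
ddG = R * T * math.log(enhancement)  # J/mol

print(f"Barrier lowering: {ddG/1000:.1f} kJ/mol "
      f"({ddG/4184:.1f} kcal/mol)")
# A trillion-fold speedup corresponds to roughly 68 kJ/mol
# (about 16 kcal/mol) of transition-state stabilization.
```

That 16 kcal/mol or so is the total effect the active site has to deliver, by whatever combination of interactions (and, per the debate below, perhaps motions) it uses.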
Just how it does that has been a mystery. There's no sign of any major enzyme conformational change as the substrate binds, for one thing. But using stopped-flow techniques with IR spectroscopy, as well as ultrafast time-resolved IR, there seem to be structural changes going on at the time scale of the actual reaction. It's hard to see this stuff, but it appears to be there - so what is it? Isotopic labeling experiments seem to say that these IR peaks represent a change in the protein, not the B12 cofactor. (There are plenty of cofactor changes going on, too, and teasing these new peaks out of all that signal was no small feat).
So this could be evidence for protein motion being important right at the enzymatic reaction itself. But I should point out that not everyone's buying that. Nature Chemistry had two back-to-back articles earlier this year, the first advocating this idea, and the second shooting it down. The case against this proposal - which would modify transition-state theory as it's usually understood - is that there can be a number of conformations with different reactivities, some of which take advantage of quantum-mechanical tunneling effects, but all of which perform "traditional" transition-state chemistry, each in their own way. Invoking fast motions (on the femtosecond time scale) to explain things is, in this view, a layer of complexity too far.
I realize that all this can sound pretty esoteric - it does even to full-time chemists, and if you're not a chemist, you probably stopped reading quite a while ago. But we really do need to figure out exactly how enzymes do their jobs, because we'd like to be able to do the same thing. Enzymatic reactions are, in most cases, so vastly superior to our own ways of doing chemistry that learning to make them to order would revolutionize things in several fields at once. We know this chemistry can be done - we see it happen, and the fact that we're alive and walking around depends on it - but we can't do it ourselves. Yet.
Category: Biological News | Chemical News
After mentioning the natural product Shootmenowicene yesterday, I note that See Arr Oh is reporting that the total synthesis of this compound is now down to only 47 steps. I think the purity could be improved with a prep GC of one of the early intermediates (or perhaps a spinning band distillation), but that's about all his synthesis is missing. . .
Category: Chemical News
August 15, 2012
Yesterday's post on reproducing scientific results got me to thinking about the application of this to organic chemistry. How much of this are we going to see, compared to biology?
Not as much, is my guess. Some of the barriers to reproducibility are too low to bother with, while others are too high. In the "too low" category are many new synthetic method papers. People try these things out, if they look useful at all, and they either work or they don't. Most of the time, you end up finding the limits of the reported method - your substrate failed dismally, but when you look, you realize that you had a basic tertiary amine in your molecule, and none of the examples in the paper have one. Ah-hah.
It's rare that a useful-looking reaction turns out to be completely non-reproducible across multiple structures (although it has happened). Here's a paper from 2000, by one Vincent C. O. Njar, claiming that carbonyl diimidazole reacted with hydroxy groups to give direct N-alkylation of imidazole. Two years later, Walter Fischer from Ciba Specialty Chemicals took this paper apart in detail, showing that it did not work and could not have worked. The products were carbamates instead - not surprising - and the original author should have realized this (as should the referees of the paper).
Then you have total synthesis. And here, the barrier is too high: no one is going to reproduce these things after a certain point. A 48-step synthesis of Shootmenowicene could appear tomorrow, and the odds are overwhelming that no one will ever explore its heights again. There have been total syntheses that have been received with grave doubts (hexacyclinol!), but no one, to the best of my knowledge, has gone back over every step of one of these. The return on the investment of time and money is just too low - which, to be frank, is a sentence that sums up my opinion of a lot of total synthesis work these days.
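The economics of a 48-step route follow directly from the arithmetic of multiplied yields: overall yield is the product of every per-step yield, so even uniformly excellent steps collapse over a long sequence. A quick illustration (the 90% figure is a hypothetical, generous assumption):

```python
# Overall yield of a linear synthesis is the product of the
# per-step yields. Even a uniformly excellent 90% per step
# collapses over a 48-step route (hypothetical numbers).
steps = 48
per_step_yield = 0.90

overall = per_step_yield ** steps
print(f"Overall yield: {overall:.2%}")  # well under 1%

# To end up with 1 g of final product you'd need roughly
# 1/overall (~160x) that mass of material carried through,
# before even counting purification losses.
```

Which is one concrete reason nobody volunteers to re-climb these mountains just to check the route.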
Where the Reproducibility Initiative could come in handy inside organic chemistry, though, would be for unusual things of wide applicability that are still hard to believe. The famous "NMR chirality" scandal at the University of Bonn in the 1990s would be a good example of this. This was a startling result - that the chirality of organic reactions could be measurably influenced by the handedness of an applied magnetic field - and many people had trouble believing it on physical grounds. They were right, too, because it all turned out to be faked by an individual inside the group, a fact that was only discovered after much effort and embarrassment. Having immediate access to third-party reproducibility testing would have sped things up quite a bit - and perhaps if that access is more widely known, used, and appreciated, we might see fewer bizarre cases like this in general.
Category: The Scientific Literature
I wanted to let people know that I'm working on a long, detailed reply to Donald Light's take on drug research, but that I'm also looking at a few other publication venues for it. More on this as it develops.
But in trying to understand his worldview (and Marcia Angell's, et al.), I think I've hit on at least one fundamental misconception that these people have. All of them seem to think that the key step in drug discovery is target ID - once you've got a molecular target, you're pretty much home free, and all that was done by NIH money, etc., etc. It seems that these people have a very odd idea about high-throughput screening: they seem to think that we screen our vast collections of molecules and out pops a drug.
Of course, out is what a drug does not pop, if you follow my meaning. What pops out are hits, some of which are not what they say on the label any more. And some of the remaining ones just don't reproduce when you run the same experiment again. And even some of the ones that do reproduce are showing up as hits not because they're affecting your target, but because they're hosing up your assay by some other means. Once you've cleared all that underbrush out, you can start to talk about leads.
Those lead molecules are not created equal, either. Some of them are more potent than others, but the more potent ones might be much higher molecular weights (and thus not as ligand efficient). Or they might be compounds from another project and already known to hit a target that you don't want to hit. Once you pick out the ones that you actually want to do some chemistry on, you may find, as you start to test new molecules in the series, that some of them have more tractable structure-activity relationships than others. There are singletons out there, or near-singletons: compounds that have some activity as they stand, but for which every change in structure represents a step down. The only way to find that out is to test analogs. You might have some more in your files, or you might be able to buy some from the catalogs. But in many cases, you'll have to make them yourself, and a significant number of those compounds you make will be dead ends. You need to know which ones, though, so that's valuable information.
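Ligand efficiency, mentioned above, is one common way to normalize potency against size: binding free energy per heavy (non-hydrogen) atom, using the usual approximation ΔG ≈ −1.37 × pIC50 kcal/mol near room temperature. A sketch with two invented screening hits:

```python
def ligand_efficiency(pIC50: float, heavy_atoms: int) -> float:
    """Approximate binding free energy per heavy atom, kcal/mol.

    Uses the common shortcut dG ~ -1.37 * pIC50 kcal/mol
    (RT * ln(10) at about 298 K)."""
    return 1.37 * pIC50 / heavy_atoms

# Hypothetical hits, purely for illustration:
# A: 10 nM (pIC50 = 8), but large - 40 heavy atoms
# B: 1 uM  (pIC50 = 6), but small - 20 heavy atoms
le_a = ligand_efficiency(8.0, 40)
le_b = ligand_efficiency(6.0, 20)
print(f"A: LE = {le_a:.3f}, B: LE = {le_b:.3f}")
# B, though 100-fold less potent, is the more ligand-efficient
# starting point; values around 0.3 and up are often taken as
# a sign of a workable lead.
```

This is exactly why the big, barely-more-potent hit in the paragraph above can be the worse bet.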
Now you're all the way up to lead series territory, a set of compounds that look like they can be progressed to be more potent and more selective. As medicinal chemists know, though, there's more to life. You need to see how these compounds act on real cells, and in real animals. Do they attain reasonable blood levels? Why or why not? What kinds of metabolites do they produce - are those going to cause trouble? What sort of toxicity do you see at higher doses, or more long-running ones? Is that related to your mechanism of action (sorry to hear it!), or something off-target to do with that particular structure? Can you work your way out of that problem with more new compound variations without losing all of what you've been building in so far? Prepare to go merrily chasing down some blind alleys while you work all this stuff out; the lights are turned off inside the whole maze, and the only illumination is what you can bring yourself.
Now let's assume that you've made it far enough to narrow down to one single compound, the clinical candidate. The fun begins! How about formulations - can this compound be whipped up into a solid form that resembles a real drug that people can put in their mouths, leave on their medicine cabinet shelves, and stock in their warehouses and pharmacies? Can you make enough of the compound to get to that stage, reliably? Most of the time the chemistry has to change at that point, and you'd better hope that some tiny new impurities from the new route aren't going to pop up and be important. You'd really better hope that some new solid form (polymorph) of your substance doesn't get discovered during that new route, because some of those are bricks and their advent is nearly impossible to predict.
Hey, now it's time to go to the clinic. Break out the checkbook, because the money spent here is going to make the preclinical expenses look like roundoff errors. Real human beings are going to take your compound, and guess what? Of all the compounds (the few, the proud) that actually get this far, all the way up to some volunteer's tongue. . .well, a bit over ninety per cent of those are going to fail in trials. Good luck!
While you're nervously checking the clinical results (blood levels and tolerability in Phase I), you have more questions to ask. Do you have good commercial suppliers for all the starting materials, and the right manufacturing processes in place to make the drug, formulate it, and package it? High time you thought about that stuff; your compound is about to go into the first sick humans it's ever seen, in Phase II. You finally get to find out if that target, that mechanism, actually works in people. And if it does (congratulations!), then comes the prize. You get to spend the real money in Phase III: lots and lots of patients, all sorts of patients, in what's supposed to be a real-world shakedown. Prepare to shell out more than you've spent in the whole process to date, because Phase III trials will empty your pockets for sure.
Is your compound one of the five or ten out of a hundred that makes it through Phase III? Enjoy the sensation, because most medicinal chemists experience that only once in their careers, if that. Now you're only a year or two away from getting your drug through the FDA and seeing if it will succeed or fail on the market. And good luck there, too. Contrary to what you might read, not all drugs earn back their costs, so the ones that do had better earn out big-time.
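The attrition described above compounds multiplicatively across phases. A sketch with rough, round-number transition probabilities (these are assumptions for illustration - published attrition rates vary considerably by therapeutic area):

```python
# Illustrative probabilities that a compound entering each
# stage survives to the next (assumed round numbers; real
# published figures vary by disease area).
phases = {
    "Phase I": 0.6,
    "Phase II": 0.3,
    "Phase III": 0.55,
    "FDA approval": 0.9,
}

overall = 1.0
for phase, p_success in phases.items():
    overall *= p_success
    print(f"Through {phase}: {overall:.1%} of entrants remain")

# With these numbers, roughly 9% of compounds entering
# Phase I reach the market - consistent with "a bit over
# ninety per cent" failing in trials.
```

Note where the money dies: Phase II (does the mechanism work in patients?) is the single biggest killer in this toy model, just as it is in the narrative above.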
There. That wasn't so easy, was it? And I know that I've left things out, too. The point of all this is that most people have no idea of all these steps - what they're like, how long they can take, that they even exist. It wouldn't surprise me if many people imagine drug discovery, when they imagine it at all, to be the reach-in-the-pile-and-find-a-drug process that I mentioned in the second paragraph. Everything else is figuring out what color to make the package and how much to overcharge for it.
That's why I started this blog back in 2002 - because I was spending all my time on a fascinating, tricky, important job that no one seemed to know anything about. All these details consume the lives and careers of vast numbers of researchers - it's what I've been doing since 1989 - and I wanted, still want, to let people know that we exist.
In the meantime, for the Donald Lights of the world, the Marcia Angells, and the people who repeat their numbers despite apparently knowing nothing about how drugs actually get developed - well, here are some more details for you. The readers of this site with experience in the field will be able to tell you if I haven't described it pretty much as it is. It's not like I and others haven't tried to tell you before.
Category: Drug Development | Drug Prices
August 14, 2012
We've spoken several times around here about the problems with reproducing work in the scientific literature. You have to expect some slippage on cutting-edge work, just because it's very complex and is being looked at for the first time. But at the same time, it's that sort of work that we're depending on to advance a field, so when it turns out to be wrong, it causes more damage than something older and more obscure that falls apart.
There's a new effort which is trying to attack the problem directly. Very directly. The Reproducibility Initiative is inviting people to have their work independently confirmed by third-party researchers. You'll be responsible for the costs, but at the end of it, you'll have a certification that your results have been verified. The validation studies themselves will be published in the new PLOS ONE Reproducibility Collection, and several leading publishers have agreed to link the original publications back to this source.
I very much hope that this catches on. The organizers have rounded up an excellent advisory committee, with representatives from academia and industry, both of which would be well served by more accurate scientific publication. I can especially see this being used when someone is planning to commercialize some new finding - going to the venture capital folks with independent verification will surely count for a lot. Granting agencies should also pay attention, and reward people accordingly.
Here's an article by Carl Zimmer with more on the idea. I'll be keeping a close eye on this myself, and hope to highlight some of the first studies to make it through the process. With any luck, this can become the New Normal for groundbreaking scientific results.
Category: General Scientific News | The Scientific Literature
I wrote here about Ampyra, the multiple sclerosis drug from Acorda Therapeutics, one that came close to the record for "simplest chemical matter in a marketed drug". (As it happens, Biogen Idec is making sure that it doesn't even have the title of "simplest drug for multiple sclerosis", and the shadow of valproic acid looms over this entire competition).
That post mentioned some doubts that had been expressed about how effective Ampyra is for its target: improving gait in MS patients. And now those doubts are increasing, because the company has been asked to conduct a trial of a lower 5 mg dose of the drug along with the approved 10 mg one (which was associated with seizures in some patients). And neither one of them met the primary endpoint. As that link shows, the company has several explanations - different endpoint than used before, higher placebo response than usual, wider variety of patients - but those are all ex post facto. Acorda wouldn't have set up the trial like this in the first place if they didn't think that the approved dose would work, and it didn't.
For a drug with a rather narrow symptomatic indication, that's not good news. And it comes as Acorda is still trying to get the compound approved in Europe. The cost/benefit ratio usually can't stand a big hit to the "benefit" term.
Category: Clinical Trials | Regulatory Affairs | The Central Nervous System
August 13, 2012
Here's a response from Prof. Light to my post the other day attacking his positions on drug research. I've taken it out of that comments thread to highlight it - he no longer has to wonder if I'll let people here read what he has to say.
I'll have a response as well, but that'll most likely be up tomorrow - I actually have a very busy day ahead of me in the lab, working on a target that (as far as any of us in my group can tell) no one has ever attacked, for a disease for which (as far as any of us in my group can tell) no one has ever found a therapy. And no, I am not making that up.
It's hard to respond to so many sarcastic and baiting trashings by Dr. Lowe and some of his fan club, but let me try. I wonder if Dr. Lowe allows his followers to read what I write here without cutting and editing.
First, let me clarify some of the misrepresentations about the new BMJ article that claims the innovation crisis is a myth. While the pharmaceutical industry and its global network of journalists have been writing that the industry has been in real trouble because innovation has been dropping, all those articles and figures are based on the decline of new molecules approved since a sharp spike. FDA figures make it clear that the so-called crisis has been simply a return to the long-term average. In fact, in recent years, companies have been getting above-average approvals for new molecules. Is there any reasonable argument with these FDA figures? I see none from Dr. Lowe or in the 15 pages of comments.
Second, the reported costs of R&D have been rising sharply, and we do not go into these; but here are a couple of points. We note the big picture: total additional investments in R&D (which are self-reported from closely held figures) over the past 15 years were matched by a six-times-greater increase in revenues. We can all guess various reasons why, but surely a 6-fold return is not a crisis or "unsustainable." In fact, it's evidence that companies know what they are doing.
Another point from international observers is that the costs of clinical trials in the U.S. are much higher than in equally affluent countries and much higher than they need to be, because everyone seems to make money the higher they are in the U.S. market. I have not looked into this but I think it would be interesting to see in what ways costly clinical trials are a boon for several of the stakeholders.
Third, regarding that infamously low cost of R&D that Dr. Lowe and readers like to slam, consider this: the low estimate is based on the same costs of R&D reported by companies (which are self-reported from closely held figures) to their leading policy research center as were used to estimate that the average cost is $1.3bn (and soon to be raised again). Doesn't that make you curious enough to want to find out how we show what inflators were used to ramp the reported costs up, which we used to do the same in reverse? Would it be unfair to ask you to actually read how we took this inflationary estimate apart? Or is it easier just to say our estimate is "idiotic" and "absurd"? How about reading the whole argument at www.pharmamyths.net and then discussing its merits?
Our estimate is for net, median corporate cost of D(evelopment) for that same set of drugs from the 1990s that the health economists supported by the industry used to ramp up the high estimate. Net, because taxpayer subsidies, which the industry has fought hard to expand, pay for about 44% of gross R&D costs. Median, because a few costly cases, which are always featured, raise the average artificially. Corporate, because a lot of R(esearch) and some D is paid for by others - governments, foundations, institutes. We don't include an estimate for R(esearch) because no one knows what it is and it varies so much, from a chance discovery that costs almost nothing to years and decades of research, failures, dead ends, and new angles before an effective drug is finally discovered.
So it's an unknown and highly variable R plus a more knowable estimate of net, median, corporate costs. Even then, companies never show their books, and they never compare their costs of R&D to revenues and profits. They just keep telling us their unverifiable costs of R&D are astronomical.
We make clear that neither we nor anyone else knows either the average gross cost or the net, median costs of R&D because major companies have made sure we cannot. Further, the "average cost of R&D" estimate began in 1976 as a lobbying strategy to come up with an artificial number that could be used to wow Congressmen. It's worked wonderfully, mythic as it may be.
Current layoffs need to be considered (as do most things) from a 10-year perspective. A lot of industry observers have commented on companies being "bloated" and adding too many hires. Besides trimming back to earlier numbers, the big companies increasingly realize (it has taken them years) that it's smarter to let thousands of biotechs and research teams try to find good new drugs, rather than doing it in-house. To regard those layoffs as an abandonment of research misconstrues the corporate strategies.
Fourth, we never use "me-too." We speak of minor variations, and we say it's clinically valuable to have 3-4 in a given therapeutic class, but marginal gains fall quite low after that.
Fifth, our main point about innovation is that current criteria for approval and incentives strongly reward companies doing exactly what they are doing, developing scores of minor variations to fill their sales lines and market for good profits. We don't see any conspiracy here, only rational economic behavior by smart businessmen.
But while all new drug products are better than placebo or not too much worse than a comparator, often against surrogate endpoints, most of them prove to be little better than last year's "better" drugs, or the year's before. . . You can read detailed assessments by independent teams at several sites. Of course companies are delighted when new drugs are really better against clinical outcomes; but meantime we cite evidence that 80 percent of additional pharmaceutical costs go to buying newly patented minor variations. The rewards for doing anything to get another cancer drug approved are so great that independent reviewers find few of them help patients much, and the area is corrupted by conflict-of-interest marketing.
So we conclude there is a "hidden business model" behind the much-touted business model - spend billions on R&D to discover breakthrough drugs that greatly improve health - which works fine until the "patent cliff" sends the company crashing to the canyon floor. The heroic tale is true to some extent and sometimes; but the hidden business model is to develop minor variations and make solid profits from them. That sounds like rational economic behavior to me.
The trouble is, all these drugs are under-tested for risks of harm, and all drugs are toxic to one degree or another. My book, The Risks of Prescription Drugs, assembles evidence that there is an epidemic of harmful side effects, largely from hundreds of drugs with few or no advantages to offset their risks of harm.
Is that what we want? My neighbors want clinically better drugs. They think the FDA approves clinically better drugs and don't realize that's far from the case. Most folks think "innovation" means clinically superior, but it doesn't. Most new molecules do not prove to be clinically superior. The term "innovation" is used vaguely to signal better drugs for patients; but while many new drugs are technically innovative, they do not help patients much. The false rhetoric of "innovative" and "innovation" needs to be replaced by what we want and mean: "clinically superior drugs."
If we want clinically better drugs, why don't we ask for them and pay according to added value - no more if no better, and a lot more if substantially better? Instead, standards for testing effectiveness and risk of harms are being lowered, and - guess what - that will reward still more minor variations by rational economic executives, not more truly superior "innovative" drugs.
I hope you find some of these points worthwhile and interesting. I'm trying to reply to 20 single-spaced pages of largely inaccurate criticism, often with no reasoned explanation for a given slur or dismissal. I hope we can do better than that. I thought the comments by Matt #27 and John Wayne #45 were particularly interesting.
Donald W. Light
Category: "Me Too" Drugs | Drug Development | Drug Prices
August 10, 2012
Here's some food for thought: in some cases, chemotherapy may actually accelerate the growth of tumor cells. This study has found that noncancerous cells (which are also affected, to various degrees, by most chemotherapy agents) can secrete Wnt16B in response to treatment, and that this protein is taken up by nearby tumor cells.
And that's not good. Here's the full paper, which looks at prostate cells. The secretion of the Wnt protein is mediated by NF-kappa-B in response to the DNA damage caused by many therapeutic agents, and it acts as a paracrine signal for the surrounding cells. And the resulting initiation of the Wnt pathway is bad news, because that's already been implicated in tumor cell biology. Here's the bad news slide: conditioned media from cultures of the normal prostate fibroblast cells, after exposure to therapeutic agents, causes prostate tumor cells to proliferate and become more mobile, an effect that can be canceled out by blocking Wnt16B.
Finding out that this can be set off by normal cells in the neighborhood means that we may need to do some rethinking about how chemotherapy is administered. But it would also seem to open a window to block Wnt signaling as an adjunct therapy. That's already been the subject of a good amount of research, since the importance to tumor biology was already known - here's a recent review. Development of these agents now looks more useful than ever. . .
Category: Cancer
August 9, 2012
The British Medical Journal says that the "widely touted innovation crisis in pharmaceuticals is a myth". The British Medical Journal is wrong.
There, that's about as direct as I can make it. But allow me to go into more detail, because that's not the only thing they're wrong about. This is a new article entitled "Pharmaceutical research and development: what do we get for all that money?", and it's by Joel Lexchin (York University) and Donald Light of UMDNJ. And that last name should be enough to tell you where this is all coming from, because Prof. Light is the man who's publicly attached his name to an estimate that developing a new drug costs about $43 million.
I'm generally careful, when I bring up that figure around people who actually develop drugs, not to do so when they're in the middle of drinking coffee or working with anything fragile, because it always provokes startled expressions and sudden laughter. These posts go into some detail about how ludicrous that number is, but for now, I'll just note that it's hard to see how anyone who seriously advances that estimate can be taken seriously. But here we are again.
Light and Lexchin's article makes much of Bernard Munos' work (which we talked about here), which shows a relatively constant rate of new drug discovery. They should go back and look at his graph, because they might notice that the slope of the line in recent years has not kept up with the historical rate. And they completely leave out one of the other key points that Munos makes: that even if the rate of discovery were to have remained linear, the costs associated with it sure as hell haven't. No, it's all a conspiracy:
"Meanwhile, telling "innovation crisis" stories to politicians and the press serves as a ploy, a strategy to attract a range of government protections from free market, generic competition."
Ah, that must be why the industry has laid off thousands and thousands of people over the last few years: it's all a ploy to gain sympathy. We tell everyone else how hard it is to discover drugs, but when we're sure that there are no reporters or politicians around, we high-five each other at how successful our deception has been. Because that's our secret, according to Light and Lexchin. It's apparently not any harder to find something new and worthwhile, but we'd rather just sit on our rears and crank out "me-too" medications for the big bucks:
"This is the real innovation crisis: pharmaceutical research and development turns out mostly minor variations on existing drugs, and most new drugs are not superior on clinical measures. Although a steady stream of significantly superior drugs enlarges the medicine chest from which millions benefit, medicines have also produced an epidemic of serious adverse reactions that have added to national healthcare costs".
So let me get this straight: according to these folks, we mostly just make "minor variations", but the few really new drugs that come out aren't so great either, because of their "epidemic" of serious side effects. Let me advance an alternate set of explanations, one that I call, for lack of a better word, "reality". For one thing, "me-too" drugs are not identical, and their benefits are often overlooked by people who do not understand medicine. There are overcrowded therapeutic areas, but they're not common. The reason that some new drugs make only small advances on existing therapies is not because we like it that way, and it's especially not because we planned it that way. This happens because we try to make big advances, and we fail. Then we take what we can get.
No therapeutic area illustrates this better than oncology. Every new target in that field has come in with high hopes that this time we'll have something that really does the job. Angiogenesis inhibitors. Kinase inhibitors. Cell cycle disruptors. Microtubules, proteosomes, apoptosis, DNA repair, metabolic disruption of the Warburg effect. It goes on and on and on, and you know what? None of them work as well as we want them to. We take them into the clinic, give them to terrified people who have little hope left, and we watch as we provide them with, what? A few months of extra life? Was that what we were shooting for all along? Do we grin and shake each others' hands when the results come in? "Another incremental advance! Rock and roll!"
Of course not. We're disappointed, and we're pissed off. But we don't know enough about cancer (yet) to do better, and cancer turns out to be a very hard condition to treat. It should also be noted that the financial incentives are there to discover something that really does pull people back from the edge of the grave, so you'd think that we money-grubbing, public-deceiving, expense-padding mercenaries might be attracted by that prospect. Apparently not.
The same goes for Alzheimer's disease. Just how much money has the industry spent over the last quarter of a century on Alzheimer's? I worked on it twenty years ago, and God knows that never came to anything. Look at the steady march, march, march of failure in the clinic - and keep in mind that these failures tend to come late in the game, during Phase III, and if you suggest to anyone in the business that you can run an Alzheimer's Phase III program and bring the whole thing in for $43 million, you'll be invited to stop wasting everyone's time. Bapineuzumab's trials have surely cost several times that, and Pfizer/J&J are still pressing on. And before that you had Elan working on active immunization, which is still going on, and you have Lilly's other antibody, which is still going on, and Genentech's (which is still going on). No one has high hopes for any of these, but we're still burning piles of money to try to find something. And what about the secretase inhibitors? How much time and effort has gone into beta- and gamma-secretase? What did the folks at Lilly think when they took their inhibitor way into Phase III only to find out that it made Alzheimer's slightly worse instead of helping anyone? Didn't they realize that Professors Light and Lexchin were on to them? That they'd seen through the veil and figured out the real strategy of making tiny improvements on the existing drugs that attack the causes of Alzheimer's? What existing drugs that target the causes of Alzheimer's are they talking about?
Honestly, I have trouble writing about this sort of thing, because I get too furious to be coherent. I've been doing this sort of work since 1989, and I have spent the great majority of my time working on diseases for which no good therapies existed. The rest of the time has been spent on new mechanisms, new classes of drugs that should (or should have) worked differently than the existing therapies. I cannot recall a time when I have worked on a real "me-too" drug of the sort that Light and Lexchin seem to think the industry spends all its time on.
That's because of yet another factor they have not considered: simultaneous development. Take a look at that paragraph above, where I mentioned all those Alzheimer's therapies. Let's be wildly, crazily optimistic and pretend that bapineuzumab manages to eke out some sort of efficacy against Alzheimer's (which, by the way, would put it right into that "no real medical advance" category that Light and Lexchin make so much of). And let's throw caution out the third-floor window and pretend that Lilly's solanezumab actually does something, too. Not much - there's a limit to how optimistic a person can be without pharmacological assistance - but something, some actual efficacy. Now here's what you have to remember: according to people like the authors of this article, whichever of these antibodies makes it through second is a "me-too" drug that offers only an incremental advance, if anything. Even though all this Alzheimer's work was started on a risk basis, in several different companies, with different antibodies developed in different ways, with no clue as to who (if anyone) might come out on top.
All right, now we get to another topic that articles like this latest one are simply not complete without. That's right, say it together: "Drug companies spend a lot more on marketing than they do on research!" Let's ignore, for the sake of argument, the large number of smaller companies that spend all of their money on R&D and none on marketing, because they have nothing to market yet. Let's even ignore the fact that over the years, the percentage of money being spent on drug R&D has actually been going up. No, let's instead go over this in a way that even professors at UMDNJ and York can understand:
Company X spends, let's say, $10 a year on research. (We're lopping off a lot of zeros to make this easier). It has no revenues from selling drugs yet, and is burning through its cash while it tries to get its first one onto the market. It succeeds, and the new drug will bring in $100 a year for the first two or three years, before the competition catches up with some of the incremental me-toos that everyone will switch to for mysterious reasons that apparently have nothing to do with anything working better. But I digress; let's get back to the key point. That $100 a year figure assumes that the company spends $30 a year on marketing (advertising, promotion, patient awareness, brand-building, all that stuff). If the company does not spend all that time and effort, the new drug will only bring in $60 a year, but that's pure profit. (We're going to ignore all the other costs, assuming that they're the same between the two cases).
So the company can bring in $60 a year by doing no promotion, or it can bring in $70 a year after accounting for the expenses of marketing. The company will, of course, choose the latter. "But," you're saying, "what if all that marketing expense doesn't raise sales from $60 up to $100 a year?" Ah, then you are doing it wrong. The whole point, the raison d'être of the marketing department is to bring in more money than they are spending. Marketing deals with the profitable side of the business; their job is to maximize those profits. If they spend more than those extra profits, well, it's time to fire them, isn't it?
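For anyone who'd rather see the toy arithmetic laid out explicitly, here it is in a few lines of Python - the figures are the made-up ones from the example above, not real financials:

```python
# Toy model of the marketing-spend decision, using the made-up figures above.
revenue_no_marketing = 60     # sales with zero promotion
revenue_with_marketing = 100  # sales after spending on promotion
marketing_cost = 30

profit_without = revenue_no_marketing                  # 60, pure profit
profit_with = revenue_with_marketing - marketing_cost  # 100 - 30 = 70

# Marketing only makes sense if the sales lift exceeds its cost:
lift = revenue_with_marketing - revenue_no_marketing   # 40, vs. a cost of 30
assert profit_with > profit_without  # so the company promotes
```

If the lift ever drops below the marketing cost, the comparison flips - which is exactly the "time to fire them" case.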
R&D, on the other hand, is not the profitable side of the business. Far from it. We are black holes of finance: huge sums of money spiral in beyond our event horizons, emitting piteous cries and futile streams of braking radiation, and are never seen again. The point is, these are totally different parts of the company, doing totally different things. Complaining that the marketing budget is bigger than the R&D budget is like complaining that a car's passenger compartment is bigger than its gas tank, or that a ship's sail is bigger than its rudder.
OK, I've spent about enough time on this for one morning; I feel like I need a shower. Let's get on to the part where Light and Lexchin recommend what we should all be doing instead:
What can be done to change the business model of the pharmaceutical industry to focus on more cost effective, safer medicines? The first step should be to stop approving so many new drugs of little therapeutic value. . .We should also fully fund the EMA and other regulatory agencies with public funds, rather than relying on industry generated user fees, to end industry’s capture of its regulator. Finally, we should consider new ways of rewarding innovation directly, such as through the large cash prizes envisioned in US Senate Bill 1137, rather than through the high prices generated by patent protection. The bill proposes the collection of several billion dollars a year from all federal and non-federal health reimbursement and insurance programmes, and a committee would award prizes in proportion to how well new drugs fulfilled unmet clinical needs and constituted real therapeutic gains. Without patents new drugs are immediately open to generic competition, lowering prices, while at the same time innovators are rewarded quickly to innovate again. This approach would save countries billions in healthcare costs and produce real gains in people’s health.
One problem I have with this is that the health insurance industry would probably object to having "several billion dollars a year" collected from it. And that "several" would not mean "two or three", for sure. But even if we extract that cash somehow - an extraction that would surely raise health insurance costs as it got passed along - we now find ourselves depending on a committee that will determine the worth of each new drug. Will these people determine that when the drug is approved, or will they need to wait a few years to see how it does in the real world? If the drug under- or overperforms, does the reward get adjusted accordingly? How, exactly, do we decide how much a diabetes drug is worth compared to one for multiple sclerosis, or TB? What about a drug that doesn't help many people, but helps them tremendously, versus a drug that's taken by a lot of people, but has only milder improvements for them? What if a drug is worth a lot more to people in one demographic versus another? And what happens as various advocacy groups lobby to get their diseases moved further up the list of important ones that deserve higher prizes and more incentives?
These will have to be some very, very wise and prudent people on this committee. You certainly wouldn't want anyone who's ever been involved with the drug industry on there, no indeed. And you wouldn't want any politicians - why, they might use that influential position to do who knows what. No, you'd want honest, intelligent, reliable people, who know a tremendous amount about medical care and pharmaceuticals, but have no financial or personal interests involved. I'm sure there are plenty of them out there, somewhere. And when we find them, why stop with drugs? Why not set up committees to determine the true worth of the other vital things that people in this country need each day - food, transportation, consumer goods? Surely this model can be extended; it all sounds so rational. I doubt if anything like it has ever been tried before, and it's certainly a lot better than the grubby business of deciding prices and values based on what people will pay for things (what do they know, anyway, compared to a panel of dispassionate experts?)
Enough. I should mention that when Prof. Light's earlier figure for drug expense came out, I had a brief correspondence with him, and I invited him to come to this site and try out his reasoning on people who develop drugs for a living. Communication seemed to dry up after that, I have to report. But that offer is still open. Reading his publications makes me think that he (and his co-authors) have never actually spoken with anyone who does this work or has any actual experience with it. Come on down, I say! We're real people, just like you. OK, we're more evil, fine. But otherwise. . .
Category: "Me Too" Drugs | Business and Markets | Cancer | Drug Development | Drug Industry History | Drug Prices | The Central Nervous System | Why Everyone Loves Us
August 8, 2012
There doesn't seem to be any mention of it on their web site, but the Neurosciences Institute in San Diego/La Jolla may be closing up shop. A reader received an e-mail about a lab equipment auction at that address, and sure enough, there's an "online lab liquidation auction" being held by BioSurplus. They mention in passing that ". . .The Institute is now shutting down its lab spaces", but I can't find any other mention of what's happened.
Category: The Central Nervous System
What makes a cancer drug effective? What if it stops cancer from spreading when you give it to patients - is that effective, or not? This topic has come up around here before, but there may be a rather stark example of it unfolding with Aveo Pharmaceuticals and their drug tivozanib.
Earlier this year, the company announced results of a trial in renal cell carcinoma of their drug versus the Bayer/Onyx drug Nexavar (sorafenib), which is the standard of care. It's not like Nexavar does a great job in that indication, though - when it was going through clinical trials, it ran in RCC patients versus placebo, since - you guessed it - placebo was the standard of care at the time. And while Nexavar did show a benefit under those conditions, there are still plenty of patients that don't respond. Thus tivozanib, and its window of opportunity. The compound itself is in the same broad chemical class (bi-aryl ureas) as sorafenib.
The Phase III results for the Aveo drug showed an improvement in progression-free survival - tracking the time it takes for the cancer to start spreading again. But progression-free survival does not necessarily mean "survival", not in the sense that cancer patients and their relatives really care about. Dying in the same amount of time, albeit with redistributed tumor tissue, is not the endpoint that people are waiting for.
The company is, of course, monitoring the patients that it's treated. And there's the problem: the current data show, after one year, that 77% of the tivozanib-treated patients are still alive. But 81% of the sorafenib patients have survived, and the FDA has officially expressed concern about the way things are going. That sent Aveo's stock down sharply the other day, as well it might. But there could be a way out:
Aveo said in today’s statement that basically it’s possible the preliminary survival data could be misleading. That’s because in cancer trials like this one, cancer patients whose disease worsens on one drug can then go on to get a second drug which may help them. In this case, Aveo said 53 percent of the patients who were randomly assigned to get the Bayer/Onyx drug went on to get subsequent therapy after their disease worsened—and “nearly all” of them were given Aveo’s tivozanib. By contrast, only 17 percent of the patients who were randomly assigned to initially get the Aveo drug went on to get a subsequent therapy. So it’s possible that the patients in the Bayer/Onyx control group may be ending up living longer at least partly because of the Aveo drug they got later on.
We'll have to wait for more data to sort all this out. Until that point, Aveo (and its shareholders) are probably in for a bumpy ride. But it's worth remembering that renal cell carcinoma patients are having a rather harder time of it than anyone else in this story, and they're the people who will be watching this most closely of all. . .
Category: Cancer | Clinical Trials
August 7, 2012
As expected (by all but the most relentlessly optimistic observers), the anti-Alzheimer's antibody bapineuzumab has now failed in its most likely patient population. Results came out last night from patients who do not carry the ApoE4 allele, the only group that seemed to offer hope in earlier clinical trials. The therapy missed its endpoints versus placebo, and according to Pharmalot, subgroup analysis offered no hope that there was some further fraction of patients that might be responding. (You would have had to have been a pretty hardy investor to carry on even if something had shown up).
But apparently Pfizer and J&J are those hardy investors, because (as that link shows), they're apparently going on with an already-in-progress Phase II study of the antibody dosed subcutaneously. That baffles me - I don't know enough about antibody dosing to say if that makes a difference, but it seems odd to think that it would. And clinical work on another active immunization therapy is going on as well (as opposed to dosing a pre-made antibody).
Good luck to them on that - I mean that sincerely, because the Alzheimer's field needs any successes it can find. The immunological approach has been a long and hard one, and hasn't delivered much encouragement so far. On the other hand, it's immunology, which means that it's still a wild black box in many ways and capable of all kinds of unexpected results. That said, it's still hard to imagine that Eli Lilly's competing antibody solanezumab has much chance of working at this point. We'll hear about that one soon, and I very much expect to be using the phrase "missed endpoints" again. I might be using the phrase "subgroup analysis", though, in which case the phrase "more money" will also make an appearance.
Category: Alzheimer's Disease | Clinical Trials
Courtesy of a reader in the UK, here's an ad from GlaxoSmithKline that I don't think has been seen much on this side of the Atlantic. I hadn't realized that they were involved in the drug testing for the London games; it's interesting that their public relations folks feel that it's worth highlighting. They're almost certainly right - I think one of the major objections people have when they hear of a case of athletic doping is a violation of the spirit of fair play.
But one can certainly see the hands of the advertising people at work. The naphthyl rings for the double-O of "blood" are a nice touch, but the rest of the "chemistry" is complete nonsense. Update: it's such complete nonsense that they have the double bonds in the naphthyl rings banging into each other, which I hadn't even noticed at first. Is it still a "Texas Carbon" when it's from London? In fact, it's so far off that it took me a minute of looking at the image to realize that the reason things were written so oddly was that the words were supposed to be more parts of a chemical formula. It's that wrong - the chemical equivalent of one of those meaningless Oriental language tattoos.
But as in the case of the tattoos, it probably gets its message across to people who've never been exposed to any of the actual symbols and syntax. I'd be interested to know if this typography immediately says "Chemistry!" to people who don't know any. I don't have many good opportunities to test that, though - everyone around me during the day knows the lingo!
Category: Analytical Chemistry | General Scientific News
August 6, 2012
You may well recall the startling results of a modified T-cell therapy against leukemia that were reported one year ago. I'm happy to report that Novartis is investing in this technology and putting their own considerable amount of development expertise into making it work on a larger scale:
“I never thought this would happen, that the pharma industry would get into ultra-personalized therapy,” (Penn scientist Carl) June said in a telephone interview. “We had lots of venture capital interest, but it’s hard to be a new company and it takes time to get set up. The fastest route to widespread availability is to use an existing company.”
Novartis was one of three companies to negotiate with the university, according to June, who declined to name the other two. Novartis was selected in part because of its experience with Gleevec, a drug used to treat chronic myeloid leukemia. . .
. . .June’s group is now treating 1 patient a week, he said. The Novartis collaboration will help more people get treatment, said June, who is a professor of pathology and laboratory medicine at the University’s Abramson Cancer Center.
In addition to further trials in leukemia, the UPenn group has also launched trials for lymphoma, mesothelioma, myeloma, and neuroblastoma.
Ah, but in oncology, it's probably going to be the case that every patient will be the subject of personalized therapy, to some degree. So the interest from the big companies makes a lot of sense. Good luck to Novartis and the Penn team - this work has tremendous potential, and I'm very glad to see the funding and manpower come in to investigate it.
Category: Cancer
How important is it to have an "anchor" company in a regional bio/pharma cluster? How do you get a thriving cluster of biotech companies, anyway? There are a lot of cities that would like the answers to these questions, not that anyone has them (although there are consultants who will be glad to convince you otherwise).
Luke Timmerman has thoughts on the subject here, pointing out that some of the more well-known biotech hubs have been losing some of their marquee names to takeovers and the like. This has to have an effect, and the question is just how big (or bad) it'll be.
Other companies, in some places, might be able to step up and fill the void, but not always. If there isn't a robust culture in an area (or not yet), then taking out the main company that's driving things might bring the whole process to a halt. If, in fact, it is a process - and that takes us back to the whole question of how these clusters get started in the first place. The biggest and most impressive share some common features (well-known research universities in the area, to pick the most obvious), but what seem to be very similar features in other locations can fail to produce similar results.
Many are the cities that have tried to grow their own Silicon Valleys and Boston/Cambridges. Overall, I'm skeptical of attempts to purposefully induce these sorts of things, and that goes both for R&D clusters as well as the various city-planning attempts to bring in young creative-class types. At best, these seem to me to be likely to be missing some key variables, and at worst, they're reminiscent of South Pacific cargo cults. ("If we make this look like a happening city, then that's what it'll be!") It's not that I think more research hot spots would be a bad thing, of course - just the opposite. It's just that I don't know how you achieve that result.
Category: Who Discovers and Why
I was up late last night, watching the folks at JPL celebrate the landing of the Mars Science Laboratory, Curiosity. (And needless to say, I was glad to see that the elaborate landing technology worked so well, as opposed to the back-up technique of "lithobraking", which is reliable but a bit hard on the equipment). I'm looking forward to seeing updates on Martian chemistry for the next few years.
And since we are well into the 21st century, it's only fitting and proper that we have a laser-firing, nuclear-powered robot rolling around on Mars. On to Europa, Titan, and Enceladus!
Category: General Scientific News
August 3, 2012
I'm an unabashed fan of phenotypic screening. (For those outside the field, that means screening for compounds by looking for their effects on living systems, rather than starting with a molecular target and working your way up). Done right, I don't think that there's a better platform for breakthrough drug discovery, mainly because there's so much we don't know about what really goes on in cells and in whole organisms.
Doing it right isn't easy, though, nor will you necessarily find anything even if you do. But there's a recent paper in Nature that is, I think, a model of the sort of thing that we should all be thinking about. A collaboration between the Shokat group at UCSF and the Cagan group at Mt. Sinai, this project deliberately looks at one of the trickiest aspects of drug discovery: polypharmacology. "One target, one drug" is all very well, but what if your drugs hit more than one target (as they generally do)? Or what if your patients will only be served by hitting more than one target (as many diseases, especially cancer, call for)? The complexities get out of control very quickly, and model systems would be very helpful indeed.
This work goes all the way back to fruit flies, good ol' Drosophila, and the authors picked a well-characterized cancer pathway: multiple endocrine neoplasia type 2 (MEN2). This is known to be driven by gain-of-function mutations in the Ret pathway, and patients with such mutations show a greatly increased rate of endocrine tumors (thyroid, especially). Ret is a receptor tyrosine kinase, and the receptor is one that recognizes the GDNF family of signaling peptides. As oncology pathways go, this one is fairly well worked out, not that it's led to any selective Ret inhibitor drugs so far (although many have tried and are trying).
Using this Ret-driven fly model, the teams ran a wide variety of kinase inhibitor molecules past the insects, looking for their effects, while at the same time profiling the compounds across a long list of kinase enzymes. This gives you a chance to do something that you don't often get a chance to do: match one kind of fingerprint to another kind. And what they found was that you needed "balanced polypharmacology" to get optimal phenotypic effects. The compounds that inhibited the Drosophila equivalents of Ret, Raf, Src and S6K all at the same time made the flies survive the longest. That's quite a blunderbuss list. But some very similar compounds weren't as good, and that turned out to be due to the activity on Tor. Working these combinations out was not trivial - it took a lot of different strains of flies with different levels of kinase activity, and a lot of different compounds with varying profiles.
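To make the fingerprint-matching idea concrete, here's a toy scoring sketch in Python. Everything in it - the compound names, the inhibition values, and the weighting scheme - is invented for illustration; it is not the paper's actual data or method, just one way of capturing "hit Ret/Raf/Src/S6K, spare Tor":

```python
# Illustrative sketch of scoring "balanced polypharmacology" fingerprints.
# Each fingerprint maps kinase -> fraction inhibited (all values invented).
compounds = {
    "balanced":      {"Ret": 0.90, "Raf": 0.8, "Src": 0.7, "S6K": 0.8, "Tor": 0.1},
    "ret_only":      {"Ret": 0.95, "Raf": 0.1, "Src": 0.1, "S6K": 0.1, "Tor": 0.1},
    "tor_liability": {"Ret": 0.90, "Raf": 0.8, "Src": 0.7, "S6K": 0.8, "Tor": 0.9},
}

WANTED = ("Ret", "Raf", "Src", "S6K")   # inhibit all of these...
PENALIZED = ("Tor",)                    # ...while sparing this one

def balance_score(fp):
    """Reward inhibition of the wanted kinases, penalize the liability."""
    reward = sum(fp[k] for k in WANTED)
    penalty = sum(fp[k] for k in PENALIZED)
    return reward - 2.0 * penalty       # penalty weight is arbitrary here

ranked = sorted(compounds, key=lambda name: balance_score(compounds[name]),
                reverse=True)
# The broadly active compound without the Tor liability ranks first
```

The real experiment, of course, goes the other way around: the fly phenotypes tell you which profile is the good one, and the kinase panel tells you which compounds have it.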
Now, these kinases cover an awful lot of ground, as you'll know if you've worked in the field, or if you just click on those links and look at some of the pathway diagrams. There is, I think it's fair to say, no way that anyone could have identified these particular combinations with certainty without running the experiment in a real system; there are just too many branching, intersecting ramifications to get a clear picture of what would happen. Thus, phenotypic screening: let the real system tell you.
So, you may be thinking, fruit flies. Great. Does that tell us anything real? In this case, it looks like it does. The compound profiles that were seen in the model system translated to human cell lines, and to mouse xenograft models. And while neither of those is a perfect indicator (far from it), they're about the best we have, and many are the compounds that have gone into human trials with just such data.
I look forward to more applications of this technique, to see how far it can be pushed. Ret looks like a well-chosen test case - what happens when you go on to even trickier ones? It won't be easy, but being able to unravel some of the polypharmacology when you're still back at the fruit-fly stage will be worth the effort.
Category: Cancer | Drug Assays
August 2, 2012
Here's a useful overview of the public-domain medicinal chemistry databases out there. It covers the big three databases in detail:
BindingDB (quantitative binding data to protein targets).
ChEMBL (wide range of med-chem data, overlaps a bit with PubChem).
PubChem (data from NIH Roadmap screen and many others).
And these others:
Binding MOAD (literature-annotated PDB data).
ChemSpider (26 million compounds from hundreds of data sources).
DrugBank (data on 6700 known drugs).
GRAC and IUPHAR-DB (data on GPCRs, ion channels, and nuclear receptors, and ligands for all of these).
PDBbind (more annotated PDB data).
PDSP Ki (data from UNC's psychoactive drug screening program).
SuperTarget (target-compound interaction database).
Therapeutic Targets Database (database of known and possible drug targets).
ZINC (21 million commercially available compounds, organized by class, downloadable in various formats).
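Most of these resources expose simple web interfaces as well as bulk downloads. As one quick sketch of programmatic access - and note that the ChEMBL endpoint path below is my assumption based on EBI's public web-services documentation, so check the current docs before relying on it:

```python
# Hedged sketch: building a ChEMBL web-services URL for a compound record.
# The base path is an assumption; consult EBI's current API documentation.
from urllib.parse import quote

CHEMBL_BASE = "https://www.ebi.ac.uk/chembl/api/data"  # assumed endpoint

def molecule_url(chembl_id, fmt="json"):
    """Return the (assumed) REST URL for a single molecule record."""
    return f"{CHEMBL_BASE}/molecule/{quote(chembl_id)}.{fmt}"

url = molecule_url("CHEMBL25")
# Actually fetching it is left to the reader, e.g. urllib.request.urlopen(url)
```

The other databases in the list each have their own download or query conventions, so the same caveat applies across the board.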
There is the irony of a detailed article on public-domain databases appearing behind the ACS paywall, but the literature is full of such moments as that. . .
Category: Biological News | Chemical News | Drug Assays
August 1, 2012
There have been a number of odd developments in the Sheri Sangji case, the lab fatality at UCLA that led to criminal charges being filed against both Prof. Patrick Harran and the university. The Doing Good Science blog at Scientific American has a thorough round-up of the latest.
Last Friday, charges were dropped against UCLA, and the case against Harran was separated. As part of the deal, UCLA agreed to establish a memorial scholarship and to improve its safety measures. We're still finding out about those, but Chemjobber has more details here and here.
I note that this seems to be a process-heavy, paperwork-heavy system that's going into place. And while that might help, I'm going to remain skeptical, since I've worked under similar conditions, and it did not stop some people from having lab accidents that they shouldn't have had. Now, there's no way of knowing how many accidents these policies prevented, but the ones that got through were just the sorts of things that this safety regime was designed to prevent. So one does have to wonder. It's a natural impulse to think that process improvements are the answer to such situations, though. It's especially going to come out that way when you have lawyers watching who will want to see measurable, quantifiable steps being taken. It's not possible to measure how many people avoided lab hazards as a result of your safety measures - but it is possible to count how many meetings people have to attend, how many standard operating procedures they have to generate and sign off on, and so on.
Now as far as Prof. Harran's case, here's where things get weird. His legal team appears to be attacking the California OSHA report on the lab incident by pointing out that its author was involved in a murder plot as a teenager and lied about it to investigators (falsus in uno. . .). After some confusion about whether this was even the same person, word is now that the investigator has suddenly resigned from his position as a public safety commissioner. So perhaps Harran's lawyers are on to something.
But on to what, exactly? I can understand this as a legal tactic. It's not a very honorable one, but it's been said that lawyers will do anything for you that can be done while wearing a nice suit (that's their ethical boundary). Their job is to exonerate their client, and they will do pretty much anything that leads to that result. Will this? Doubt can be cast on the personal history of the Cal OSHA investigator, for sure, but can it be cast on the report that he wrote? Chemjobber's guess is that this indicates that plea bargaining isn't going well, and that seems quite believable to me.
So that's the state of things now. There will be more, perhaps a lot more, the way this case is going so far. Whether it all will lead to a just outcome depends on what you think justice is, and how it might be served here. And whether any of it will keep someone in the future from being killed by a dangerous reagent that they did not appear ready to use, I have no idea. The longer all this goes on, and the more convoluted it gets, the more I wonder about that.
Category: Safety Warnings
I'm told that BMS is cutting scientific staff today in New Jersey as they refocus some of their therapeutic areas. More details as I get them (or in the comments below).
Update: there are cuts in the metabolic disease area. The company apparently feels that traditional diabetes drug discovery has become a challenging area, both because it's become increasingly well-served and because the regulatory/clinical path has become much more difficult in recent years. . .
Update #2: the company has now confirmed that it has eliminated "fewer than 100" positions, but is giving no further details. In lieu of those, it has chosen, like so many other organizations, to inform the world that it ". . .is strategically evolving the company’s Research focus to ensure the delivery of a sustainable, innovative drug pipeline in areas of serious unmet medical need and potential commercial growth," and that it "is aligning and building internal capabilities. . ." In case you were wondering.
Category: Business and Markets