About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
In the Pipeline:
Don't miss Derek Lowe's excellent commentary on drug discovery and the pharma industry in general at In the Pipeline
September 15, 2014
It's time for a hang-heads-in-shame moment. This is another off the Twitter feed, and the only place to see the figure in its native state is to go to the Chemical Reviews table of contents and scroll down to the article titled "Aqueous Rechargeable Li and Na Ion Batteries". A perfectly reasonable topic, but take a look at the graphical abstract figure. Oh, dear.
Category: The Scientific Literature
Last year I mentioned a paper that described the well-known drug tramadol as a natural product, isolated from a species of tree in Cameroon. Rather high concentrations were found in the root bark, and the evidence looked solid that the compound was indeed being made biochemically.
Well, thanks to chem-blogger Quintus (and a mention on Twitter by See Arr Oh), I've learned that this story has taken a very surprising turn. This new paper in Angew. Chem. investigates the situation more closely. And you can indeed extract tramadol from the stated species - there's no doubt about it. You can extract three of its major metabolites, too - its three major mammalian metabolites. That's because, as it turns out, tramadol is given extensively to cattle (!) in the region, so much of it that the parent drug and its metabolites have soaked into the soil enough for the African peach/pincushion tree to have taken it up into its roots. I didn't see that one coming.
The farmers apparently take the drug themselves, at pretty high dosages, saying that it allows them to work without getting tired. Who decided it would be a good thing to feed to the cows, no one knows, but the farmers feel that it benefits them, too. So in that specific region in the north of Cameroon, tramadol contamination in the farming areas has built up to the point that you can extract the stuff from tree roots. Good grief. In southern Cameroon, the concentrations are orders of magnitude lower, and neither the farmers nor the cattle have adopted the tramadol-soaked lifestyle. Natural products chemistry is getting trickier all the time.
Category: Analytical Chemistry | Chemical News | Natural Products
September 12, 2014
Well, it was not a dull evening around the In the Pipeline headquarters last night. I submitted a link to Reddit for my post yesterday about Retrophin and Thiola, and that blew up onto that site's front page. The Corante server melted under the impact, which isn't too surprising, since it's struggling at the best of times. (A site move really is coming, and no, I can't wait, either, at this point.)
But then, to my great surprise, Martin Shkreli (CEO of Retrophin) showed up in the Reddit thread, doing an impromptu AMA (Ask Me Anything), which I have to say takes quite a bit of aplomb (or perhaps foolhardiness - I don't think too many other CEOs of any publicly traded corporations would have done it). But not too long after that, the entire thread vanished off the front page, and off of r/News, the subreddit where I'd submitted it.
Then I got a message from one of the moderators of r/News, saying that I'd been banned from it, and going on to say that I would likely be banned from the site as a whole. After having been on Reddit for seven years, that took me by surprise. As best I can figure, the thread itself was reported to r/Spam by someone, and the automated system took over from there. Over the years, I've submitted links to my blog posts, and Reddit, or some parts of it, anyway, has been notoriously touchy about that. The last time I submitted such a link, though, was back in February (and before that, August of 2013), so I'm not exactly a human spam-bot. We'll see what happens. Update: I was banned for some hours, but I've been reinstated.
But back to Retrophin, Thiola, and Martin Shkreli. The entire Reddit thread can still be read here, via a direct link, although it can't be found in r/News any more. If you look for a user named "martinshkreli", you can see where he gets into the fray (I'm "dblowe" on the site, or perhaps I was?). You'll note that he gives out his cell phone, office phone, and e-mail, which again is not your usual CEO move - you have to give him that, although it does seem a bit problematic from a regulatory/compliance angle. So what arguments does he make for the Thiola price increase?
From what I can see, they boil down to this: patients themselves aren't going to be paying this increased price - insurance companies are. And Retrophin is actually going to be working on new formulations for the drug, which no one has done previously. He seems to have implied that the previous company (Mission Pharmacal) was reluctant to raise the price and take the public outcry, and stated (correctly) that Mission was having trouble keeping the drug in supply. He claims that the current price is still "pretty low", and that he does not expect any pushback from the eventual payers. There was also quite a bit about the company's dedication to patients, their work on other rare diseases, and so on.
He and I didn't cross paths much in the thread. I tried asking a few direct questions, but they weren't picked up on, so my take on Shkreli's answers will show up here. He's correct that the drug's availability was erratic, and he may well be correct that its price was too low for a company to deal with it properly. But if so, that does make you wonder what Mission Pharmacal was up to, and how they were sourcing the material.
He's also correct that Retrophin is planning to work on new formulations of the drug. But when you look at the company's investor presentation about Thiola, all that comes under a slide marked "Distribution and Intellectual Property". The plan seems to be that they'll introduce 250mg and 500mg dosages, at which time they'll discontinue the current 100mg formulation. Later on, they'll try to introduce a time-release formulation, at which time they'll discontinue the 250mg and 500mg forms. You can argue that this is helping patients, but you can also argue that it's making it as difficult as possible for anyone else to show bioequivalence and enter the market as well, assuming that anyone wants to.
And as I mentioned yesterday, the company does seem to care about someone else entering the market. My questions to Shkreli about the "closed distribution" model mentioned on the company's slides went unanswered, but the only interpretation I can give them is that Retrophin plans to use the FDA's risk management system to deny any competitors access to their formulations, in order to try to keep themselves as the sole supplier of Thiola in perpetuity. Patents at least expire: regulatory rent-seeking is forever.
Also left out of Shkreli's comments on Reddit are the issues on the company's slide titled "Pharmacoeconomics", where it says (vis-a-vis the other drug for cystinuria, penicillamine):
Current pricing of Thiola® - $4,000 PPPY
– Penicillamine pricing: $80,000-$140,000
• Thiola could support a significant price increase
Personally, I think that's the main reason for Retrophin's interest. You'll note that the price hike takes Thiola's cost right up to the penicillamine region (the price of that one is another story all its own). But to a first approximation, that's business. I've defended some drug company pricing decisions on this site before (although not all of them), so what's different this time?
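The arithmetic behind "right up to the penicillamine region" is easy to check. Here's a back-of-the-envelope sketch; the daily pill counts are my own assumptions, since the post only says patients take several pills per day:

```python
# Rough annual Thiola cost before and after the price increase.
# Pill counts are hypothetical; the post says only "several pills per day".
old_price = 1.50   # $/pill under Mission Pharmacal
new_price = 30.00  # $/pill after Retrophin ("over $30 per pill")

for pills_per_day in (6, 10, 15):
    old_annual = old_price * pills_per_day * 365
    new_annual = new_price * pills_per_day * 365
    print(f"{pills_per_day:>2} pills/day: "
          f"${old_annual:>8,.0f} -> ${new_annual:>10,.0f} per year")
```

At roughly 7-10 pills a day, the old price works out to the ~$4,000 PPPY on Retrophin's slide, and the new price pushes the annual cost up toward penicillamine's $80,000-$140,000 band.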
I've been thinking hard about that one, and here's what I have so far. I think that pricing power of this sort is a powerful weapon. That's the reason for the patent system - you get a monopoly on selling your invention, but it's for a fixed period only, and in return you disclose what your invention is so that others can learn from it. And I think that this sort of pricing power should be a reward for actually producing an invention. That's the incentive for going through all the work and expense, the (time-limited) pot of gold at the end of the rainbow. I have a much lower opinion of seeing someone ram through a big price increase just because, hey, they can. Thiola has nothing to do with the patent system - it's off patent. What this situation looks like to me is regulatory rent-seeking. Celgene seems to be doing that too, with thalidomide (as mentioned yesterday), which is why they're being taken to court. Retrophin is betting that Thiola just isn't a big enough deal for anyone to go to that trouble, once they tell them to buzz off by using Celgene's strategy.
Businesses can, though, charge what they think the market will bear, and Retrophin's contribution to cystinuria therapy so far is to have realized that the market will bear a lot more than people had realized. But in an actual market, it would be easier for someone else to come in and compete on price. What Retrophin is planning is to use regulatory loopholes to keep anyone else from doing so, with no time limit until someone at the FDA does something about it. Cloaking this in a lot of noble-sounding talk about being the company that really cares about cystinuria patients is a bit stomach-turning. In my opinion.
Category: Business and Markets | Drug Prices | Why Everyone Loves Us
September 11, 2014
There's a drug called Thiola (tiopronin) that most people have never heard of. It's on my list of "smaller than aspirin" drugs, and I'd never heard of it until I put that one together. But thanks to a little company called Retrophin, we all get to hear about it now.
It's used to treat cystinuria, a rare disease that causes painful kidney complications, namely unusual kidney stones of pure cystine. And until recently, tiopronin (as a small, nearly forgotten drug for an orphan indication) was rather cheap. It was sold by a small company in Texas, Mission Pharmacal, until Retrophin bought the marketing rights earlier this year (a move complicated by the company's CEO, investor Martin Shkreli, who may have let the news of the deal leak on his Twitter account).
That link mentions part of Shkreli's business plan as "acquiring the rights to obsolete remedies Shkreli says can be put to new and lucrative purposes", and by gosh, that's certainly accurate. Retrophin is increasing the price of Thiola from $1.50 per pill to over $30 per pill. Because they can - they stated when they bought the drug that their first move would be to raise the price. New dosages and formulations are also mentioned, but the first thing is to jack the price up as high as it can be jacked. Note that patients take several pills per day. Shkreli is probably chortling at those Mission Pharmacal hicks who didn't realize what a gold mine they were sitting on.
Now, there have been somewhat similar cases in recent years. Colchicine's price went straight up, and (infamously) so did the progesterone formulation marketed as Makena. But in both those cases, the small companies involved took the compound back through the FDA, under an agency-approved program to get marketing exclusivity. I've argued here (see those last two links) that this idea has backfired several times, and that the benefit from the clinical re-evaluation and re-approval of these drugs has not been worth their vastly increased cost. I think that drug companies should be able to set the price of their drugs, because they have a lot of failures to make up for, but this FDA loophole gives people a chance to do minimal development at minimal risk and be handed a license to print money in return.
But this isn't even one of those cases. It's worse. Retrophin hasn't done any new trials, and they haven't had to. They've just bought someone else's old drug that they believed could be sold for twenty times its price, and have put that plan right into action. No development costs, no risks whatsoever - just slap a new sticker on it and put your hands over your ears. This is exactly the sort of thing that makes people go into fist-clenching rages about the drug industry, and with damn good reason. This one enrages me, and I do drug research for a living.
So thank you, Martin Shkreli. You've accelerated the progress of the giant hammer that's coming down on all of us over drug pricing, and helped drag the reputation of the pharmaceutical industry even further into the swamp. But what the hell do you care, right? You're going to be raking in the cash. The only thing I can say about Shkreli and Retrophin is that they make the rest of the industry look good in comparison. Some comparison.
Update: There are some interesting IP aspects to this situation. As pointed out in the comments section, this compound has no exclusivity left and is off patent. So what's to stop someone else from filing an ANDA, showing bioequivalence, and competing on price (since there seems to be an awful lot of room in there)?
Simon Lackner on Twitter sent me to this presentation from Retrophin on their purchase of the Thiola license. In it, you can see their plan for this: "Similar to Chenodal, Retrophin will move Thiola into closed distribution". Chenodal was the company's previous brainstorm of this sort, when they bought Manchester Pharmaceuticals, details of which can be seen on this presentation. What they say on that one is "Closed distribution system does not allow for generics to access product for bioequivalence study. ANDA filings are impossible unless generic company illegally penetrates specialty distributor. Recent Celgene v. Lannett case establishes precedent." So let's go back and take a look at Celgene v. Lannett.
That was a long-running dispute between the two companies over Lannett's desire to market a generic equivalent of Celgene's thalidomide. Lannett brought suit, accusing Celgene of using the drug's Risk Evaluation and Mitigation Strategy (REMS) improperly to deny potential competitors access to their product (which is needed to do a head-to-head comparison for an ANDA filing). As you can imagine, the REMS for thalidomide is pretty extensive and detailed! But there was no court decision in the case. The companies reached an out-of-court settlement before it went to trial in 2012, although I have to say that that Retrophin slide makes it sound like there's some sort of legal precedent that was set. There wasn't. The limits of REMS restrictions to deny access to a given drug are still an open question.
In late 2012, Actelion and Apotex went at it over the same issue, this time over access to Tracleer (bosentan). The Federal Trade Commission filed an amicus brief, warning that companies could be abusing the REMS process to keep out competition. That case was also dismissed, though, after the two companies reached an out-of-court settlement of their own, removing another chance for a legal opinion on the subject.
But the issue is very much alive. Earlier this year, Mylan went after Celgene, also over thalidomide (and its follow-up, lenalidomide). Their complaint:
Celgene, a branded drug manufacturer, has used REMS as a pretext to prevent Mylan from acquiring the necessary samples to conduct bioequivalence studies, even after the FDA determined that Mylan’s safety protocols were acceptable to conduct those studies. In furtherance of its scheme to monopolize and restrain trade, Celgene implemented certain distribution restrictions that significantly limit drug product availability.
And this is the plan that Retrophin has in mind - they say so quite clearly in those two presentations linked above. What their presentations don't go into is that this strategy has been under constant legal attack. They also don't go into another issue: the use of REMS at all. Thalidomide, of course, is under all kinds of restrictions and has plenty of hideous risks to manage. Bosentan's not exactly powdered drink mix, either - patients require monthly liver function tests (risk of hepatotoxicity) and monitoring of their hematocrit (risk of anemia). But what about Thiola/tiopronin? It's not under any risk management restrictions that I can see. Its side effects seem to be mainly diarrhea and nausea, which does not put it into the "This drug is so dangerous that we can't let any generic company get ahold of our pills" category. So how is Retrophin going to make this maneuver work?
Update: more on this issue here.
Category: Business and Markets | Drug Prices | Why Everyone Loves Us
September 10, 2014
Bizarre news from Evotec - see what you make of this press release:
Evotec AG was informed that US company Hyperion Therapeutics, Inc. ("Hyperion") is terminating the development of DiaPep277(R) for newly diagnosed Type 1 diabetes.
In a press release published by Hyperion on 08 September 2014 at market opening in the US, the company states that it has uncovered evidence that certain employees of Andromeda Biotech, Ltd. ("Andromeda"), which Hyperion acquired in June 2014, engaged in serious misconduct, involved with the trial data of DiaPep277. Hyperion announced that it will complete the DIA-AID 2 Phase 3 trial, but will terminate further development in DiaPep277.
Here's the Hyperion press release, and it details a terrible mess:
The company has uncovered evidence that certain employees of Andromeda Biotech, Ltd., which Hyperion acquired in June 2014, engaged in serious misconduct, including collusion with a third-party biostatistics firm in Israel to improperly receive un-blinded DIA-AID 1 trial data and to use such data in order to manipulate the analyses to obtain a favorable result. Additional evidence indicates that the biostatistics firm and certain Andromeda employees continued the improper practice of sharing and examining un-blinded data from the ongoing DIA-AID 2 trial. All of these acts were concealed from Hyperion and others.
The Company has suspended the Andromeda employees known to be involved, is notifying relevant regulatory authorities, and continues to investigate in order to explore its legal options. Hyperion employees were not involved in any of the improper conduct.
What a nightmare. All biomedical data are vulnerable to outright fraud, and it gives a person the shivers just thinking about it. I can only imagine the reactions of Hyperion's management when they heard about this, and Evotec's when Hyperion told them about it. What, exactly, the Andromeda people (and the third-party biostatistics people) thought they were getting out of this is an interesting question, too - did they hope to profit if the company announced positive results? That's my best guess, but I'm not sleazy enough (I hope) to think these things through properly.
Category: Business and Markets | Clinical Trials | The Dark Side
I'd seen various solventless reactions between solid-phase components over the years, but never tried one until now. And I have to say, I'm surprised and impressed. I can't quite say which literature reference I'm following, unfortunately, because it might conceivably give someone a lead on what I'm making at the moment, but it's a reference that I found as a new technique for an old reaction. Doing it in solution gives you a mess, but just grinding up the two solid reactants and the reagent, in a mortar and pestle, gives you a very clean conversion. The stuff turns into a sort of ugly clay inside the mortar, but looks are deceiving. I feel like an alchemist. Consider me a convert to the solventless lifestyle - I'll try this again on some other reaction classes when I get the chance. Anyone else ever ground up some solids and made a new product?
Category: Life in the Drug Labs
Retraction Watch has a rare look behind the peer review curtain in the (now notorious) case of the STAP stem cell controversy. This was the publication that claimed that stem-like cells could be produced by simple acid treatment, and this work has since been shown to be fraudulent. Damaged reputations, bitter accusations, and one suicide have been the result so far, and there are still bent hubcaps wobbling around on the asphalt.
The work was published in Nature, but it had been rejected from Science and Cell before finding a home there. That's not unusual in itself - a lot of groundbreaking work has had a surprisingly difficult time getting published. But the kinds of referee reports this got were detailed, well-argued, and strongly critical, which makes you wonder what Nature's reviewers said, and how the work got published in the form it did, with most (all?) of the troublesome stuff left in.
Retraction Watch has obtained the complete text of the referee comments from the Science submission process and published them. Here are some highlights from just the first reviewer:
. . .This is such an extraordinary claim that a very high level of proof is required to sustain it and I do not think this level has been reached. I suspect that the results are artifacts derived from the following processes: (1) the tendency of cells with GFP reporters to go green as they are dying. (2) the ease of cross contamination of cell lines kept in the same lab. . .
. . .The DNA analysis of the chimeric mice is the only piece of data that does not fit with the contamination theory. But the DNA fragments in the chimeras don’t look the same as those in the lymphocytes. This assay is not properly explained. If it is just an agarose gel then the small bands could be anything. Moreover this figure has been reconstructed. It is normal practice to insert thin white lines between lanes taken from different gels (lanes 3 and 6 are spliced in). Also I find the leading edge of the GL band suspiciously sharp in #2-#5. . .
This report and the other two go on to raise a long list of detailed, well-informed criticisms about the experimental design of the work and the amount of information provided. Solutions and reagents are not described in enough detail, images of the cells don't quite show what they're supposed to be showing, and numerous useful controls and normalizations are missing outright. The referees in this case were clearly very familiar with stem cell protocols and behavior, and they did exactly what they were supposed to do with a paper whose claims were as extraordinary as these were.
Had any of this stuff been real, meeting the objections of the reviewers would have been possible, and would have significantly improved the resulting paper. This process, in fact, handed the authors a list of exactly the sorts of objections that the scientific community would raise once the paper did get published. And while rejections of this sort are not fun, that's just what they're supposed to provide. Your work needs to be strong enough to stand up to them.
Congratulations to the Science and Cell editorial teams (and reviewers) for not letting this get past them. I would guess that publication of these reports will occasion some very painful scenes over at Nature, though - we'll see if they have any comment.
Category: The Dark Side | The Scientific Literature
September 9, 2014
Google's Calico venture, the company's out-there move into anti-aging therapy, has made the news by signing a deal with AbbVie (the company most of us will probably go on thinking of as Abbott). That moves them into the real world for sure, from the perspective of the rest of the drug industry, so it's worth taking another look at them. (It's also worth noting that Craig Venter is moving into this area, too, with a company called Human Longevity. Maybe as the tech zillionaires age we'll see a fair amount of this sort of thing).
On one level, I applaud Google's move. There's a lot of important work to be done in the general field of aging, and there are a lot of signs that human lifespan can be hacked, for want of a better word. The first thought some people have when they think of longer lifespan is that it could be an economic disaster. After all, a huge percentage of our healthcare money is already spent in the last years of life as it is - what if we make that period longer still? But it's not just sheer lifespan - aging is the motor behind a lot of diseases, making them more likely to crop up and more severe when they do. The dream (which may be an unattainable one) is for longer human lifespans, in good health, without the years of painful decline that so many people experience. Even if we can't quite manage that, an improvement over the current state of things would be welcome. If people stay productive longer, and spend fewer resources on disabling conditions as they age, we can come out ahead on the deal rather than wondering how we could possibly afford it.
Google and AbbVie are both putting $250 million into starting a research site somewhere in the Bay Area (and given the state of biotech out there, compared to a few years ago, it'll be a welcome addition). If things go well, each of them has also signed up to contribute as much as $500 million more to the joint venture, but we'll see if that ever materializes. What, though, are they going to be doing out there?
Details are still scarce, but FierceBiotechIT says that "a picture of an IT-enabled, omics-focused operation has emerged from media reports and early hiring at the startup". That sounds pretty believable, given Google's liking for (and ability to handle) huge piles of data. It also sounds like something that Larry Page and Sergey Brin would be into, given their past investments. But that still doesn't tell us much: any serious work in this area could be described in that fashion. We'll have to use up a bit more of our current lifespans before things get any clearer.
So I mentioned above that on one level I like this - what, you might be asking, is the other level on which I don't? My worry is what I like to call the Andy Grove Fallacy. I applied that term to Grove's "If we can improve microprocessors so much, what's holding you biotech people back?" line of argument. It's also a big part of the (in)famous "Can a Biologist Fix a Radio" article (PDF), which I find useful and infuriating in about equal proportions. The Andy Grove Fallacy is the confusion between man-made technology (like processor chips and radios) and biological systems. They're both complex, multifunctional, miniaturized, and made up of thousands and thousands of components, true. But the differences are more important than the similarities.
For one thing, human-designed objects are one hell of a lot easier for humans to figure out. With human-designed tech, we were around for all the early stages, and got to watch as we made all of it gradually more and more complicated. We know it inside out, because we discovered it and developed it, every bit. Living cells, well, not so much. The whole system is plunked right down in front of us, so the only thing we can do is reverse-engineer, and we most definitely don't have all the tools we need to do a good job of that. We don't even know what some of those tools might be yet. Totally unexpected things keep turning up as we look closer, and not just details that we somehow missed - I'm talking about huge important regulatory systems (like all the microRNA pathways) that we never even realized existed. No one's going to find anything like that in an Intel chip, of that we can be sure.
And that's because of the other big difference between human technology and biochemistry: evolution. We talk about human designs "evolving", but that's a very loose usage of the word. Real biological evolution is another thing entirely. It's not human, not at all, and it takes some time to get your head around that. Evolution doesn't do things the way that we would. It has no regard for our sensibilities whatsoever. It's a blind idiot tinkerer, with no shame and no sense of the bizarre, and it only asks two questions, over and over: "Did you live? Did you reproduce? Well, OK then." Living systems are full of all kinds of weird, tangled, hacked-together stuff, layer upon layer of it, doing things that we don't understand and can't do ourselves. There is no manual, no spec sheet, no diagram - unless we write it.
So people coming in from the world of things that humans built are in for a shock when they find out how little is known about biology. That's the shock that led to that Radio article, I think, and the sooner someone experiences it, the better. When Google's Larry Page is quoted saying things like this, though, I wonder if it's hit him yet:
One of the things I thought was amazing is that if you solve cancer, you’d add about three years to people’s average life expectancy. We think of solving cancer as this huge thing that’ll totally change the world. But when you really take a step back and look at it, yeah, there are many, many tragic cases of cancer, and it’s very, very sad, but in the aggregate, it’s not as big an advance as you might think.
The problem is, cancer - unrestrained cellular growth - is intimately tied up with aging. Part of that is statistical. If you live long enough, you will surely come down with some form of cancer, whether it's nasty enough to kill you or benign enough for you to die of something else. But another connection is deeper, because the sorts of processes that keep cells tied down so that they don't take off and try to conquer the world are exactly the ones, in many cases, that we're going to have to tinker with to extend our lifespans. There are a lot of tripwires out there, and many of them we don't even know about yet. I'd certainly assume that Larry Page's understanding of all this is deeper than gets conveyed in a magazine article, but he (and the other Google folks) will need to watch themselves as they go on. Hubris often gets rewarded in Silicon Valley - after all, it's made by humans, marketed to humans, and is rewarded by human investors. But in the biomedical field, hubris can sometimes attract lightning bolts like you would not believe.
Category: Aging and Lifespan | Business and Markets
September 8, 2014
Just how reactive are chemical functional groups in vivo? That question has been approached by several groups in chemical biology, notably the Cravatt group at Scripps. One particular paper from them that I've always come back to is this one, where they profiled several small electrophiles across living cells to see what they might pick up. (I blogged about a more recent effort in this vein here as well).
Now there's a paper out in J. Med. Chem. that takes a similar approach. The authors, from the Klein group at Heidelberg, took six different electrophiles, attached them to a selection of nonreactive aliphatic and aromatic head groups, and profiled the resulting 72 compounds across a range of different proteins. There are some that are similar to what's been profiled in the Cravatt papers and others (alpha-chloroketones), but others I haven't seen run through this sort of experiment at all.
And what they found confirms the earlier work: these things, even fairly hot-looking ones, are not all that reactive against proteins. Acrylamides of all sorts were found to be quite clean, with no inhibition across the enzymes tested, and no particular reaction with GSH in a separate assay. Dimethylsulfonium salts didn't do much, either (although a couple of them were unstable to the assay conditions). Chloroacetamides showed the most reactivity against GSH, but still looked clean across the enzyme panel. 2-bromodihydroisoxazoles showed a bit of reactivity, especially against MurE (a member of the panel), but no covalent binding could be confirmed by MS (must be reversible). Cyanoacetamides showed no reactivity at all, and neither did acyl imidazoles.
Now, there are electrophiles that are hot enough to cause trouble. You shouldn't expect clean behavior from an acid chloride or something, but the limits are well above where most of us think they are. If some of these compounds (like the imidazolides) had been profiled across an entire proteome, then perhaps something would have turned up at a low level (as Cravatt and Weerapana saw in that link in the first paragraph). But these things will vary compound by compound - some of them will find a place where they can sit long enough for a reaction to happen, and some of them won't. Here's what the authors conclude:
An unexpected but significant consequence of the present study is the relatively low inhibitory potential of the reactive compounds against the analyzed enzymes. Even in cytotoxicity assays and when we looked for inhibitor enzyme adduct formation we did not find any elevated cytotoxicity or unspecific modification of proteins. Particularly in the case of chloroacetylamides/-anilides and dimethylsulfonium salts, which we consider to be among the most reactive in this series, this is a promising result. From these results the following consequences for moderately reactive groups in medicinal chemistry can be drawn. Promiscuous reactivity and off-target effects of electrophiles with moderate reactivity may often be overestimated. It also does not appear justified to generally exclude “reactive” moieties from compound libraries for screening purposes, since the nonspecific reactivity may turn out to be much inferior than anticipated.
There are a lot of potentially useful compounds that most of us have never thought of looking at, because of our own fears. We should go there.
+ TrackBacks (0) | Category: Chemical Biology | Chemical News | Drug Assays
September 5, 2014
Here's what sounds like a good idea from VC firm Index Ventures, from the latest issue of BioCentury (same one I referenced the other day). Like many others in the biopharma venture capital world, they're trying to run the "killer experiment" as soon as possible, to see which ideas for new companies look solid. Unlike the others, though, they're planning a web site where they will detail the successes - and the failures. Here's an example:
Founded in 2013, (Purple Pharmaceuticals) was started to identify small molecule inhibitors of proprotein convertase subtilisin/kexin type 9. Two mAbs against PCSK9 are already in Phase III testing to treat hypercholesterolemia with regulatory submissions expected this year: evolocumab from Amgen Inc. and alirocumab from partners Regeneron Pharmaceuticals Inc. and Sanofi.
Grainger said the antibodies have limitations, as they require high doses to suppress PCSK9 activity and once-weekly or once-monthly infusions. Thus a pill that could match the PCSK9 inhibition of the biologics could be “the holy grail” of lowering LDL cholesterol.
Purple began by trying to identify small molecules that were highly selective for PCSK9 over other proprotein convertases because, as Grainger noted, “PCSK9 is a member of a large family of enzymes that do some pretty critical things.”
The killer experiment, he said, “was to ask could we make a small molecule that was selective over these other proprotein convertases, and could we demonstrate that it would lower LDL cholesterol?”
After a year, Purple had identified some hits selective for PCSK9, but a conversation with researchers at the Genentech Inc. unit of Roche led to the realization that the virtual company would need to run a second experiment.
“We learned from that interaction with Genentech that they had also run a PCSK9 screening program the same way we had,” Grainger said. “They discovered that their hits, while preventing PCSK9 from cleaving an external substrate, did not prevent PCSK9 from cleaving itself.”
Purple learned that in vivo, PCSK9 is auto-activated by cleaving itself — meaning the only important interaction to inhibit is PCSK9 auto-activation, not interactions with external substrates.
Purple’s second experiment showed that none of its small molecule hits that inhibited PCSK9 interaction with an external substrate also inhibited auto-activation.
“Therefore we were able to kill a project which had spent at that stage only about £300,000 over a year, only to discover at the critical moment that it didn’t have the profile that we wanted,” Grainger said. “We were able to terminate that without having created any infrastructure, without having spent a painful amount of money prosecuting the project.”
That story illustrates a number of points about drug discovery. First off, congratulations to those involved for being able to definitively test a hypothesis; that's the engine at the heart of all scientific research. And as they say, it was good to be able to do that without having spent too much money and time, because both of those have a way of getting a bit out of control as complication after complication gets uncovered. Investors start getting jumpy when you keep coming back to them saying "Well, you know, it turns out that. . . ", but you know, it often turns out that way.
The next thing this story shows is that when you see an obvious gap in the landscape, there may well be a good reason for it. PCSK9 antibodies are widely thought to be potential blockbusters; a huge battle is shaping up in that area. So why no small molecules, eh? That's the question that launched Purple, it seems, and it's a valid one. But it turns out to have a valid answer, one that others in the field had already discovered. I suspect that the people behind this effort were, at the same time they were characterizing their lead molecules, also beating all the bushes for the sort of information that they obtained from Genentech. Somebody must have tried small-molecule PCSK9 inhibitors, you'd think, so what happened? Were those projects abandoned for good reasons, or was there still some opportunity there that a new company could claim for itself?
There may well be more to this story, though, than the Index Ventures people are saying. Update: there is - see the end of this post. The autocatalytic cleavage of PCSK9 was already well-known - pretty much everything in that protease family works that way. (The difference is that with PCSK9, the prodomain part of the protein stays on longer - details of its cleavage were worked out in 2006). And in this 2008 paper from Journal of Lipid Research, we find this:
Several approaches for inhibiting PCSK9 function are theoretically feasible. Because autocatalytic cleavage is required for the maturation of PCSK9, a small-molecule inhibitor of autocatalysis might be useful, provided that it was specific for PCSK9 processing and did not lead to a toxic accumulation of misfolded PCSK9. Small molecules that block the PCSK9-LDL receptor interactions would likely be efficacious, although designing inhibitors of protein-protein interactions is a tall order. Antisense approaches pioneered by Isis Pharmaceuticals (Carlsbad, CA) are well suited for liver targets, and studies in mice suggest that this approach is efficacious for PCSK9. Finally, there is considerable interest in developing antibody therapeutics to inhibit PCSK9-LDL receptor interactions.
Even more to the point is the paper that that JLR piece is commenting on. That one demonstrates, through studies of mutated PCSK9 proteins, that its catalytic activity does not seem to be necessary at all for its effects on LDL receptors (a result that had already been suggested in cell assays). Taken together, you'd come away with the strong impression that inhibiting PCSK9's catalytic activity, other than stopping it from turning itself into its active form, had a low probability of doing anything to cholesterol levels. And you'd come away with that impression in 2008, at the latest.
So Purple's idea was a longer shot than it appeared on the surface, not that the real information was exactly buried deep in the literature. They shouldn't have needed someone at Genentech to tell them that PCSK9's autocatalysis was the real target - I've never worked in the area at all, and I found this out in fifteen minutes on PubMed while riding in to work. They must have had more reason to think that an assay for PCSK9's exogenous activity would be worth running - either that, or this story has gotten garbled along the way.
But this example aside, I applaud the idea of making these early-stage calls public. And I agree with the Index Ventures folks that this should actually help academics and others unused to drug discovery to see what needs to be done to actually launch an idea out into the world. I look forward to seeing the web site - and perhaps to hearing a bit more about what really happened at Purple.
Update: David Grainger of Index Ventures has more in the comments, and says that there is indeed more to the story. He points out that mutations of PCSK9 were found that inhibited its autocatalytic activity (such as this one), and that work had appeared that suggested that molecules that inhibited only the autocatalytic activity could be useful. This is what Purple was seeking - the BioCentury piece makes things sound a bit different (see above), but the problem seems to have been that molecules that inhibited PCSK9's activity against other substrates turned out not to inhibit its activity against itself. If I'm interpreting this right, then, Genentech's contribution was to point out that the autocatalytic activity couldn't be modeled by looking at another substrate.
+ TrackBacks (0) | Category: Business and Markets | Cardiovascular Disease | Drug Assays
September 4, 2014
A reader sends along this paper, on some small molecules targeting the C2 domain of coagulation factor VIII. It illustrates some points that have come up around here over the years, that's for sure. The target is not a particularly easy one: a hit would have to block the interaction of that protein domain with a membrane surface. There is something of a binding pocket down in that region, though, and there were some hits reported from a screen back in 2004. Overall, it looks like a lot of targets that show up, especially these days - you're trying to affect protein conformation by going after a not-necessarily-made-for-small-molecules cavity. Possible, but not something that's going to light up a screening deck, either.
And many of the things that do show up are going to be false positives of one sort or another. That's always the tricky part of doing low-hit-rate screening. The odds are excellent that any given "hit" will turn out not to be real, since the odds are against having any hits at all. This is especially a red flag when you screen something like this and you get a surprisingly decent hit rate. You should suspect fluorescence interference, aggregation, impurities, any of the other myriad ways that things can be troublesome rather than assume that gosh, this target is easier than we thought.
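That "any given hit is probably not real" argument is just base rates at work, and it's worth making concrete. Here's a quick positive-predictive-value sketch; all the numbers are made-up but plausible assumptions, not figures from any particular screen:

```python
# Illustrative positive-predictive-value (PPV) calculation for a
# low-hit-rate screen. Every number here is an assumption for
# illustration, not data from a real campaign.

def ppv(prevalence, sensitivity, false_positive_rate):
    """Probability that a flagged 'hit' is a true binder."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Suppose 1 in 10,000 library compounds truly binds a tough target,
# the assay catches 80% of true binders, and 0.5% of inactive
# compounds light up anyway (fluorescence, aggregation, impurities).
p = ppv(prevalence=1e-4, sensitivity=0.8, false_positive_rate=0.005)
print(f"{p:.1%}")  # → 1.6% — most of your 'hit list' is artifacts
```

Even a well-behaved assay, pointed at a target where real binders are rare, hands you a hit list that's overwhelmingly false positives - which is exactly why a surprisingly good hit rate should raise eyebrows rather than hopes.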
It's often a chemist who's in charge of dumping these buckets of cold water (if you have the help of the people who set up the assay, so much the better). Traditionally, it's one of the biology project champions who gets enthusiastic about the great list of compounds, but if you have someone who's been burned by false positives a few times, then so much the better, too. It's not fun to knock down all these "hits" and "leads", but someone's got to do it, otherwise everyone's time will be wasted to an even more painful extent.
And you should be especially worried when your screen turns up compounds like some of the ones in this paper. Yep, it's our old friends the rhodanines, everybody's cheap date of the screening deck. These compounds have come up around here many times, because they keep on showing up in the flippin' literature. In this case, the authors did some virtual screening over the ChemBridge collection and then moved on to assays against the protein itself, eventually finding a number of active compounds in the micromolar range. The compounds look a lot like the ones from 2004, since those were used as the template for screening, and that was a pretty ugly rhodanine-infested set, too.
Indeed, most of the compounds they found are pretty unattractive - the aforementioned rhodanines, lots of nitroaromatics, some other heterocycles that also hit more often than one would like. I would feel better about these sorts of papers if the authors acknowledged somewhere that some of their structures are frequent hitters and might be problematic, but you don't often see that: a hit is a hit, and everything's equally valid, apparently. I would also feel better if there were something in the experimental section about how all the compounds were assayed by LC/MS and NMR, but you don't often see that, either, and I don't see it here. Implicitly trusting the label is not a good policy. Even if the particular compounds are the right ones in this case, not checking them shows a lack of experience (and perhaps too trusting a nature where organic chemistry is concerned).
But let's cross our fingers and assume that these are indeed the right compounds. What does it mean when your screening provides you with a bunch of structures like this? The first thing you can say is that your target is indeed a low-probability one for small molecules to bind to - if most everything you get is a promiscuous-looking ugly, then the suspicion is that only the most obliging compounds in a typical screening collection will bother looking at your binding site at all. And that means that if you want something better, you're really going to have to dig for it (and dig through a mound of false positives and still more frequent hitters to find it).
Why would you want to do that? Aren't these tool compounds, useful to find out more about the biology and behavior of the target? Well, that's the problem. If your compounds are rhodanines, or from other such badly-behaved classes, then they are almost completely unsuitable as tool compounds. You especially don't want to trust anything they're telling you in a cellular (or worse, whole-animal) assay, because there is just no telling what else they're binding to. Any readout from such an assay has to be viewed with great suspicion, and what kind of a tool is that?
Well then, aren't these starting points for further optimization? It's tempting to think so, and you can give it a try. But likely as not, the objectionable features are the ones that you can't get rid of very easily. If you could ditch those without paying too much of a penalty, you would have presumably found more appealing molecules in your original screen and skipped this stage altogether. You might be better off running a different sort of screen and trying for something outside of these classes, rather than trying to synthesize a silk purse out of said sow's ear. If you do start from such a structure, prepare for a lot of work.
As mentioned, the problem with a lot of papers that advance such structures is that they don't seem to be aware of these issues at all. If they are, they certainly don't bring them up (which is arguably even worse). Then someone else comes along, who hasn't had a chance to learn any of this yet, either, and reads the paper without coming out any smarter. They may, in fact, have been made slightly less competent by reading it, because now they think that there are these good hits for Target Z, for one thing, and that the structures shown in the paper must be OK, because here they are in this paper, with no mention of any potential problems.
The problem is, there are a lot of interesting targets out there that tend to yield just these sorts of hits. My own opinion is that you can then say that yes, this target can (possibly) bind a small molecule, if those hits are in fact real, but just barely. If you don't even pick up any frequent hitters, you're in an even tougher bind, but if all you pick up are frequent hitters, it doesn't mean that things are that much easier.
+ TrackBacks (0) | Category: Academia (vs. Industry) | Drug Assays | The Scientific Literature
September 3, 2014
A reader sends along this query, and since I've never worked around monoclonal antibodies, I thought I'd ask the crowd: how much of a read on safety do you get with a mAb in Phase I? How much Phase I work would one consider necessary before feeling safe going on to Phase II, from a tox/safety standpoint? Any thoughts are welcome. I suspect the answer is going to depend greatly on what said antibody is being raised to target.
+ TrackBacks (0) | Category: Drug Development | Toxicology
I always enjoy BioCentury's "Back to School" issue this time of year, and this time they're being more outspoken than usual. (That link is free access). The topic is pricing:
Last year, (we) argued biopharma companies can no longer assume the market will support premium pricing, even for drugs that deliver meaningful and measurable improvements over the standard of care.
This year, BioCentury’s 22nd Back to School essay goes on to argue that the last bastion of free pricing is crumbling, and biotech and pharma had better start experimenting with new pricing models based on value for money while they still have the chance.
The wake-up call was the launch of Sovaldi sofosbuvir from Gilead Sciences Inc.
Payers, reimbursement authorities and health technology assessment agencies almost universally — with the exception of Germany — acknowledge the drug is a breakthrough for patients with HCV.
At $84,000, the drug is clearly cost effective for a subset of HCV patients who would otherwise progress to expensive sequelae such as liver transplant. But its broad indication includes a majority of patients whose disease won’t progress to the point of costly interventions. And doing the math makes it obvious that treating even a fraction of eligible patients would be a staggering sum for payers to absorb.
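That "staggering sum" is easy to check on the back of an envelope. The $84,000 course price is from the excerpt above; the patient counts and treated fraction below are my own illustrative assumptions (estimates of the US chronic HCV population vary):

```python
# Back-of-the-envelope cost of broad Sovaldi coverage.
# Course price is from the text; patient numbers are assumptions.

course_price = 84_000          # USD per course (from the text)
eligible_patients = 3_000_000  # assumed US chronic HCV population
treated_fraction = 0.10        # treat even just 10% in a year

total = course_price * eligible_patients * treated_fraction
print(f"${total / 1e9:.0f} billion")  # → $25 billion
```

Tens of billions of dollars a year for a single drug, under even conservative uptake assumptions - that's the arithmetic that got the payers' attention.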
What Gilead has done - thanks, guys - is to accelerate a number of trends that were already looking like trouble.
With Sovaldi as the stimulus, government officials, payers, reimbursement authorities and patient groups are fighting back against high drug prices with renewed vigor. For these stakeholders, biopharma’s arguments that drug developers must be compensated for the cost and risk of creating medical breakthroughs don’t hold water.
The easiest response of payers and consumers to industry’s argument is: not my problem.
Far worse, biopharma’s historical arguments about the cost and risk of drug development are giving ammunition to academics, legislators, health technology assessment bodies and payers to argue that the costs of developing and manufacturing drugs plus a “reasonable” margin should be the basis for price.
Industry needs to wrest the discussion away from a cost-plus system that would essentially turn biopharmaceutical companies into utilities, cutting off the lifeblood of innovation.
We seem to be too busy testing the limits of what insurance will pay for to worry about that right now, unfortunately. As the essay goes on to show, companies (Gilead and Alexion, for starters) are getting requests from regulators and legislators to provide an exact breakdown of just what it cost to develop their latest drugs. Demonstrate to us, in other words, that your pricing is justified. The next step beyond that is for these authorities to disagree with the numbers and their interpretation, and to suggest - and then enforce - their own. And I'm pretty sure that the industry would rather avoid that.
Cost-plus pricing places no value on the benefits provided by medicines and eliminates the incentives for biopharma industry innovation and for risk-taking in poorly understood diseases where many failures are likely.
The right question is not how much does R&D cost, but how to measure the benefit to the patient, payer and society; how to value that benefit over time; and how to distribute the risk should the expected benefit not be realized.
The answers are not obvious, and many approaches will need to be tested. Undoubtedly, in many settings the current systems for data collection, coding and reimbursement are not adequate to the task.
But that is no excuse for inaction. The current system of drug pricing and reimbursement is unsustainable and will be fixed — with or without the industry’s participation.
BioCentury suggests several things that should be looked at. First of all would be pricing per course of treatment, rather than per unit dose. That brings the spotlight more on what the drug is supposed to be accomplishing - and if that also spotlights some of them that aren't accomplishing as much as they're supposed to be, well, so be it. The industry should also consider risk-sharing arrangements, to take on more of the downside if a drug doesn't work as well as anticipated, with the opportunity to pick up more gains if it exceeds expectations. Another idea would be pricing models where the payments are spread out over the time that the drug benefits a patient, rather than all of it being up front. In general, we need to make the connection between new drugs and their benefits easier to see, which in turn makes their pricing easier to see.
And if some of those prices turn out to be too high, well, that's our problem. We have to be ready to accept it when our drugs don't work as well for some conditions as we'd like. Only if we can do that can we turn around and charge the higher prices for the ones that are truly effective. I've made the argument many times that companies, not just drug companies, should be able to charge what they want to for their goods. And in the abstract, that's true. But in the world we live in, politics will intrude, big-time, if the drug industry tries to always extract the maximum revenue for everything, every time.
The downside for biopharma companies would be lower prices for drugs that provide incremental or modest benefits. But that reality is coming one way or another. The upside is a better shot at continued premium prices for real breakthroughs — although probably not as high as historical premiums — plus the potential for preferred formulary placement and earlier market access for many drugs.
But it's a tragedy-of-the-commons situation, because even though some of these ideas for different pricing, and even the calls for restraint, may well be in the industry's best interests, individual companies look at each other and say "You first". But as the old political saying has it, if you're not at the table, then you're on the menu.
+ TrackBacks (0) | Category: Drug Prices
September 2, 2014
I wanted to let readers know of a fun new book that's out this week. Randall Munroe, of webcomic XKCD fame, has written What If?: Serious Scientific Answers to Absurd Hypothetical Questions. There are a lot of truly odd ones in there, and he takes them on as best he can. I'm glad to say that I'm quoted in the chapter on "What would happen if you made a periodic table of cube-shaped bricks, where each brick was made of the corresponding element?" (That should give you an idea of the sorts of questions that come in to him; it makes my mail look fairly sane by comparison). And no, you wouldn't want to do that one - consider astatine and francium, for starters.
+ TrackBacks (0) | Category: Book Recommendations
There is some good news from the clinic today. Novartis reported data on LCZ696, a combination therapy for congestive heart failure, and the results have really grabbed a lot of attention. (The trial had been stopped early back in March, so the news was expected to be good). This is a combo of the angiotensin II antagonist valsartan and a neprilysin (neutral endopeptidase) inhibitor, AHU-377.
Compared to enalapril, the standard ACE inhibitor therapy for CHF, the Novartis combo lowered the risk of cardiovascular death by 20% and the risk of hospitalization by 21%, while having at least as good a safety profile as the generic ACE drug. Those are powerful arguments for the company to make, both to physicians and to insurance payers, so the future of the therapy, barring any sudden misfortunes, looks assured. There's not a lot that you can do for people with congestive heart failure as it is, and this looks like a real advance.
As Matthew Herper mentions, though, this isn't the first time that a similar combination has been tried in CHF. A few years ago, Bristol-Myers Squibb had a major failure with a single drug that inhibited both the ACE and neprilysin enzyme pathways, Vanlev (omapatrilat). That compound had a persistent problem with angioedema, as detailed here, and that led to its eventual rejection by the FDA on risk/benefit grounds, after a great deal of expensive Phase III work. Back in 2002, in the early days of this blog, I predicted that no ACE/endopeptidase combination would ever see the light of day again, which shows you how much I know about it. But I wasn't alone, that's for sure. It's very interesting and surprising that LCZ696 has worked out as well as it has, and it's a very worthwhile question to wonder what the difference could have been. Balance between the two pathways? Having a receptor antagonist on the angiotensin end rather than an enzyme inhibitor? Whatever it was, it seems to have done the trick.
The only question I have about the new combo is how it would compare to an ACE/diuretic combination, which (from what I know) is also a standard course of therapy for CHF patients. On the other hand, you'd expect that a diuretic might also be added on top of LCZ696 treatment - a diuretic was shown to be combinable with omapatrilat, and the mechanisms are all distinct.
And one other point - I always make this one in these kinds of situations. I'm willing to bet that critics of the drug industry, who like to go on about "me-too" drugs and lazy industrial research efforts, would have had LCZ696 on the list of eye-rolling follow-up drugs (that is, if they'd been paying attention at all). I mean, the angiotensin pathway is thoroughly covered by existing drugs, and neprilysin/NEP has been targeted before, too (both by omapatrilat and by Pfizer's so-called "female Viagra", UK-414,495). But there's an awful lot we don't know about human medicine, folks.
Update: here's a deep look at the IP and patent situation around the combo.
Update 2: and here's a detailed exchange about the way the trial was conducted and the drug's possible impact.
+ TrackBacks (0) | Category: "Me Too" Drugs | Cardiovascular Disease | Clinical Trials
Exelixis is a company with a very interesting history, but that's in the sense of "much rather read about it than experience it", like the interesting parts in a history book. At one point they had a really outsized pipeline of kinase inhibitors, to the point where it could be hard to keep track of everything, but these projects have largely blown up over the last few years. Big collaboration deals have been wound down, compounds have been returned to them, and so on.
Most recently, the company has been developing cabozantinib for prostate cancer. Along the way (2011) they had a dispute with the FDA about clinical trial design - the company had a much speedier surrogate endpoint in mind, but the agency wasn't having it. At this point, there are enough options in that area to make overall survival the real endpoint that matters, and the FDA told them to go out and get that data instead of messing around with surrogates. So the company plowed ahead, and yesterday announced Phase III results. They weren't good. The compound showed some effects in progression-free survival (PFS), but seems to have no benefit in the longer-running overall survival (OS) measurement. And that one's the key.
There's no way to put a good spin on it, either. The same press release that announced the results also announced that the company was going to have to "initiate a significant workforce reduction" in order to make it through the two other ongoing cabozantinib trials (for renal cell carcinoma and advanced hepatocellular carcinoma). Exelixis has had some pretty brutal workforce reductions over the years already, so this would appear to be cutting down as far as things can be cut (from 330 employees down to 70). And those two remaining indications are tough ones, too - if the compound shows efficacy, it'll be very good news, but those are not the first battlefields you'd choose to fight on. The prostate results don't offer much room for optimism, but on the other hand, the compound has orphan drug status for medullary thyroid cancer, for which it has shown real benefit in a disease that otherwise has no real treatment at all.
So Exelixis will try to stay alive long enough to get through these last trials, and if nothing comes up there, I'd have to think that this will be it for them. You wouldn't have predicted this back in about 2002, but you can't predict anything important in this industry to start with.
+ TrackBacks (0) | Category: Cancer | Clinical Trials
August 29, 2014
I'm going to be taking an extra day of vacation before the kids start back to school, so I'm adding to the Labor Day weekend today. Blogging will resume on Tuesday, unless something gigantic happens before then. If I can come up with something appropriate, maybe I'll put up a recipe!
+ TrackBacks (0) | Category: Blog Housekeeping
August 28, 2014
Here's a short video history of the FDA, courtesy of BioCentury TV. The early days, especially Harvey Wiley and the "Poison Squad", are truly wild and alarming by today's standards. But then, the products that were on the market back then were pretty alarming, too. . .
+ TrackBacks (0) | Category: Drug Industry History | Regulatory Affairs
A reader has sent along the question: "Have any repurposed drugs actually been approved for their new indication?" And initially, I thought, confidently but rather blankly, "Well, certainly, there's. . . and. . .hmm", but then the biggest example hit me: thalidomide. It was, infamously, a sedative and remedy for morning sickness in its original tragic incarnation, but came back into use first for leprosy and then for multiple myeloma. The discovery of its efficacy in leprosy, specifically erythema nodosum leprosum, was a complete and total accident, it should be noted - the story is told in the book Dark Remedy. A physician gave a suffering leprosy patient the only sedative in the hospital's pharmacy that hadn't been tried, and it had a dramatic and unexpected effect on their condition.
That's an example of a total repurposing - a drug that had actually been approved and abandoned (and how) coming back to treat something else. At the other end of the spectrum, you have the normal sort of market expansion that many drugs undergo: kinase inhibitor Insolunib is approved for Cancer X, then later on for Cancer Y, then for Cancer Z. (As a side note, I would almost feel like working for free for a company that would actually propose "insolunib" as a generic name. My mortgage banker might not see things the same way, though). At any rate, that sort of thing doesn't really count as repurposing, in my book - you're using the same effect that the compound was developed for and finding closely related uses for it. When most people think of repurposing, they're thinking about cases where the drug's mechanism is the same, but turns out to be useful for something that no one realized, or those times where the drug has another mechanism that no one appreciated during its first approval.
Eflornithine, an ornithine decarboxylase inhibitor, is a good example - it was originally developed as a possible anticancer agent, but never came close to being submitted for approval. It turned out to be very effective for trypanosomiasis (sleeping sickness). Later, it was approved for slowing the growth of unwanted facial hair. This led, by the way, to an unfortunate and embarrassing period where the compound was available as a cream to improve appearance in several first-world countries, but not as a tablet to save lives in Africa. Aventis, as they were at the time, partnered with the WHO to produce the compound again and donated it to the agency and to Doctors Without Borders. (I should note that, with a molecular weight of 182, eflornithine just barely missed my no-larger-than-aspirin cutoff for the smallest drugs on the market).
Drugs that affect the immune system (cyclosporine, the interferons, anti-TNF antibodies, etc.) are in their own category for repurposing, I'd say. They've had particularly broad therapeutic profiles, since that's such a nexus for infectious disease, cancer, inflammation and wound healing, and (naturally) autoimmune diseases of all sorts. Orencia (abatacept) is an example of this. It's approved for rheumatoid arthritis, but has been studied in several other conditions, and there's a report that it's extremely effective against a common kidney condition, focal segmental glomerulosclerosis. Drugs that affect the central or peripheral nervous system also have Swiss-army-knife aspects, since that's another powerful fuse box in a living system. The number of indications that a beta-blocker like propranolol has seen is enough evidence on its own!
C&E News did a drug repurposing story a couple of years ago, and included a table of examples. Some others can be found in this Nature Reviews Drug Discovery paper from 2004. I'm not aware of any new repurposing/repositioning approvals since then, but there's an awful lot of preclinical and clinical activity going on.
+ TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Industry History | Regulatory Affairs
August 27, 2014
Here is the updated version of the "smallest drugs" collection that I did the other day. Here are the criteria I used: the molecular weight cutoff was set, arbitrarily, at aspirin's 180. I excluded the inhaled anaesthetics, only allowing things that are oils or solids in their form of use. As a small-molecule organic chemist, I only allowed organic compounds - lithium and so on are for another category. And the hardest one was "Must be in current use across several countries". That's another arbitrary cutoff, but it excludes pemoline (176), for example, which has basically been removed from the market. It also gets rid of a lot of historical things like aminorex. That's not to say that there aren't some old drugs on the remaining list, but they're still in there pitching (even sulfanilamide, interestingly). I'm sure I've still missed a few.
What can be learned from this exercise? Well, take a look at those structures. There sure are a lot of carboxylic acids and phenols, and a lot more sulfur than we're used to seeing. And pretty much everything is polar, very polar, which makes sense: if you're down in this fragment-sized space, you've got to be making some strong interactions with biological targets. These are fragments that are also drugs, so fragment-based drug discovery people may find this interesting as the bedrock layer of the whole field.
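That molecular-weight cutoff, by the way, is nothing more than arithmetic over a molecular formula, so it's easy to sanity-check without any cheminformatics software. Here's a minimal pure-Python sketch - the 180 Da benchmark and the drug names are from the post, but the little formula parser and the textbook formulas are my own illustration:

```python
import re

# Average atomic weights (standard IUPAC values, rounded)
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}

def mol_weight(formula: str) -> float:
    """Average molecular weight from a simple formula string like 'C9H8O4'."""
    total = 0.0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if element:  # skip the zero-width match at the end of the string
            total += ATOMIC_WEIGHTS[element] * (int(count) if count else 1)
    return total

# Three drugs mentioned in the post, with their standard formulas
drugs = {"aspirin": "C9H8O4", "metformin": "C4H11N5", "sulfanilamide": "C6H8N2O2S"}

CUTOFF = mol_weight("C9H8O4")  # aspirin itself sets the bar, ~180.2 Da
for name, formula in drugs.items():
    mw = mol_weight(formula)
    print(f"{name:>14}: {mw:6.1f} Da ({'within' if mw <= CUTOFF else 'over'} the cutoff)")
```

Metformin comes out around 129 Da and sulfanilamide around 172 Da, both comfortably under aspirin; for running this kind of filter over a whole screening collection you'd reach for a real toolkit (RDKit or the like) rather than a regex, of course.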
Some of these are pretty specialized and obscure - you're only going to see pralidoxime if you have the misfortune to be exposed to nerve gas, for example. But there are some huge, huge compounds on the list, too, gigantic sellers that have changed their whole therapeutic areas and are still in constant use. Metformin alone is a constant rebuke to a lot of our med-chem prejudices: who among us, had we never heard of it, would not have crossed it off our lists of screening hits? So give these small things a chance, and keep an open mind. They're real, and they can really be drugs.
+ TrackBacks (0) | Category: Chemical News | Drug Industry History
What scientific journals can you not be bothered to keep up with? I know, sometimes it's tempting to answer "all of them", but a well-informed chemist really should watch what comes out in the better ones. But how about the not-so-better ones? The "Life's too short" ones? Reading journals by RSS gives a person some perspective on signal-to-noise.
One problem is that Elsevier's RSS feeds are sort of perpetually hosed. Are they working now? I haven't checked in a while, because I finally gave up on them. And that means that I don't regularly look at Tetrahedron Letters or Bioorganic and Medicinal Chemistry Letters, even though (once in a while) something interesting turns up there. I look at ACS Medicinal Chemistry Letters more often, just because it has a working RSS feed (and I should note that I've rotated off their editorial board, by the way). Overall, though, I can't say that I miss either of those Elsevier journals, because you have to scroll through an awful lot of. . .stuff. . .to see something worth noting.
The same goes, I'm afraid, for Chemical Communications, and that makes me wonder if it's possible to keep up with the Letters/Communications style journals usefully at all. There are just so many papers pouring through them, and since Chem Comm takes them in from every sort of chemistry there is, vast numbers of them are of little interest to any particular reader. Their mini-review articles are perhaps an attempt to counteract this problem, and the journal also seems to have a slant towards "hot" topics. It's still in my RSS feed, but I look at the numbers of papers that pile up in it, and wonder if I should just delete and get it over with.
Organic Letters, on the other hand, I seem to be able to stay on top of, perhaps because it's focused down to at least organic chemistry (as opposed to Chem Comm). And I find a higher percentage of papers worth looking at than I do in Tet Lett (do others feel the same way?) And as for the other short-communications organic chemistry journals, I don't have them in the feed. Synthesis, Syn Comm, Synlett - writing this prompts me to go in and add them, but we'll see over the next couple of months if I regret it.
What it comes down to is that there's room for only a certain number of titles that can be followed as the papers publish. (The rest of them turn up in literature searches, responses to directed queries). And there are only a certain number of titles that are worth following in real time. So to get back to the question at the start of the post, which well-known journals do you find to be not worth the trouble?
+ TrackBacks (0) | Category: The Scientific Literature
August 26, 2014
There have been several analyses that have suggested that phenotypic drug discovery was unusually effective in delivering "first in class" drugs. Now comes a reworking of that question, and these authors (Jörg Eder, Richard Sedrani, and Christian Wiesmann of Novartis) find plenty of room to question that conclusion.
What they've done is to deliberately focus on the first-in-class drug approvals from 1999 to 2013, and take a detailed look at their origins. There have been 113 such drugs, and they find that 78 of them (45 small molecules and 33 biologics) come from target-based approaches, and 35 from "systems-based" approaches. They further divide the latter into "chemocentric" discovery, based around known pharmacophores, and so on, versus pure from-the-ground-up phenotypic screening, and the 33 systems compounds then split out 25 to 8.
As you might expect, a lot of these conclusions depend on what you classify as "phenotypic". The earlier paper stopped at the target-based/not target-based distinction, but this one is more strict: phenotypic screening is the evaluation of a large number of compounds (likely a random assortment) against a biological system, where you look for a desired phenotype without knowing what the target might be. And that's why this paper comes up with the term "chemocentric drug discovery", to encompass isolation of natural products, modification of known active structures, and so on.
Such conclusions also depend on knowing what approach was used in the original screening, and as everyone who's written about these things admits, this isn't always public information. The many readers of this site who've seen a drug project go from start to finish will appreciate how hard it is to find an accurate retelling of any given effort. Stuff gets left out, forgotten, is un- (or over-)appreciated, swept under the rug, etc. (And besides, an absolutely faithful retelling, with every single wrong turn left in, would be pretty difficult to sit through, wouldn't it?) At any rate, by the time a drug reaches FDA approval, many of the people who were present at the project's birth have probably scattered to other organizations entirely, have retired or been retired against their will, and so on.
But against all these obstacles, the authors seem to have done as thorough a job as anyone could possibly do. So looking further at their numbers, here are some more detailed breakdowns. Of those 45 first-in-class small molecules, 21 were from screening (18 of those high-throughput screening, 1 fragment-based, 1 in silico, and 1 low-throughput/directed screening). 18 came from chemocentric approaches, and 6 from modeling off of a known compound.
Of the 33 systems-based drugs, those 8 that were "pure phenotypic" feature one antibody (alemtuzumab) which was raised without knowledge of its target, and seven small molecules: sirolimus, fingolimod, eribulin, daptomycin, artemether–lumefantrine, bedaquiline and trametinib. The first three of those are natural products, or derived from natural products. Outside of fingolimod, all of them are anti-infectives or antiproliferatives, which I'd bet reflects the comparative ease of running pure phenotypic assays with those readouts.
Here are the authors on the discrepancies between their paper and the earlier one:
At first glance, the results of our analysis appear to significantly deviate from the numbers previously published for first-in-class drugs, which reported that of the 75 first-in-class drugs discovered between 1999 and 2008, 28 (37%) were discovered through phenotypic screening, 17 (23%) through target-based approaches, 25 (33%) were biologics and five (7%) came from other approaches. This discrepancy occurs for two reasons. First, we consider biologics to be target-based drugs, as there is little philosophical distinction in the hypothesis driven approach to drug discovery for small-molecule drugs versus biologics. Second, the past 5 years of our analysis time frame have seen a significant increase in the approval of first-in-class drugs, most of which were discovered in a target-based fashion.
Fair enough, and it may well be that many of us have been too optimistic about the evidence for the straight phenotypic approach. But the figure we don't have (and aren't going to get) is the overall success rate for both techniques. The number of target-based and phenotypic-based screening efforts that have been quietly abandoned - that's what we'd need to have to know which one has the better delivery percentage. If 78/113 drugs, 69% of the first-in-class approvals over the past 15 years, have come from target-based approaches, how does that compare with the total number of first-in-class drug projects? My own suspicion is that target-based drug discovery has accounted for more than 70% of the industry's efforts over that span, which would mean that systems-based approaches have been relatively over-performing. But there's no way to know this for sure, and I may just be coming up with something that I want to hear.
That might especially be true when you consider that there are many therapeutic areas where phenotypic screening is basically impossible (Alzheimer's, anyone?). But there's a flip side to that argument: it means that there's no special phenotypic sauce that you can spread around, either. The fact that so many of those pure-phenotypic drugs are in areas with such clear cellular readouts is suggestive. Even if phenotypic screening were to have some statistical advantage, you can't just go around telling people to be "more phenotypic" and expect increased success, especially outside anti-infectives or antiproliferatives.
The authors have another interesting point to make. As part of their analysis of these 113 first-in-class drugs, they've tried to see what the timeline is from the first efforts in the area to an approved drug. That's not easy, and there are some arbitrary decisions to be made. One example they give is anti-angiogenesis. The first report of tumors being able to stimulate blood vessel growth was in 1945. The presence of soluble tumor-derived growth factors was confirmed in 1968. VEGF, the outstanding example of these, was purified in 1983, and was cloned in 1989. So when did the starting pistol fire for drug discovery in this area? The authors choose 1983, which seems reasonable, but it's a judgment call.
So with all that in mind, they find that the average lead time (from discovery to drug) for a target-based project is 20 years, and for a systems-based drug it's been 25 years. They suggest that since target-based drug discovery has only been around since the late 1980s or so, its impact is only recently beginning to show up in the figures, and that it's in much better shape than some would suppose.
The data also suggest that target-based drug discovery might have helped reduce the median time for drug discovery and development. Closer examination of the differences in median times between systems-based approaches and target-based approaches revealed that the 5-year median difference in overall approval time is largely due to statistically significant differences in the period from patent publication to FDA approval, where target-based approaches (taking 8 years) took only half the time as systems-based approaches (taking 16 years). . .
The pharmaceutical industry has often been criticized for not being sufficiently innovative. We think that our analysis indicates otherwise and perhaps even suggests that the best is yet to come as, owing to the length of time between project initiation and launch, new technologies such as high-throughput screening and the sequencing of the human genome may only be starting to have a major impact on drug approvals. . .
Now that's an optimistic point of view, I have to say. The genome certainly still has plenty of time to deliver, but you probably won't find too many other people saying in 2014 that HTS is only now starting to have an impact on drug approvals. My own take on this is that they're covering too wide a band of technologies with such statements, lumping together things that have come in at different times during this period and which would be expected to have differently-timed impacts on the rate of drug discovery. On the other hand, I would like this glass-half-full view to be correct, since it implies that things should be steadily improving in the business, and we could use it.
But the authors take pains to show, in the last part of their paper, that they're not putting down phenotypic drug discovery. In fact, they're calling for it to be strengthened as its own discipline, and not (as they put it) just as a falling back to the older "chemocentric" methods of the 1980s and before:
Perhaps we are in a phase today similar to the one in the mid-1980s, when systems-based chemocentric drug discovery was largely replaced by target-based approaches. This allowed the field to greatly expand beyond the relatively limited number of scaffolds that had been studied for decades and to gain access to many more pharmacologically active compound classes, providing a boost to innovation. Now, with an increased chemical space, the time might be right to further broaden the target space and open up new avenues. This could well be achieved by investing in phenotypic screening using the compound libraries that have been established in the context of target-based approaches. We therefore consider phenotypic screening not as a neoclassical approach that reverts to a supposedly more successful systems-based method of the past, but instead as a logical evolution of the current target-based activities in drug discovery. Moreover, phenotypic screening is not just dependent on the use of many tools that have been established for target-based approaches; it also requires further technological advancements.
That seems to me to be right on target: we probably are in a period just like the mid-to-late 1980s. In that case, though, a promising new technology was taking over because it seemed to offer so much more. Today, it's more driven by disillusionment with the current methods - but that means, even more, that we have to dig in and come up with some new ones and make them work.
+ TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History
August 25, 2014
Mentioning such a small compound as pirfenidone prompts me to put up the graphic shown below: these are the smallest commonly used drugs that I can think of. (OK, there's cocaine as a nasal anaesthetic - no, really - but that's where I draw the line at "commonly used".) Nominations for ones that I've missed are welcome, and I'll update the list as needed. Note: four more have been added since the initial post, with more to come. This sort of thing really makes a chemist think, though - some of these compounds are very good indeed at what they do, and have been wildly successful. We need to keep an open mind about small molecules, that's for sure, no matter how small they are.
Update: see this follow-up post for the latest version of the graphic.
+ TrackBacks (0) | Category: Drug Industry History
It has been a bizarre ride for InterMune, its employees, and its investors. But now it ends with Roche buying them for $8.3 billion, a sum that would have brought incredulous stares a few years ago. The deal makes sense for Roche, and it will provide investors a rationale for years as they buy into small biopharma companies - trying to pick the next InterMune, you know.
+ TrackBacks (0) | Category: Business and Markets
Experimental and Clinical Cardiology used to be a reputable journal. Now it's a trash heap piled with crap. No, literally - the Ottawa Citizen newspaper has proof, thanks to reporter Tom Spears (who's an experienced hand at this). The journal was sold last year, and the new owners will publish absolutely anything you send them, as long as you send them $1200 to their bank account in the Turks and Caicos Islands. I wish I were making all that up, but that is exactly how it goes, offshore banking and all.
Spears whipped together a gibberish cardiology paper by taking one about HIV and doing a find-and-replace to substitute "cardiology" for "HIV" wherever it occurred. I'm sure it reads just fine, if you're high on crack. He stripped out all the graphics, wrote up some captions for new ones, but didn't send any graphs or figures with his submission. No problemo, dude! Paper accepted! As soon as the money shows up under that palm tree in the Caribbean, this junk will become the latest contribution to the medical literature.
The "journal" lists an affiliation with the International Academy of Cardiovascular Sciences in Winnipeg, which organization is pretty upset about that, since there's no connection at all any more. But how to get that fixed? The phone number listed for the editorial office doesn't work. And they don't respond to any emails that they don't feel like responding to, which I'd guess are all the ones that don't involve the possibility of $1200 wire transfers.
The wonderful people behind this scam will ride it as long as a shred of reputation clings to the journal's name, or as long as people send them money, whichever comes first. The journal's web site, which I will consider linking to if they pay me twelve hundred dollars, looks legit, except for the slightly-shaky-English-style notice that "Starting from Jan 1, 2013, Experimental and Clinical Cardiology Journal will operate under new publishing group". If you click "Editorial Board", it tells you that a new one is coming soon. And this part is pretty interesting, too - they say that they provide:
. . .outstanding service to authors through a clear and fast editorial process. Review and decision will be fast and our editorial policy is clear: we will either accept your manuscript for publication or not, our editors will not ask for additional research.
All submissions will be peer reviewed, and our reviewers are asked to focus their attention to data presented in the article. Your manuscript, after the review process can be or accepted or declined. Three independent reviewers are reviewing each manuscript and if two of them accept the manuscript then your work will be published without any further corrections. Note that we will not reject a manuscript because it is out of scope or for its perceived importance, novelty or ability to attract citations: we will publish any study that is scientifically sound.
Yeah boy! But as it says under "Publication Fees", "Open access publishing is not without its costs". One of those costs should be the scientific credibility of anyone who sends a paper in to the place these days. I've looked over the most recent papers listed on the web site - there's one from a hospital in Barcelona, a university in Turkey, an institute in China, some group from Italy whose paper doesn't load well, and a bunch of people with German-sounding names whose paper appears to be two pages long and consists of one figure and no text. An erratum? Who can tell? And who would bother? You might as well copy-and-paste some old Star Wars fan-fiction; no one's going to notice. Every single one of these lead authors probably had their paper turn around within a couple of days, and sent $1200 to the flipping Turks and Caicos without batting an eye, for a journal that's supposedly based in Switzerland. For shame.
No getting around it: if you send money to any of the publishers on Beall's List, you are funding a bunch of scam artists. And if you use such a paper to pad your own c.v., then you've decided to become a scam artist yourself.
+ TrackBacks (0) | Category: The Dark Side | The Scientific Literature
August 22, 2014
Science has an article by journalist Ken Garber on palbociclib, the Pfizer CDK4 compound that came up here the other day when we were discussing their oncology portfolio. You can read up on the details of how the compound was put in the fridge for several years, only to finally emerge as one of the company's better prospects. The roots of the project go back to about 1995 at Parke-Davis:
Because the many CDK family members are almost identical, “creating a truly selective CDK4 inhibitor was very difficult,” says former Parke-Davis biochemist Dave Fry, who co-chaired the project with chemist Peter Toogood. “A lot of pharmaceutical companies failed at it, and just accepted broad-spectrum CDK inhibitors as their lead compounds.” But after 6 years of work, the pair finally succeeded with the help of some clever screens that could quickly weed out nonspecific “dirty” compounds.
Their synthesis in 2001 of palbociclib, known internally as PD-0332991, was timely. By then, many dirty CDK inhibitors from other companies were already in clinical trials, but they worked poorly, if at all. Because they hit multiple CDK targets, these compounds caused too much collateral damage to normal cells. . .Eventually, most efforts to fight cancer by targeting the cell cycle ground to a halt. “Everything sort of got hung up, and I think people lost enthusiasm,” Slamon says.
PD-0332991 fell off the radar screen. Pfizer, which had acquired Warner-Lambert/Parke-Davis in 2000 mainly for the cholesterol drug Lipitor, did not consider the compound especially promising, Fry says, and moved it forward haltingly at best. “We had one of the most novel compounds ever produced,” Fry says, with a mixture of pride and frustration. “The only compound in its class.”
A major merger helped bury the PD-0332991 program. In 2003, Pfizer acquired Swedish-American drug giant Pharmacia, which flooded Pfizer's pipeline with multiple cancer drugs, all competing for limited clinical development resources. Organizational disarray followed, says cancer biologist Dick Leopold, who led cancer drug discovery at the Ann Arbor labs from 1989 to 2003. “Certainly there were some politics going on,” he says. “Also just some logistics with new management and reprioritization again and again.” In 2003, Pfizer shut down cancer research in Ann Arbor, which left PD-0332991 without scientists and managers who could demand it be given a chance, Toogood says. “All compounds in this business need an advocate.”
So there's no doubt that all the mergers and re-orgs at Pfizer slowed this compound down, and no doubt a long list of others, too. The problems didn't end there. The story goes on to show how the compound went into Phase I in 2004, but only got into Phase II in 2009. The problem is, well before that time it was clear that there were tumor types that should be more sensitive to CDK4 inhibition. See this paper from 2006, for example (and there were some before this as well).
It appears that Pfizer wasn't going to develop the compound at all (thus that long delay after Phase I). They made it available as a research tool to Selina Chen-Kiang at Weill Cornell, who saw promising results with mantle cell lymphoma; then Dennis Slamon and Richard Finn at UCLA profiled the compound in breast cancer lines and took it into a small trial there, with even more impressive results. And at this point, Pfizer woke up.
Before indulging in a round of Pfizer-bashing, though, it's worth remembering that stories broadly similar to this are all too common. If you think that the course of true love never did run smooth, you should see the course of drug development. Warner-Lambert (for example) famously tried to kill Lipitor more than once during its path to the market, and it's a rare blockbuster indeed that hasn't passed through at least one near-death experience along the way. It stands to reason: since the great majority of all drug projects die, the few that make it through are the ones that nearly died.
There are also uncounted stories of drugs that nearly lived. Everyone who's been around the industry for a while has, or has heard, tales of Project X for Target Y, which was going along fine and looked like a winner until Company Z dropped it for Stupid Reason. . .uh, Aleph. (Ran out of letters there). And if only they'd realized this, that, and the other thing, that compound would have made it to market, but no, they didn't know what they had and walked away from it, etc. Some of these stories are probably correct: you know that there have to have been good projects dropped for the wrong reasons and never picked up again. But they can't all be right. Given the usual developmental success rates, most of these things would have eventually wiped out for some reason. There's an old saying among writers that the definition of a novel is a substantial length of narrative fiction that has something wrong with it. In the same way, every drug that's on the market has something wrong with it (usually several things), and all it takes is a bit more going wrong to keep it from succeeding at all.
So where I fault Pfizer in all this is in the way that this compound got lost in all the re-org shuffle. If it had developed more normally, its activity would have been discovered years earlier. Now, it's not like there are dozens of drugs that haven't made it to market because Pfizer dropped the ball on them - but given the statistics, I'll bet that there are several (two or three? five?) that could have made it through by now, if everyone hadn't been so preoccupied with merging, buying, moving, rearranging, and figuring out if they were getting laid off or not.
The good thing is that other companies stepped into the field on the basis of those earlier publications, and found CDK4/6 inhibitors of their own (notably Novartis and Lilly). This is why I think that huge mergers hurt the intellectual health of the drug industry. Take it to the reductio ad not-all-that-absurdum of One Big Drug Company. If we had that, and only that, then whole projects and areas of research would inevitably get shelved, and there would be no one left to pick them up at all. (I'll also note, in passing, that should all of the CDK inhibitors make it to market, there will be yahoos who decry the whole thing as nothing but a bunch of fast-follower me-too drugs, waste of time and money, profits before people, and so on. Watch for it.)
+ TrackBacks (0) | Category: Cancer | Drug Development | Drug Industry History
August 21, 2014
So here's a question for the medicinal chemists: how come we don't like bromoaromatics so much? I know I don't, but I have trouble putting my finger on just why. I know that there's a ligand efficiency argument to be made against them - all that weight, for one atom - but there are times when a bromine seems to be just the thing. There certainly are such structures in marketed drugs. Some of the bad feelings around them might linger from the sense that it's a somewhat unnatural element, as opposed to chlorine, which in the form of chloride is everywhere in living systems.
But bromide? Well, for what it's worth, there's a report that bromine may in fact be an essential element after all. That's not enough to win any arguments about putting it into your molecules - selenium's essential, too, and you don't see people cranking out the organoselenides. But here's a thought experiment: suppose you have two drug candidate structures, one with a chlorine on an aryl ring and the other with a bromine on the same position. If they have basically identical PK, selectivity, preliminary tox, and so on, which one do you choose to go on with? And why?
If you chose the chloro derivative (and I think that most medicinal chemists instinctively would, for just the same hard-to-articulate reasons we're talking about), then what split in favor of the bromo compound would be enough to make you favor it? How much more activity, PK coverage, etc. do you need to make you willing to take a chance on it instead?
+ TrackBacks (0) | Category: Drug Development | Odd Elements in Drugs | Pharmacokinetics | Toxicology
Edward Zartler ("Teddy Z" of the Practical Fragments blog) has a short piece in the latest ACS Medicinal Chemistry Letters on fragment-based drug discovery. He applies the term "fragonomics" to the field (more on this in a moment), and provides a really useful overview of how it should work.
One of his big points is that fragment work isn't so much about using smaller-than-usual molecules, as it is using molecules that make only good interactions with the target. It's just that smaller molecules are far more likely to achieve that - a larger one will have some really strong interactions, along with some things that actually hurt the binding. You can start with something large and hack pieces of it off, but that's often a difficult process (and you can't always recapitulate the binding mode, either). But if you have a smaller piece that only makes a positive interaction or two, then you can build out from that, tiptoeing around the various landmines as you go. That's the concept of "ligand efficiency", without using a single equation.
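For anyone who does want the single equation: the usual definition of ligand efficiency is the binding free energy divided by the number of heavy (non-hydrogen) atoms, LE = -ΔG/HA = -RT·ln(Kd)/HA, in kcal/mol per heavy atom. A quick sketch (the Kd values and atom counts here are invented for illustration, not taken from the article):

```python
import math

def ligand_efficiency(kd_molar: float, heavy_atoms: int, temp_k: float = 298.15) -> float:
    """LE = -RT * ln(Kd) / heavy-atom count, in kcal/mol per heavy atom."""
    R = 1.987e-3  # gas constant in kcal/(mol*K)
    delta_g = R * temp_k * math.log(kd_molar)  # negative whenever Kd < 1 M
    return -delta_g / heavy_atoms

# Invented comparison: a weak but tiny fragment vs. a potent but large lead
fragment_le = ligand_efficiency(kd_molar=100e-6, heavy_atoms=12)  # 100 uM, 12 atoms
lead_le = ligand_efficiency(kd_molar=10e-9, heavy_atoms=35)       # 10 nM, 35 atoms

print(f"fragment LE: {fragment_le:.2f} kcal/mol per heavy atom")
print(f"lead LE:     {lead_le:.2f} kcal/mol per heavy atom")
```

The weakly binding fragment actually comes out as the more efficient binder per atom (about 0.45 versus 0.31 here), which is exactly the point above: build out from the efficient little piece instead of dragging dead weight along. A commonly quoted rule of thumb in the fragment literature is to aim for LE of at least 0.3.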
He also emphasizes that having a simpler molecule to work on means that the SAR can be tested and expanded quickly, often without anyone hitting the lab bench at all. You can order things up from the vendors or raid your own screening collection for close analogs. This delays the entry of the medicinal chemists to the project, which (considering that their time is always in demand) is a feature to be happy about.
The article ends up by saying that "Fragonomics has won the field. . .The age of the medchemist is over; now is the time of the biophysicist." I don't know if that's quite the way to win friends and influence people, though. Medicinal chemists are rather sensitive to threats to their existence (with good reason), so my worry is that coming on like this will make chemists who haven't tried it leery of fragment-based drug design in general. I'm also not thrilled with "fragonomics" as a term (just as I'm not thrilled with most of the newly-coined "omics" terms). The word doesn't add anything; it's just a replacement for having to say "fragment-based drug discovery" or "FBDD" all the time. It's not that we don't need a replacement for the unwieldy phrase - it's just that I think that many people might (by now) be ready to dismiss anything that's had "omics" slapped on it. I wish I had something better to offer, but I'm coming up blank myself.
+ TrackBacks (0) | Category: Drug Assays
August 20, 2014
Perseverance is a critical variable in drug discovery. Too little of it, and you are absolutely guaranteed to fail - no drug has ever made it to market without trying the patience of everyone involved. Too much of it, and you are very nearly guaranteed to waste all your money: most drug development projects don't work, and eventually reach a point where no amount of time or money could make them work, either. Many are the efforts where leaders have gritted their teeth, redoubled their efforts, and led everyone further into the abyss.
But sometimes these things come through, and that's what seems to have happened with Amicus and their drug migalastat for Fabry's. It's a protein chaperone, one of the emerging class of drugs that work by stabilizing particular protein conformations to help regain function. At the end of 2012, Amicus and their partner GSK announced clinical trial results that didn't meet significance, which prompted GlaxoSmithKline to return rights to the drug to Amicus.
Who kept on with it. And who announced today that the second Phase III study had come back positive, enough so that they plan to file for regulatory approval. (The belief is that the first Phase III enrolled an inappropriate mix of patients). Congratulations to the company, who may well have given many Fabry's patients their first opportunity for an oral therapy for their disease.
+ TrackBacks (0) | Category: Clinical Trials
John LaMattina has a look at Pfizer's oncology portfolio, and what their relentless budget-cutting has been doing to it. The company is taking some criticism for having outlicensed two compounds (tremelimumab to AstraZeneca and neratinib to Puma) which seem to be performing very well after Pfizer ditched them. Here's LaMattina (a former Pfizer R&D head, for those who don't know):
Unfortunately, over 15 years of mergers and severe budget cuts, Pfizer has not been able to prosecute all of the compounds in its portfolio. Instead, it has had to make choices on which experimental medicines to keep and which to set aside. However, as I have stated before, these choices are filled with uncertainties as oftentimes the data in hand are far from complete. But in oncology, Pfizer seems to be especially snake-bit in the decisions it has made.
That goes for their internal compounds, too. As LaMattina goes on to say, palbociclib is supposed to be one of their better compounds, but it was shelved for several years due to more budget-cutting and the belief that the effort would be better spent elsewhere. It would be easy for an outside observer to whack away at the company and wonder how incompetent they could be to walk away from all these winners, but that really isn't fair. It's very hard in oncology to tell what's going to work out and what isn't - impossible, in fact, until compounds have progressed to a certain stage. The only way to be sure is to take these things on into the clinic and see, unfortunately (and there you have one of the reasons things are so expensive around here).
Pfizer brought up more interesting compounds than it was later able to develop. It's worth wondering what they could have done with these if they hadn't been pursuing their well-known merger strategy over these years, but we'll never know the answer to that one. The company got too big and spent too much money, and then tried to cure that by getting even bigger. Every one of those mergers was a big disruption, and you sometimes wonder how anyone kept their focus on developing anything. Some of their drug-development choices were disastrous and completely their fault (the Exubera inhaled-insulin fiasco, for example), but their decisions in their oncology portfolio, while retrospectively awful, were probably quite defensible at the time. But if they hadn't been occupied with all those upheavals over the last ten to fifteen years, they might have had a better chance of focusing on at least a few more of their own compounds.
Their last big merger was with Wyeth. If you take Pfizer's R&D budget and Wyeth's and add them, you don't get Pfizer's R&D post-merger. Not even close. Pfizer's R&D is smaller now than their budget was alone before the deal. Pyrrhus would have recognized the problem.
+ TrackBacks (0) | Category: Business and Markets | Cancer | Drug Development | Drug Industry History
August 19, 2014
Here's a very good review article in J. Med. Chem. on the topic of protein binding. For those outside the field, that's the phenomenon of drug compounds getting into the bloodstream and then sticking to one or more blood proteins. Human serum albumin (HSA) is a big player here - it's a very abundant blood protein that's practically honeycombed with binding sites - but there are several others. The authors (from Genentech) take on the disagreements about whether low plasma protein binding is a good property for drug development (and conversely, whether high protein binding is a warning flag). The short answer, according to the paper: neither one.
To further examine the trend of PPB for recently approved drugs, we compiled the available PPB data for drugs approved by the U.S. FDA from 2003 to 2013. Although the distribution pattern of PPB is similar to those of the previously marketed drugs, the recently approved drugs generally show even higher PPB than the previously marketed drugs (Figure 1). The PPB of 45% newly approved drugs is >95%, and the PPB of 24% is >99%. These data demonstrate that compounds with PPB > 99% can still be valuable drugs. Retrospectively, if we had posed an arbitrary cutoff value for the PPB in the drug discovery stage, we could have missed many valuable medicines in the past decade. We suggest that PPB is neither a good nor a bad property for a drug and should not be optimized in drug design.
That topic has come up around here a few times, as could be expected - it's a standard med-chem argument. And this isn't even the first time that a paper has come out warning people that trying to optimize on "free fraction" is a bad idea: see this 2010 one from Nature Reviews Drug Discovery.
But it's clearly worth repeating - there are a lot of people who get quite worked up about this number - in some cases, because they have funny-looking PK and are trying to explain it, or in some cases, just because it's a number and numbers are good, right?
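The arithmetic behind the free-drug hypothesis makes the paper's point concrete: what matters is the unbound concentration a dose actually delivers, not the binding percentage by itself. A quick sketch, with made-up exposures chosen only to illustrate why a PPB cutoff is arbitrary:

```python
def free_concentration(total_conc_um, ppb_percent):
    """Unbound plasma concentration under the free-drug hypothesis."""
    fu = 1.0 - ppb_percent / 100.0  # free fraction
    return total_conc_um * fu

# Hypothetical numbers: a 99%-bound drug dosed to a higher total
# exposure still delivers more free drug than a 90%-bound one.
drug_a = free_concentration(10.0, 99.0)  # 10 uM total, 99% bound
drug_b = free_concentration(0.5, 90.0)   # 0.5 uM total, 90% bound
print(round(drug_a, 3), round(drug_b, 3))  # drug_a wins despite "worse" PPB
```

Screening out that 99%-bound compound on protein binding alone would have thrown away the one with more free drug on board - which is the retrospective mistake the Genentech authors are warning about.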
+ TrackBacks (0) | Category: Drug Assays | Drug Development | Pharmacokinetics
How many ways do we have to differentiate samples of closely related compounds? There's NMR, of course, and mass spec. But what if two compounds have the same mass, or have unrevealing NMR spectra? Here's a new paper in JACS that proposes another method entirely.
Well, maybe not entirely, because it still relies on NMR. But this one is taking advantage of the sensitivity of 19F NMR shifts to molecular interactions (the same thing that underlies its use as a fragment-screening technique). The authors (Timothy Swager and co-workers at MIT) have prepared several calixarene host molecules which can complex a variety of small organic guests. The host structures feature nonequivalent fluorinated groups, and when another molecule binds, the 19F NMR peaks shift around compared to the unoccupied state. (Shown are a set of their test analytes, plotted by the change in three different 19F shifts).
That's a pretty ingenious idea - anyone who's done 19F NMR work will hear about the concept and immediately say "Oh yeah - that would work, wouldn't it?" But no one else seems to have thought of it. Spectra of their various host molecules show that chemically very similar molecules can be immediately differentiated (such as acetonitrile versus propionitrile), and structural isomers of the same mass are also instantly distinguished. Mixtures of several compounds can also be assigned component by component.
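Since each analyte lands at its own point in the space of the three 19F shift changes, identification reduces to matching a measured fingerprint against a library. A minimal sketch of that idea - the shift values below are invented for illustration, not taken from the paper:

```python
import math

# Hypothetical 19F shift changes (ppm) for three nonequivalent probe
# fluorines on the host, recorded for known nitrile guests.
library = {
    "acetonitrile":  (0.42, -0.15, 0.08),
    "propionitrile": (0.51, -0.02, 0.19),
    "benzonitrile":  (0.13,  0.33, -0.27),
}

def identify(observed):
    """Match an observed shift fingerprint to the nearest library entry."""
    return min(library, key=lambda name: math.dist(observed, library[name]))

print(identify((0.50, -0.05, 0.20)))  # nearest neighbor: propionitrile
```

With three well-chosen probe fluorines, even chemically similar guests separate cleanly in this space - which is exactly why acetonitrile and propionitrile are immediately distinguishable in the real spectra.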
This paper concentrates on nitriles, which all seem to bind in a similar way inside the host molecules. That means that solvents like acetone and ethyl acetate don't interfere at all, but it also means that these particular hosts are far from universal sensors. But no one should expect them to be. The same 19F shift idea can be applied across all sorts of structures. You could imagine working up a "pesticide analysis suite" or a "chemical warfare precursor suite" of well-chosen host structures, sold together as a detection kit.
This idea is going to be competing with LC/MS techniques. Those, when they're up and running, clearly provide more information about a given mixture, but good reproducible methods can take a fair amount of work up front. This method seems to me to be more of a competition for something like ELISA assays, answering questions like "Is there any of compound X in this sample?" or "Here's a sample contaminated with an unknown member of Compound Class Y. Which one is it?" The disadvantage there is that an ELISA doesn't need an NMR (with a fluorine probe) handy.
But it'll be worth seeing what can be made of it. I wonder if there could be host molecules that are particularly good at sensing/complexing particular key functional groups, the way that the current set picks up nitriles? How far into macromolecular/biomolecular space can this idea be extended? If it can be implemented in areas where traditional NMR and LC/MS have problems, it could find plenty of use.
+ TrackBacks (0) | Category: Analytical Chemistry
August 18, 2014
I spent the morning in the lab pretty much destroying whatever I touched: wrong solvents for chromatography, dropping things in the sink, bumping solutions all over the inside of my rota-vap. This is, though, a Monday, so at least I have that to blame. But if everyone started out the week the way I did, then scientific progress came to a juddering halt around 11 AM EST. My hope is that I can be less of a wrecking ball during the rest of the day and start working my way back into positive territory.
+ TrackBacks (0) | Category: Life in the Drug Labs
Here's a look back at the beginnings of ChemDraw, and you won't be surprised to hear that its origins go back to someone (Dave Evans' wife!) who'd had way too much of the old-fashioned style of structure drawing.
As I've mentioned here before, my grad school experience ended up being timed to experience both worlds. For my second-year continuation exam, I had to do the structures the classic way: green plastic template to make the chair and boat cyclohexanes all come out the same, rub-on letters for the atoms. If you wanted to copy a structure, well, you went down to the copier and you copied that structure. And you Frankensteined each scheme together with tape (matte, not shiny) or glue stick to make The Final Copy, rolling it into the typewriter to put in the captions and the text over the arrows. As I've always said, it was, in retrospect, not too far off from incising a buffalo-dung tablet with a sharpened stick and leaving it in the sun to dry.
It was a lot closer to that than it was to ChemDraw, that's for sure. (The sharpened stick would have worked pretty well with those rub-on letter transfers). And this is exactly what happened every time an organic chemist saw it in action:
The program developed little by little in this manner, with Sally channeling the needs of chemists and Rubenstein doing the programming. In July of 1985, ChemDraw premiered at the Gordon Research Conference on Reactions & Processes in New Hampshire. Rubenstein and the Evanses demonstrated it during a break in the conference. Bad weather kept the conferees indoors, so attendance was high.
Stuart L. Schreiber, then a chemistry professor at Yale University, saw the demo and recalls “knowing instantly that my prized drafting board and my obsessive drafting of chemical formulas were over.”
Schreiber holds the distinction of being the first person to purchase ChemDraw. “The impact of seeing ChemDraw on a Macintosh computer was dramatic and immediate,” he says. “There was no doubt that this was going to change the way chemists interact with each other and the rest of the scientific community,” he says. At the time Schreiber was proudly using his Xerox Memorywriter electronic typewriter with two lines of editable text. “The combination of the Macintosh computer and ChemDraw clearly demanded next-day adoption.” He rushed home to New Haven and placed his order.
That's just how it went. Every organic chemist who saw the program in action immediately wanted it; the superiority of the program to any of the manual methods was immediately and overwhelmingly obvious. You hear similar stories about people's reactions to the first spreadsheet program (VisiCalc) in the late 1970s, and for exactly the same reasons. Advances like these need no sales pitch at all - you could demo such things in complete silence for five minutes and people would line up with their money. I can remember seeing ChemDraw for the first time when I was at Duke, and being stunned by the idea of copying and pasting structures, resizing them, rotating them, joining them together, and (especially) saving the damned things for later.
So for my dissertation, which I started writing in late 1987, it was Word (3.02!) and ChemDraw all the way, and I was the first person in Duke's chemistry department to solo with those two for the PhD writeup. I did some of it on a Mac Plus and a lot of it on Mac SEs, switching floppy disks in and out. There was a Mac II down the hall, with a color screen and a 20 MB hard drive, and I really felt like I was on the cutting edge when I used that one. My lone disk with the manuscript in progress went unreadable and unrecoverable after two weeks of intermittent work, which taught me a lifelong lesson about making backups. Although it was a major pain to keep it up, I ended (with not-so-unusual grad student paranoia) by keeping five copies at all times: the current working copy, an extra one in the desk drawer in my lab, one back by my bench, one over in my apartment, and one in the glove compartment of my car.
My PhD advisor was not a computer user himself at the time, though, which led to an interesting scene when I did hand the manuscript over to him some months later (which process was an interesting story in itself, for another time). He got it back to me with a large number of hand-marked corrections, but as I flipped through the pages I realized that almost all of them were the same corrections, flagged every time that they appeared. I saw him that afternoon, and he asked if I'd seen his changes. I had, I told him, and I'd made all the corrections. He looked at me, puzzled, so I told him about the "Find and Replace" command, and he raised his eyebrows and said "That's very. . .convenient, isn't it?" "Sure is," I badly wanted to say. "Welcome to the fun-filled late 20th century, boss. Let's see, what else. . .we landed on the moon in '69. Oh, the Beatles broke up. And. . ."
But I didn't say any of that, of course. You don't go around saying things like that to your professor, especially when you're in the final stages of writing up, not unless you want to face the choice of going back to the lab for a couple more years or asking people if they'd like the Value Meal. No, facing your committee is preferable in every way.
+ TrackBacks (0) | Category: Graduate School
August 15, 2014
There's a post by Peter Bach, of the Center for Health Policy and Outcomes, that's been getting a lot of attention the last few days. It's called "Unpronounceable Drugs, Incomprehensible Prices", and you know what it says.
No, really, you do, even if you haven't seen it. Too high, unconscionable, market can't support, what they can get away with, every year, too high. Before I get to the uncomfortable parts of my own take on this, let me stipulate a couple of things up front: (1) I do think that the industry is inviting trouble for itself by the way it is raising prices. It is in drug companies' short term interest to do so, but long term I worry that it's going to bring on some sort of price-control regimen. (2) Some drug prices probably are too high (but see below for what that means). Big breakthroughs can, at least in theory, command high prices, but not everything deserves to be priced at the level it is.
I was about to say "see below" again, but this paragraph is below, so here goes. Let me quote a bit from Bach's article:
Cancer drug prices keep rising. The industry says this reflects the rising costs of drug development and the business risks they must take when testing new drugs. I think they charge what they think they can get away with, which goes up every year. . .Regardless of the estimate, the pricing of new drugs for cancer and now other common diseases has come unglued from the rationale the industry has long espoused. Instead, pricing is explained by a phenomenon of increasing boldness by the industry against a backdrop of regulators and insurers who have no legal authority to dictate or even propose alternative pricing models.
Bach's first assertion is correct: drug companies are charging what they think they can get away with. In that, they are joined by pretty much every other business in the entire country. I did a post once where I imagined car sales transplanted into the world of drug sales - you couldn't just walk in and buy a car, for example. No, you had to go to a car consultant first, licensed by the state, who would examine your situation and determine the sort of car you needed. Once they'd given you a car prescription, you could then go to a dealer.
Well, we don't have that, but what car companies do charge is, well, what they can get away with. The same as steel companies, soft drink companies, cardboard box companies, grocery stores, and people who are selling their houses. You charge what you think the market will bear. Even people selling basic necessities of life like food and shelter charge what they think the market will bear. It's true that health care does feel different from any of those (a point that I went into in that post linked in the last paragraph), and there's the root of many a problem.
And, some will say, a big difference is that none of these other sellers have patents on their side, the legal right to put the screws on. But remember the flip side of the patent system: the legal certainty that you will lose that pricing power on a set date. The pricing of new drugs is completely driven by their expected patent lifetimes, because almost all the money that the developing company is ever going to make off the drug is going to have to be made during that period.
And sometimes that period isn't very long. The patent clock starts ticking a long time before a drug ever gets on the market; there are often only five to ten years left when it's finally approved for sale. There are other factors, too. Everyone is talking about the price of Sovaldi for hepatitis C, but not as many people have thought about the fact that the drug is, in fact, so effective that it has blown two other recently approved Hep C treatments right out of the market, well before their patent lifetimes had even expired. There really is competition in the drug business, and that sector shows it in action.
Now, what there isn't so much of is competition on price, true. And that's what you do see in the other businesses I named above. There are grocery stores that occupy the "Wonderful Prestigious High Quality" part of the market, and others that occupy the "Low Low Prices Every Day" part. (And interestingly, if you Venn-diagram out what's on the shelves of those two, there's still some overlap, allowing you to watch people paying wildly different prices for blueberries that came off the same truck, not to mention even less perishable stuff like aluminum foil). You don't see this in the drug industry, partly because for patented drugs we're never selling the same blueberries, the same gasoline, or the same khaki trousers. Even the biggest "me-too" drugs still differ from each other to some degree.
And that brings up another point. Bach uses (as his example of pricing in the cancer field) two ALK compounds, Xalkori (crizotinib) from Pfizer and Zykadia (ceritinib) from Novartis. Xalkori was first, and Bach makes a lot of the fact that Zykadia is priced higher, even though he says that Pfizer ran bigger clinical trials, had to work out the associated diagnostic test with the FDA, and launch the new mechanism into the oncology market. Novartis, he says, got to piggy-back on all that, and yet their drug is priced higher. There can be no other reason for that pricing decision, Bach says, other than that they can.
Let's go into some details that Bach's article leaves out. Zykadia is indeed second to market. But the time gap between the two drugs means that Novartis was working on it before they knew that Xalkori worked in the clinic. Bach makes an error here made by many others who have not actually done drug discovery work: the time course of these things is longer than it looks. A screen had to be run against ALK, compounds had to be confirmed, a medicinal chemistry team had to optimize them and make lots of new structures, all of which except one fell by the side of the road. The compound had to go through animal tests for efficacy and safety, and it had to be scaled up and formulated. And so on, and so on. Novartis did not sit back, watch Xalkori succeed, and then decide "Hey, we should get us some of that action, too".
Now Zykadia is, as Bach says, a second-line therapy. But it's approved for patients who do not respond to, or have become intolerant to Xalkori. So this "me-too" drug is, in fact, different enough to work on patients for whom Xalkori has failed. In fact, most patients will start to show relapse inside of a year on Xalkori, so it would appear that most non-small-cell lung cancer patients with multiyear survival are probably going to end up taking both compounds. Cancers mutate quickly, and we need all the options we can get - and guess what, some of those options are going to be second to market, because they can't all be first.
Another point to note is that while Zykadia was indeed approved on the basis of a smaller clinical trial set, that's because it received "breakthrough" designation from the FDA for accelerated review and approval. Startlingly, it actually got approved after Phase I trials alone. (Not bad for what Bach characterizes as a simple copycat drug, by the way). Novartis has run the compound in more clinical trials than that, and they continue to do so. It's not like they slipped in with a mere 163 patients and then trotted off to the FDA while brushing the dust off their hands. To find this out, by the way, you'll want to use "LDK378", the internal Novartis designation for the drug, and I'm passing this information on to Bach for free. Clinicaltrials.gov shows 13 trials in the US when you do that, and there are others outside the country as well.
Bach's article, as mentioned, plays down any differences between these two drugs, saying that "they have not been directly compared". But that's not accurate. Let me quote from that link in the paragraph just above:
As described by Shaw and colleagues in the New England Journal of Medicine, ceritinib has striking activity in ALK-rearranged NSCLC, both in treatment-naïve patients and in those who experienced tumor progression on crizotinib. . .The drug has clear pharmacological advantages over crizotinib. Its surprising level of activity in crizotinib-resistant tumors may be explained by its greater potency and its particular ability to inhibit ALK with gatekeeper mutations that confer resistance to crizotinib.
The two drugs have had a very important comparison: people who are going to die on Xalkori are going to survive longer if they switch to Zykadia. "Me-too" drug, my ass.
But rather than end on that note, tempting as that is, let me circle back to pricing once again. The price for these cancer drugs is not borne by individual patients emptying their piggy banks. It is borne by insurance, both private and government. And drug companies do indeed price their drugs at what they think the insurance plans will pay for them. This is not a secret, and should not be a surprise, and I continue to be baffled by people who react to this with horror and disbelief. Prices appear when you find out what the payers will pay. If Pfizer, Novartis, or Gilead priced their drugs at fifty million dollars a dose, no insurance company would reimburse. But the insurance companies are paying the current prices, and if they believe that they will be put out of business by doing so, they need to stop doing that. And they could.
They will, too, if we in the industry keep pushing them towards doing it. That's our big problem in drug development: our productivity has been too low, and we're making up for it by charging more money. But that can't go on forever. There are walls closing in on us from both sides, and we're going to have to scramble out from between them at some point. Pricing power can only take you so far.
+ TrackBacks (0) | Category: Cancer | Clinical Trials | Drug Prices | Regulatory Affairs | Why Everyone Loves Us
August 14, 2014
A huge amount of what's actually going on inside living cells involves protein-protein interactions. Drug discovery, for obvious reasons, focuses on the processes that depend on small molecules and their binding sites (thus the preponderance of receptor ligands and enzyme inhibitors), but small molecules are only part of the story in there.
And we've learned a fair amount about all this protein-protein deal-making, but there's clearly a lot that we don't understand at all. If we did, perhaps we'd have more compounds that can target them. Here's a very basic topic about which we know very little: how tight are the affinities between all these interacting proteins? What's the usual level, and what's the range? What does the variation in binding constants say about the signaling pathways involved, and the sorts of binding surfaces that are being presented? How long do these protein complexes last? How weak can one of these interactions be, and still be physiologically important?
A new paper has something to say about that last part. The authors have found a bacterial system where protein phosphorylation takes place effectively although the affinity between the two partners (KD) is only around 25 millimolar. That's very weak indeed - for those outside of drug discovery, small-molecule drug affinities are typically well over a million times that level. We don't know how common or important such weak interactions are, but this work suggests that we're going to have to look pretty far up the scale in order to understand things, and that's probably going to require new technologies to quantify such things. Unless we figure out that huge, multipartner protein dance that's going on, with all its moves and time signatures, we're not going to understand biochemistry. The Labanotation for a cell would be something to see. . .
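The simple one-site binding equation shows why a 25 mM Kd isn't as hopeless as it sounds: the fraction of target in complex depends on the partner's concentration relative to Kd, and inside a crowded cell, local concentrations of colocalized proteins can reach the millimolar range. A sketch with illustrative concentrations:

```python
def fraction_bound(partner_conc_mm, kd_mm):
    """One-site occupancy: fraction of target bound = [P] / (Kd + [P])."""
    return partner_conc_mm / (kd_mm + partner_conc_mm)

# At the micromolar-and-below exposures drug hunters live with,
# a 25 mM Kd gives essentially no complex at all...
print(fraction_bound(0.001, 25.0))  # ~4e-5: effectively nothing bound
# ...but at millimolar local concentrations of a colocalized partner,
# a working fraction of complex forms, enough to run a phosphorylation.
print(fraction_bound(5.0, 25.0))    # ~0.17: roughly one target in six engaged
```

That's the quantitative reason weak interactions like this one can still carry real signaling traffic, and why finding them will take methods that can see transient, sparsely populated complexes.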
+ TrackBacks (0) | Category: Biological News | Chemical Biology