About this Author
College chemistry, 1983
The 2002 Model
After 10 years of blogging. . .
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: email@example.com
March 30, 2012
Back in 2009 I wrote about a paper that found a number of small (and ugly) molecules which affected the Hedgehog signaling pathway. At the time, I asked if anyone had done any selectivity studies with them, or looked for any SAR around them, because they didn't look very promising to me.
I'm glad to report that there's a follow-up from the same lab, and it's a good one. They've spent the last two years chasing these things down, and it appears that one series (the HPI-4 compound in that first link, which is open-access) really does have a specific molecular target (dynein).
There are a number of good experiments in the paper showing how they narrowed that down, and the whole thing is a good example of just how granular cellular biology can get: this pathway out of thousands, that particular part of the process, which turns out to be this protein because of the way it interacts in defined ways with a dozen others, and moreover, this particular binding site on that one protein. It's worth reading to see how they chased all this down, but I'll take you right to the ending and say that it's the ATP-binding site on dynein that looks like the target.
Collectively, these results indicate that ciliobrevins are specific, reversible inhibitors of disparate cytoplasmic dynein-dependent processes. Ciliobrevins do not perturb cellular mechanisms that are independent of dynein function, including actin cytoskeleton organization and the mitogen-activated protein kinase and phosphoinositol-3-kinase signalling pathways. . .The compounds do not broadly target members of the AAA+ ATPase family either, as they have no effect on p97-dependent degradation of endoplasmic-reticulum-associated proteins or Mcm2–7-mediated DNA unwinding. . .Our studies establish ciliobrevins as the first small molecules known specifically to inhibit cytoplasmic dynein in vitro and in live cells.
So congratulations to everyone involved, at Stanford, Rockefeller, and Northwestern. These ciliobrevins are perfect examples of tool compounds. This is how academic science is supposed to work, and now we can perhaps learn things about dynein that no one has been able to learn yet, and that will be knowledge that no one can take away once we've learned it.
Category: Biological News
So Amylin really seems to have turned down a multi-billion dollar offer from Bristol-Myers Squibb. Who's going to buy them, if not BMS? Here's FiercePharma with a roundup.
Category: Business and Markets
Some of you may have seen this graduate student's comment in the Chronicle of Higher Education on his neuroscience PhD. He's worried about the job market, but takes the attitude that he can, in the end, do all sorts of things with his PhD. But what makes him so laid-back, I fear, is that he's not trying to make a career in the sciences:
To some people, this state of affairs has all the trappings of a pyramid scheme. Graduate schools and principal investigators take on too many students because they are inexpensive, work hard, and help to get papers published. At the same time, the graduate schools and investigators know full well that not all the students can move up the pyramid. In this view, the university is not an educator so much as a scientific sweatshop.
This all sounds like a horror story: Toil for years in obscurity, only to emerge from that dark tunnel onto a bridge to nowhere. But as I plan to leave academe to return to a full-time writing career, it is clear to me that this seductive explanation of supply and demand does not jibe with my experience as a doctoral student in the sciences, which has been full of teachable moments that I know will benefit me regardless of the specific work I pursue.
Chemjobber has a very good post on all this, to the effect that (1) getting that degree was not without its costs, in money and (especially) time, and (2) for many of those alternative careers, a science PhD would not have been the most efficient path, to put it mildly. Check out his take and the comments he's attracted, and see what you think.
Category: Graduate School
March 29, 2012
Nature has a comment on the quality of recent publications in clinical oncology. And it's not a kind one:
Glenn Begley and Lee Ellis analyse the low number of cancer-research studies that have been converted into clinical success, and conclude that a major factor is the overall poor quality of published preclinical data. A warning sign, they say, should be the “shocking” number of research papers in the field for which the main findings could not be reproduced. To be clear, this is not fraud — and there can be legitimate technical reasons why basic research findings do not stand up in clinical work. But the overall impression the article leaves is of insufficient thoroughness in the way that too many researchers present their data.
The finding resonates with a growing sense of unease among specialist editors on this journal, and not just in the field of oncology. Across the life sciences, handling corrections that have arisen from avoidable errors in manuscripts has become an uncomfortable part of the publishing process.
I think that this problem has been with us for quite a while, and that there are a few factors making it more noticeable: more journals to publish in, for one thing, and increased publication pressure, for another. And the online availability of papers makes it easier to compare publications and to call them up quickly; things don't sit on the shelf in quite the way that they used to. But there's no doubt that a lot of putatively interesting results in the literature are not real. To go along with that link, the Nature article itself referred to in that commentary has some more data:
Over the past decade, before pursuing a particular line of research, scientists. . .in the haematology and oncology department at the biotechnology firm Amgen in Thousand Oaks, California, tried to confirm published findings related to that work. Fifty-three papers were deemed 'landmark' studies. . . It was acknowledged from the outset that some of the data might not hold up, because papers were deliberately selected that described something completely new, such as fresh approaches to targeting cancers or alternative clinical uses for existing therapeutics. Nevertheless, scientific findings were confirmed in only 6 (11%) cases. Even knowing the limitations of preclinical research, this was a shocking result.
Of course, the validation attempts may have failed because of technical differences or difficulties, despite efforts to ensure that this was not the case. Additional models were also used in the validation, because to drive a drug-development programme it is essential that findings are sufficiently robust and applicable beyond the one narrow experimental model that may have been enough for publication. To address these concerns, when findings could not be reproduced, an attempt was made to contact the original authors, discuss the discrepant findings, exchange reagents and repeat experiments under the authors' direction, occasionally even in the laboratory of the original investigator. These investigators were all competent, well-meaning scientists who truly wanted to make advances in cancer research.
So what leads to these things not working out? Often, it's trying to run with a hypothesis, and taking things faster than they can be taken:
In studies for which findings could be reproduced, authors had paid close attention to controls, reagents, investigator bias and describing the complete data set. For results that could not be reproduced, however, data were not routinely analysed by investigators blinded to the experimental versus control groups. Investigators frequently presented the results of one experiment, such as a single Western-blot analysis. They sometimes said they presented specific experiments that supported their underlying hypothesis, but that were not reflective of the entire data set. . .
This can rise, on occasion, to the level of fraud, but it's not fraud if you're fooling yourself, too. Science is done by humans, and it's always going to have a fair amount of slop in it. The same issue of Nature, as fate would have it, has a good example of irreproducibility this week. Sanofi's PARP inhibitor iniparib wiped out in Phase III clinical trials not long ago, after having looked good in Phase II. It now looks as if the compound was (earlier reports notwithstanding) never much of a PARP1 inhibitor at all. (Since one of these papers is from Abbott, you can see that doubts had already arisen elsewhere in the industry).
That's not the whole story with PARP - AstraZeneca had a real inhibitor, olaparib, fail on them recently, so there may well be a problem with the whole idea. But iniparib's mechanism-of-action problems certainly didn't help to clear anything up.
Begley and Ellis call for tightening up preclinical oncology research. There are plenty of cell experiments that will not support the claims made for them, for one thing, and we should stop pretending that they do. They also would like to see blinded protocols followed, even preclinically, to try to eliminate wishful thinking. That's a tall order, but it doesn't mean that we shouldn't try.
Update: here's more on the story. Try this quote:
Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.
"We went through the paper line by line, figure by figure," said Begley. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."
Category: Cancer | Drug Assays | The Scientific Literature
I would not like to count the number of times I've said bad things here about pharma mergers. The best of them, as far as I can see, have been not all that harmful, and the worst have been near-disasters. And I've been especially hard on Pfizer, the undisputed M&A kings of the industry over the last twenty years.
And now that great big financial wheel is turning. Huge mergers and acquisitions appear, finally, to be going out of favor. Since there are no longer reputations to be made, bold visions to be realized, and (don't forget) massive fees to be earned by implementing such moves, the latest word is: breaking up. Spinning off. Leaner, meaner, core businesses, unlocking value, more focus, back to what they do best . .you can write the Wall Street reports as well as I can. Goldman Sachs is out this week making the breathless case for Pfizer doing just this.
I suspect we're in for years of this sort of thing, with Abbott's spinoff of their pharma business now looking like the starter's pistol going off. (They named the new company AbbVie, which I hope they didn't pay someone too much to think up, at the same time disappointing the legions of fans who wanted to see it named Costello). Get ready for a long cycle of devolution.
Category: Business and Markets
March 28, 2012
A recent discussion with colleagues turned around the question: "Would you rather succeed ugly or fail gracefully?" In drug discovery terms, that could be rephrased "Would you rather get a compound through the clinic after wrestling with a marginal structure, worrying about tox, having to fix the formulation three times, and so on, or would you rather work on something that everyone agrees is a solid target, with good chemical matter, SAR that makes sense, leading to a potent, selective, clean compound that dies anyway in Phase II?"
I vote for option number one, if those are my choices. But here's the question at the heart of a lot of the debates about preclinical criteria: do more programs like that die, or do more programs like option number two die? I tend to think that way back early in the process, when you're still picking leads, you're better off with non-ugly chemical matter. We're only going to make it bigger and greasier, so start with as pretty a molecule as you can. But as things go on, and as you get closer to the clinic, you have to face up to the fact that no matter how you got there, no one really knows what's going to happen once you're in humans. You don't really know if your mechanism is correct (Phase II), and you sure don't know if you're going to see some sort of funny tox or long-term effect (Phase III). The chances of those are still higher if your compound is exceptionally greasy, so I think that everyone can agree that (other things being equal) you're better off with a lower logP. But what else can you trust? Not much.
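Those "bigger and greasier" worries usually get checked against Lipinski-style property cutoffs. Here's a minimal sketch of such a filter; the cutoffs are the classic rule-of-five numbers, but the two candidate compounds and their property values are invented purely for illustration:

```python
# Toy sketch of a Lipinski-style "drug-likeness" filter.
# The cutoffs are the standard rule-of-five values; the candidate
# property numbers below are made up for illustration.

def rule_of_five_flags(mw, clogp, h_donors, h_acceptors):
    """Return the rule-of-five criteria this compound violates."""
    flags = []
    if mw > 500:
        flags.append("molecular weight > 500")
    if clogp > 5:
        flags.append("cLogP > 5")
    if h_donors > 5:
        flags.append("H-bond donors > 5")
    if h_acceptors > 10:
        flags.append("H-bond acceptors > 10")
    return flags

# A lean starting lead versus the bloated analog it tends to become:
lead = rule_of_five_flags(mw=320, clogp=2.1, h_donors=2, h_acceptors=4)
analog = rule_of_five_flags(mw=610, clogp=5.8, h_donors=3, h_acceptors=9)
print(lead)    # []
print(analog)  # ['molecular weight > 500', 'cLogP > 5']
```

These cutoffs are heuristics, not laws, which is exactly the point of the post: plenty of marketed drugs would trip every one of these flags.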
The important thing is getting into the clinic, because that's where all the big questions are answered. And it's also where the big money is spent, so you have to be careful, on the other side of the equation, and not just shove all kinds of things into humans. You're going to run out of time and cash, most likely, before something works. But if you kill everything off before it gets that far, you're going to run out of both of those, too, for sure. You're going to have to take some shots at some point, and those will probably be with compounds that are less than ideal. A drug is a biologically active chemical compound that has things wrong with it.
There's another component to that "fail gracefully" idea, though, and it's a less honorable one. In a large organization, it can be to a person's advantage to make sure that everything's being done in the approved way, even if that leads off the cliff eventually. At least that way you can't be blamed, right? So you might not think that an inhibitor of Target X is such a great idea, but the committee that proposes new targets does, so you keep your head down. And you may wonder about the way the SAR is being prosecuted, but the official criteria say that you have to have at least so much potency and at least so much selectivity, so you do what you have to to make the cutoffs. And on it goes. In the end, you deliver a putative clinical candidate that may not have much of a chance at all, but that's not your department, because all the boxes got checked. More to the point, all the boxes were widely seen to be checked. So if it fails, well, it's just one of those things. Everyone did everything right, everyone met the departmental goals: what else can you do?
This gets back to the post the other day on unlikely-looking drug structures. There are a lot of them; I'll put together a gallery soon. But I think it's important to look these things over, and to realize that every one of them is out there on the market. They're on the pharmacy shelves because someone had the nerve to take them into the clinic, because someone was willing to win with an ugly compound. Looking at them, I realize that I would have crossed off billions of dollars just because I didn't feel comfortable with these structures, which makes me wonder if I haven't been overvaluing my opinion in these matters. You can't get a drug on the market without offending someone, and it may be you.
Category: Drug Development | Life in the Drug Labs
March 27, 2012
We've all been hearing for a while about "virtual biotechs". The term usually refers to a company with only a handful of employees and no real laboratory space of its own. All the work is contracted out. That means that what's left back at the tiny headquarters (which in a couple of cases is as small as one person's spare bedroom) is the IP. What else could it be? There's hardly any physical property at all. It's as pure a split as you can get between intellectual property (ideas, skills, actual patents) and everything else. Here's a 2010 look at the field in San Diego, and here's a more recent look from Xconomy. (I last wrote about the topic here).
Obviously, this gets easier to do earlier in the whole drug development process, where less money is involved. That said, there are difficulties at both ends. A large number of these stories seem to involve people who were at a larger company when it ran out of money, but still had some projects worth looking at. The rest of the cases seem to come out of academia. In other words, the ideas themselves (the key part of the whole business) were generated somewhere with more infrastructure and funding. Trying to get one of these off the ground otherwise would be a real bootstrapping problem.
And at the other end of the process, getting something all the way through the clinic like this also seems unlikely. The usual end point is licensing out to someone with more resources, as this piece from Xconomy makes clear:
In the meantime, one biotech model gaining traction is the single asset, infrastructure-lite, development model, which deploys modest amounts of capital to develop a single compound to an early clinical data package which can be partnered with pharma. The asset resides within an LLC, and following the license transaction, the LLC is wound down and distributes the upfront, milestone and royalty payments to the LLC members on a pro rata basis. The key to success in this model is choosing the appropriate asset/indication – one where it is possible to get to a clinical data package on limited capital. This approach excludes many molecules and indications often favored by biotech, and tends to drive towards clinical studies using biomarkers – directly in line with one of pharma’s favored strategies.
This is a much different model, of course, than the "We're going to have an IPO and become our own drug company!" one. But the chances of that happening have been dwindling over the years, and the current funding environment makes it harder than ever, Verastem aside. It's even a rough environment to get acquired in. So licensing is the more common path, and (as this FierceBiotech story says), that's bound to have an effect on the composition of the industry. People aren't holding on to assets for as long as they used to, and they're trying to get by with as little of their own money as they can. Will we end up with a "field of fireflies" model, with dozens, hundreds of tiny companies flickering on and off? What will the business look like after another ten years of this - better, or worse?
Category: Business and Markets | Chemical News | Drug Development | Drug Industry History
March 26, 2012
I was having one of those "drug-like properties" discussions with colleagues the other day. Admittedly, if you're not in drug discovery yourself, you probably don't have that one very often, but even for us, you'd think that a lot of the issues would be pretty settled by now. Not so.
While everyone broadly agrees that compounds shouldn't be too large or too greasy, where one draws the line is always up for debate. And the arguments get especially fraught in the earlier stages of a project, when you're still deciding on what chemical series to work on. One point of view (the one I subscribe to) says that almost every time, the medicinal chemistry process is going to make your compound larger and greasier, so you'd better start on the smaller and leaner side to give everyone room to work in. But sometimes, Potency Rules, at least for some people and in some organizations, and there's a lead which might be stretching some definitions but is just too active to ignore. (That way, in my experience, lies heartbreak, but there are people who've made successes out of it).
We've argued these same questions here before, more than once. What I'm wondering today is, what's the least drug-like drug that's made it? It's dangerous to ask that question, in a way, because it gives some people what they see as a free pass to pursue ugly chemical matter - after all, Drug Z made it, so why not this one? (That, to my mind, ignores the ars longa, vita brevis aspect: since there's an extra one-in-a-thousand factor with some compounds, given the long odds already, why would you make them even longer?)
But I think it's still worth asking the question, if we can think of what extenuating circumstances made some of these drugs successful. "Sure, your molecular weight isn't as high as Drug Z, which is on the market, but do you have Drug Z's active transport/distribution profile/PK numbers in mice? If not, just why do you think you're going to be so lucky?"
Antibiotics are surely going to make up some of the top ten candidates - some of those structures are just bizarre. There's a fairly recent oncology drug that I think deserves a mention for its structure, too. Anyone have a weirder example of a marketed drug?
What's still making its way through the clinic can be even stranger-looking. Some of the odder candidates I've seen recently have been for the hepatitis C proteins NS5A and NS5B. Bristol-Myers Squibb has disclosed some eye-openers, such as BMS-790052. (To be fair, that target seems to really like chemical matter like this, and the compound, last I heard, was moving along through the clinic.)
And yesterday, as Carmen Drahl reported from the ACS meeting in San Diego, the company disclosed the structure of BMS-791325, a compound targeting NS5B. That's a pretty big one, too - the series it came from started out reasonably, then became not particularly small, and now seems to have really bulked up, and for the usual reasons - potency and selectivity. But overall, it's a clear example of the sort of "compound bloat" that overtakes projects as they move on.
So, nominations are open for three categories: Ugliest Marketed Drug, Ugliest Current Clinical Candidate, and Ugliest Failed Clinical Candidate. Let's see how bad it gets!
Category: Drug Development | Drug Industry History
March 23, 2012
I wanted to mention this news, since it's really the most wildly advanced form of "personalized medicine" that the world has yet seen. As detailed in this paper, Stanford professor Michael Snyder spent months taking multiple, powerful, wide-ranging looks at his own biochemistry: genomic sequences, metabolite levels, microRNAs, gene transcripts, pretty much the whole expensive high-tech kitchen sink. No one's ever done this to one person over an extended period - heck, until the last few years, no one had even been able to do this - so Snyder and the team were interested to see what might come up. A number of odd things did:
Snyder had a cold at the first blood draw, which allowed the researchers to track how a rhinovirus infection alters the human body in perhaps more detail than ever before. The initial sequencing of his genome had also showed that he had an increased risk for type 2 diabetes, but he initially paid that little heed because he did not know anyone in his family who had had the disease and he himself was not overweight. Still he and his team decided to closely monitor biomarkers associated with the diabetes, including insulin and glucose pathways. The scientist later became infected with respiratory syncytial virus, and his group saw that a sharp rise in glucose levels followed almost immediately. "We weren't expecting that," Snyder says. "I went to get a very fancy glucose metabolism test at Stanford and the woman looked at me and said, 'There's no way you have diabetes.' I said, 'I know that's true, but my genome says something funny here.' "
A physician later diagnosed Snyder with type 2 diabetes, leading him to change his diet and increase his exercise. It took 6 months for his glucose levels to return to normal. "My interpretation of this, which is not unreasonable, is that my genome has me predisposed to diabetes and the viral infection triggered it," says Snyder, who acknowledges that no known link currently exists between type 2 diabetes and infection.
There may well be a link, but it may well also only be in Michael Snyder. Or perhaps in him and the (x) per cent of the population that share certain particular metabolic and genomic alignments with him. Since this is an N of 1 experiment if ever there was one, we really have no idea. It's a safe bet, though, that as this sort of thing is repeated, we'll find all sorts of unsuspected connections. Some of these connections, I should add, will turn out to be spurious nonsense, noise and artifacts, but we won't know which are which until a lot of people have been studied for a long time. By "lot" I really mean "many, many thousands" - think of how many people we need to establish significance in a clinical trial for something subtle. Now, what if you're looking at a thousand subtle things all at once? The statistics on this stuff will eat you (and your budget) alive.
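To see why the statistics get so brutal, here's a quick toy simulation (my own sketch, not any particular study's analysis): test a thousand biomarkers that have no real effect at all, and count how many clear p < 0.05 by chance alone:

```python
# Back-of-the-envelope illustration of the multiple-comparisons
# problem: 1,000 null biomarkers, each tested at alpha = 0.05.
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def false_positives(n_markers, alpha):
    # Under the null hypothesis a p-value is uniform on [0, 1],
    # so each marker clears the threshold with probability alpha.
    return sum(1 for _ in range(n_markers) if random.random() < alpha)

hits = false_positives(n_markers=1000, alpha=0.05)
print(hits)  # around 50 spurious "significant" markers, on average

# A Bonferroni-style correction divides alpha by the number of tests:
corrected = false_positives(n_markers=1000, alpha=0.05 / 1000)
print(corrected)  # almost always 0
```

Of course the corrected threshold has a price: with that much stringency, you need far larger sample sizes to detect the connections that are real, which is exactly where the budget gets eaten.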
But all of these technologies are getting cheaper. It's not around the corner, but I can imagine a day when people have continuous blood monitoring of this sort, a constant metabolic/genomic watchdog application that lets you know how things are going in there. Keep in mind, though, that I have a very lively imagination. I don't expect this (for better or worse) in my own lifetime. The very first explorers are just hacking their way into thickets of biochemistry larger and more tangled than the Amazon jungle - it's going to be a while before the shuttle vans start running.
Category: Biological News
Longtime readers will remember the interaction I had with Nativis Pharmaceuticals. That's the outfit that claims to be working with "drug signatures" instead of the drug molecules themselves. I found it interesting to see a company with all the outward trappings of a biotech startup that was spending its time (and its investors' money) on what sounded to me like the next thing to homeopathy. "Unique photon fields" of drug molecules? "Photon payloads" from "imprinted coherence domains"? You don't run across this sort of thing every day - well, not where I work.
When I said so, though, I got to hear from the company's chief legal counsel, and we ended up trading helpful advice. (Well, I thought my advice to him was helpful, at any rate). We have not crossed paths since.
But readers are reporting that the company's San Diego site appears to be emptied out, so I decided to check up on them. As it happens, they do not seem to be out of business: they're just moving to Seattle. Now, why the company seems to have pulled back its presence on its own web site, and on LinkedIn, etc., I don't know. And whatever happened to the publications that they were planning, and to their IND, I don't know, either. But at least as of last fall, they were a going concern. If any readers up in the Northwest hear some news, please pass it on!
Category: Snake Oil
March 22, 2012
No time to update today, unfortunately - a blizzard of Real Science (TM) is keeping things hopping, and my new laptop died on the train this morning during my usual blogging time.
So I'll use this chance to throw out a question: what topics haven't come up around here lately that we should be talking about? Blogfodder comes in around here pretty regularly, and I have a few topics in the queue, but I want to be sure not to miss out on interesting stuff. Thanks, and I'll see everyone tomorrow. . .
Category: Blog Housekeeping
March 21, 2012
I wrote here about the Mayo v. Prometheus case, which dealt with patents on the use of thiopurines for autoimmune therapy. But the patents didn't claim any thiopurine drugs themselves. Or their specific use for autoimmune therapy. Or vehicles to administer them in, or methods for their manufacture, or techniques to package them. Nothing as reasonable as any of that.
No, these patents broke new ground. The problem is, the thiopurines are metabolized rather quickly, and to different degrees in different patients. That can make it tricky to know if someone's getting the right dosage - too much is bad, too little is bad. The Prometheus patents comprise these steps:
1. Telling a doctor to administer the drug to a patient.
2. Telling the doctor to measure the metabolite levels in their blood after dosing.
3. Describing the upper and lower acceptable bounds for these metabolites, and telling the doctor that these indicate a need to raise or lower the dosage.
There! That wasn't so hard, was it? I railed against this ridiculous idea at the time, which is tantamount to trying to patent the entire practice of medicine, step by step. (I know! I'll patent the idea of having a hypothesis, testing it by experiments where I manipulate individual variables, and then revising my hypothesis for the next round based on the results! Step three: profit!)
Fortunately for everyone's sanity, the Supreme Court has put the brakes on this stuff. Here comes the voice of reason:
Because the laws of nature recited by Prometheus’ patent claims—the relationships between concentrations of certain metabolites in the blood and the likelihood that a thiopurine drug dosage will prove ineffective or cause harm—are not themselves patentable, the claimed processes are not patentable unless they have additional features that provide practical assurance that the processes are genuine applications of those laws rather than drafting efforts designed to monopolize the correlations. . .
. . .This Court has repeatedly emphasized a concern that patent law not inhibit future discovery by improperly tying up the use of laws of nature and the like. See, e.g., Benson, 409 U. S., at 67, 68. Rewarding with patents those who discover laws of nature might encourage their discovery. But because those laws and principles are “the basic tools of scientific and technological work,” id., at 67, there is a danger that granting patents that tie up their use will inhibit future innovation, a danger that becomes acute when a patented process is no more than a general instruction to “apply the natural law,” or otherwise forecloses more future invention than the underlying discovery could reasonably justify. . .
Yes, yes, yes. This is Justice Breyer's opinion for the unanimous court (he asked the most questions during the oral arguments), and it's absolutely correct and a great pleasure to read. Let's hope we see no more of this nonsense. (Which means, I guess, that we'll just move on to fresh nonsense instead, but patent law is ever fruitful).
Category: Patents and IP
Eli Lilly is a drug company with a lot of problems - check out this chart for their patent expiration woes, which are probably the worst in the industry. But they're trying to make it up overseas, as this news shows:
CEO John Lechleiter wants Lilly to be the fastest-growing pharma company in China. To accomplish this goal, he will concentrate on diabetes and cancer over the next 5 years as the company introduces more than a dozen drugs to the market, according to a Bloomberg report.
The drugmaker has made major strides in China and has doubled its sales force there. Its efforts seem to have paid off: Sales in China grew 25% last year--that's faster than the industry average. And with diabetes rates rising there, Lilly might have an edge with its diabetes portfolio.
A lot of companies (and not just in the drug industry) are hoping for the Chinese market to save them, and in some cases, it'll happen. But since all our assets in pharma are wasting ones (patent expirations!), it doesn't do you much long-term good if you're not discovering new drugs quickly enough. Then again, "long term" has a different definition these days - "next couple of years" is probably about as good as any CEO in the business can hope for, and perhaps the China sales can cushion the blows a bit for Lilly. But I still think that it only moves them from "in hideous trouble" to "in very bad trouble".
+ TrackBacks (0) | Category: Business and Markets
March 20, 2012
There's more news in the area of looking at what a cancer really is, cell by cell. This topic has come up here before, and the newer sequencing technologies are going to make it a bigger and bigger deal.
This latest paper (in the NEJM) looked at samples from four patients with metastatic renal cell carcinoma (RCC). (Here are a couple of summaries if you don't have access). In the first patient, they sampled the primary tumor and a metastatic tumor from the chest wall after surgery. These were then divided into zones, and deep sequencing was done on the samples. Consistent with earlier work, they found a lot of heterogeneity:
(We) classified the remaining 128 mutations into 40 ubiquitous mutations, 59 mutations shared by several but not all regions, and 29 mutations that were unique to specific regions (so-called private mutations) that were present in a single region. We subdivided shared mutations into 31 mutations shared by most of the primary tumor regions of the nephrectomy specimen (R1 to R3, R5, and R8 to R9), pretreatment biopsy samples of the primary tumor, and 28 mutations shared by most of the metastatic regions. The detection of private mutations suggested ongoing regional clonal evolution.
A tumor, in other words, is a war zone of mutated cells. It's not so much that a single cell goes rogue and spreads out everywhere. It's that the conditions that allow a cell to become cancerous are conducive to further genetic instability, leading to a competition of different branches and mutant families within what might appear to be a single tumor sample. A single biopsy is not enough to tell you what's going on. The metastatic tumors, as you'd expect, tended to be derived from particular lineages that were more likely to break loose and spread, and then they continued to evolve in their new locations. But the nastiest cells win, and sometimes they end up looking rather similar:
Despite genetic divergence during tumor progression, phenotypic convergent evolution occurs, indicating a high degree of mutational diversity, a substrate for Darwinian selection, and evolutionary adaptation.
This sort of thing is making the earlier attempts at finding cancer biomarkers look rather naive. Not only is cancer not a single disease, and not only is a single type of cancer not a single type of cancer, but individual patients contain a multitude of different cancerous cell lines, which vary by location. We're going to have to do a lot more work to understand what's going on in there - a lot more samples, a lot more sequencing, and a lot more thought about what it all means. Personalized medicine is getting a lot more personal than we thought: cell by cell.
+ TrackBacks (0) | Category: Cancer
March 19, 2012
Update: fixed link in first paragraph - sorry!
According to this article, and some others like it over the last few years, we are. Sale of the US government's strategic helium reserve lowered prices, which led to increased use, which now appears to be leading to shortages. (If you want to see a distorted market, look no further).
I'm no expert in this field, but my guess is that we're mainly running out of cheap helium. Continued oil and natural gas exploration should reveal more of it, but just as those petrochemicals won't be cheaper, helium won't be, either. And come to think of it, I'm not sure how much helium is to be found in shale gas and the like, as opposed to traditional formations. Some quick Googling suggests that shale is too porous to contain much helium, which I can well believe.
So prepare to pay even more to keep those NMR magnets running - it's hard to imagine that it'll ever get cheaper than it's been. Peak Oil, I'm not so sure about, but Peak Helium may have already been realized. . .
+ TrackBacks (0) | Category: General Scientific News
So how do we deal with the piles of data? A reader sent along this question, and it's worth thinking about. Drug research - even the preclinical kind - generates an awful lot of information. The other day, it was pointed out that one of our projects, if you expanded everything out, would be displayed on a spreadsheet with compounds running down the left, and over two hundred columns stretching across the page. Not all of those are populated for every compound, by any means, especially the newer ones. But compounds that stay in the screening collection tend to accumulate a lot of data with time, and there are hundreds of thousands (or millions) of compounds in a good-sized screening collection. How do we keep track of it all?
Most larger companies have some sort of proprietary software for the job (or jobs). The idea is that you can enter a structure (or substructure) of a compound and find out the project it was made for, every assay that's been run on it, all its spectral data and physical properties (experimental and calculated), every batch that's been made or bought (and from whom and from where, with notebook and catalog references), and the bar code of every vial or bottle of it that's running around the labs. You obviously don't want all of those every time, so you need to be able to define your queries over a wide range, setting a few common ones as defaults and customizing them for individual projects while they're running.
Displaying all this data isn't trivial, either. The good old fashioned spreadsheet is perfectly useful, but you're going to need the ability to plot and chart in all sorts of ways to actually see what's going on in a big project. How does human microsomal stability relate to the logP of the right-hand side chain in the pyrimidinyl-series compounds with molecular weight under 425? And how do those numbers compare to the dog microsomes? And how do either of those compare to the blood levels in the whole animal, keeping in mind that you've been using two different dosing vehicles along the way? To visualize these kinds of questions - perfectly reasonable ones, let me tell you - you'll need all the help you can get.
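For a concrete (and entirely invented) sense of the kind of query involved, here's a minimal Python sketch. The compound IDs, column names, and cutoffs are hypothetical placeholders, not taken from any real system:

```python
# Toy sketch of the kind of default query a project team runs constantly.
# All compound IDs, column names, and cutoffs here are invented for
# illustration -- they do not come from any real compound database.

compounds = [
    {"id": "CMP-001", "series": "pyrimidinyl", "mw": 412, "logp": 2.1,
     "hum_mic_t12_min": 34, "dog_mic_t12_min": 21},
    {"id": "CMP-002", "series": "pyrimidinyl", "mw": 431, "logp": 3.4,
     "hum_mic_t12_min": 12, "dog_mic_t12_min": 9},
    {"id": "CMP-003", "series": "oxazole", "mw": 398, "logp": 1.8,
     "hum_mic_t12_min": 55, "dog_mic_t12_min": 40},
]

def query(rows, series=None, max_mw=None):
    """Filter the compound table the way a saved project query might."""
    out = rows
    if series is not None:
        out = [r for r in out if r["series"] == series]
    if max_mw is not None:
        out = [r for r in out if r["mw"] < max_mw]
    return out

# "Pyrimidinyl series, molecular weight under 425" from the example above
hits = query(compounds, series="pyrimidinyl", max_mw=425)

# Relate human microsomal stability to logP within the filtered set
pairs = [(r["logp"], r["hum_mic_t12_min"]) for r in hits]
print(pairs)  # [(2.1, 34)] -- only CMP-001 survives both filters
```

Real systems layer structure and substructure search, batch tracking, and charting on top of this, but the core operation is the same: filter on a handful of those two hundred columns, then look at the relationships among a few others.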
You run into the problem of any large, multifunctional program, though: if it can do everything, it may not do any one thing very well. Or there may be a way to do whatever you want, if only you can memorize the magic spell that will make it happen. If it's one of those programs that you have to use constantly or run the risk of totally forgetting how it goes, there will be trouble.
So what's been the experience out there? In-house home-built software? Adaptations of commercial packages? How does a smaller company afford to do what it needs to do? Comments welcome. . .
+ TrackBacks (0) | Category: Drug Assays | Drug Development | Life in the Drug Labs
March 16, 2012
So the news is that Merck is now going to start its own nonprofit drug research institute in San Diego: CALIBR, the California Institute for Biomedical Research. It'll be run by Peter Schultz of Scripps, and they're planning to hire about 150 scientists (which is good news, anyway, since the biomedical employment picture out in the San Diego area has been grim).
Unlike the Centers for Therapeutic Innovation that Pfizer, a pharmaceutical company based in New York, has established in collaboration with specific academic medical centres around the country, Calibr will not be associated with any particular institution. (Schultz, however, will remain at Scripps.) Instead, academics from around the world can submit research proposals, which will then be reviewed by a scientific advisory board, says Kim. The institute itself will be overseen by a board of directors that includes venture capitalists. Calibr will not have a specific therapeutic focus.
Merck, meanwhile, will have the option of an exclusive licence on any proteins or small-molecule therapeutics to emerge. . .
They're putting up $90 million over the next 7 years, which isn't a huge amount. It's not clear if they have any other sources of funding - they say that they'll "access" such, but I have to wonder, since that would presumably complicate the IP for Merck. It's also not clear what they'll be working on out there; the press release is, well, a press release. The general thrust is translational research, a roomy category, and they'll be taking proposals from academic labs who would like to use their facilities and expertise.
So is this mainly a way for Merck to do more academic collaborations without the possible complications (for universities) of dealing directly with a drug company? Will it preferentially take on high-risk, high-reward projects? There's too little to go on yet. Worth watching with interest as it gets going - and if any readers find themselves interviewing there, please report back!
+ TrackBacks (0) | Category: Academia (vs. Industry) | Business and Markets | Drug Development
As has been noted here in the comments sections, the RSS feeds of Elsevier's journals have been hosed, in various ways, for some time now. Things don't come through, or they don't come through correctly, or they're duplicated, or you get abstracts from journals that you never heard of. How many people - those of you who read journals via RSS - are experiencing these problems? And has anyone gotten Elsevier to respond to any complaints?
+ TrackBacks (0) | Category: The Scientific Literature
March 15, 2012
Here's a study that suggests that there are a lot more drug-drug interactions than we've ever recognized. (If you don't have access to Science Translational Medicine, here's a summary from Nature News).
Postmarketing surveillance yields huge piles of data that could potentially be mined for such, but it's a messy and heterogeneous pile. This study tries to correct for some of the confounding variables, by attempting to match each patient with a non-treated control patient with as many similarities as possible. They do look to have fished some useful correlations out that no one had ever observed before. For example, selective serotonin reuptake inhibitors (SSRIs) given with thiazide diuretics seem to be associated with a notably greater risk of the cardiac side effect of QT prolongation, which is a new one.
And while that's good news, you can't help but bury your head in your hands for at least a bit. It turns out that the average number of side effects listed on a full drug label is 69. That might seem to be quite enough, thanks, but this study suggests that about 329 different adverse effects per drug might be more accurate.
+ TrackBacks (0) | Category: Regulatory Affairs | Toxicology
We've spent a lot of time here talking about provisional approval of drugs, most specifically Avastin (when its approval for metastatic breast cancer was pulled). But the idea isn't to put drugs on the market that have to be taken back; it's to get them out more quickly in case they actually work.
There's legislation (the TREAT Act) that is attempting to extend the range of provisional approvals. But according to this column by Avik Roy, an earlier version of the bill went much further: it would have authorized new approval pathways for the first drugs to treat specific subpopulations of an existing disease, nonresponders to existing therapies, compounds with demonstrable improvements in safety or efficacy, or (in general) compounds that "otherwise satisfy an unmet medical need". As with the existing accelerated approval process, drugs under these categories could (after negotiation with the FDA) be provisionally marketed after Phase II results, if those were convincing enough, with possible revocation after Phase III results came in.
Unlike the various proposals to put compounds on the market after Phase I (which I fear would be an invitation to game the system), this one strikes me as aggressive but sensible. It would, ideally, encourage companies to run more robust Phase II trials in the hopes of going straight to the market, and it would allow really outstanding drugs a chance to start earning back their R&D costs much earlier. As long as everyone understood that Phase III trials are no slam dunk any more (if they ever were), and that some of these drugs would turn out not to be as good as they looked, I think that on balance, everyone would come out ahead.
According to Roy, this version of the bill had (as you'd expect) attracted strong backers and strong opponents. On the "pro" side was BIO, the biotech industry group, which is no surprise. On the "anti" side, the FDA itself wasn't ready for this big a change, which isn't much of a shock, either. (To be fair to them, this would increase their workload substantially - you'd really want to couple a reform like this with more people on their end). And there were advocacy groups that worried that this new regulatory regime would water down drug safety requirements too much. The article doesn't name any groups, but anyone who's observed the industry can fill in some likely names.
But there was another big group opposing the change: PhRMA. Yes, the trade organization for the large drug companies. Opinions vary as to the reason. The official explanations are that they, too, were concerned for patient safety, and they wanted the PDUFA legislation renewed as is, without these extra provisions (a "bird in the hand" argument). But Roy's piece advances a less charitable thesis:
Sen. Hagan’s proposal would have been devastating to the big pharma R&D oligopoly. If small biotech companies could get their drugs tentatively approved after inexpensive phase II studies, they would have far less need to partner those drugs with big pharma. They could keep the upside themselves and attract far more interest from investors. Big pharma, on the other hand, would be without its largest source for innovative new medicines: the small biotech farm team.
I'd like to be able to doubt this reasoning more than I do. . .
+ TrackBacks (0) | Category: Drug Development | Regulatory Affairs
March 14, 2012
There's an on-line appendix to that Nature Reviews Drug Discovery article that I've been writing about, and I don't think that many people have read it yet. Jack Scannell, one of the authors, sent along a note about it, and he's interested to see what the readership here makes of it.
It gets to the point that came up in the comments to this post, about the order that you do your screening assays in (see #55 and #56). Do you run everything through a binding assay first, or do you run things through a phenotypic assay first and then try to figure out how they bind? More generally, with either sort of assay, is it better to do a large random screen first off, or is it better to do iterative rounds of SAR from a smaller data set? (I'm distinguishing those two because phenotypic assays provide very different sorts of data density than do focused binding assays).
Statistically, there's actually a pretty big difference there. I'll quote from the appendix:
Imagine that you know all of the 600,000 or so words in the English language and that you are asked to guess an English word written in a sealed envelope. You are offered two search strategies. The first is the familiar ‘20 questions’ game. You can ask a series of questions. You are provided with a "yes" or "no" answer to each, and you win if you guess the word in the envelope having asked 20 questions or fewer. The second strategy is a brute force method. You get 20,000 guesses, but you only get a "yes" or "no" once you have made all 20,000 guesses. So which is more likely to succeed, 20 questions or 20,000 guesses?
A skilled player should usually succeed with 20 questions (since 600,000 is less than 2^20) but would fail nearly 97% of the time with "only" 20,000 guesses.
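The arithmetic is easy to check: twenty perfect yes/no questions can distinguish 2^20 (about 1.05 million) possibilities, more than enough for 600,000 words, while 20,000 blind guesses at 600,000 equally likely words succeed only one time in thirty:

```python
words = 600_000          # approximate size of the English lexicon
guesses = 20_000

# 20 questions: each perfect yes/no answer halves the candidate set,
# so 20 questions distinguish 2**20 = 1,048,576 possibilities.
assert 2**20 > words     # a perfect player always wins

# Brute force: 20,000 distinct guesses out of 600,000 equally likely
# words succeed with probability 20,000/600,000, i.e. 1 in 30.
p_success = guesses / words
p_failure = 1 - p_success
print(f"brute-force failure rate: {p_failure:.1%}")  # 96.7%
```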
Our view is that the old iterative method of drug discovery was more like 20 questions, while HTS of a static compound library is more like 20,000 guesses. With the iterative approach, the characteristics of each molecule could be measured on several dimensions (for example, potency, toxicity, ADME). This led to multidimensional structure–activity relationships, which in turn meant that each new generation of candidates tended to be better than the previous generation. In conventional HTS, on the other hand, search is focused on a small and pre-defined part of chemical space, with potency alone as the dominant factor for molecular selection.
Aha, you say, but the game of twenty questions is equivalent to running perfect experiments each time: "Is the word a noun? Does it have more than five letters?" and so on. Each question carves up the 600,000 word set flawlessly and iteratively, and you never have to backtrack. Good experimental design aspires to that, but it's a hard standard to reach. Too often, we get answers that would correspond to "Well, it can be used like a noun on Tuesdays, but if it's more than five letters, then that switches to Wednesday, unless it starts with a vowel".
The authors try to address this multi-dimensionality with a thought experiment. Imagine chemical SAR space - huge number of points, large number of parameters needed to describe each point.
Imagine we have two search strategies to find the single best molecule in this space. One is a brute force search, which assays a molecule and then simply steps to the next molecule, and so exhaustively searches the entire space. We call this "super-HTS". The other, which we call the “Blackian demon” (in reference to the “Darwinian demon”, which is used sometimes to reflect ideal performance in evolutionary thought experiments, and in tribute to James Black, often acknowledged as one of the most successful drug discoverers), is equivalent to an omniscient drug designer who can assay a molecule, and then make a single chemical modification to step it one position through chemical space, and who can then assay the new molecule, modify it again, and so on. The Blackian demon can make only one step at a time, to a nearest neighbour molecule, but it always steps in the right direction; towards the best molecule in the space. . .
The number of steps for the Blackian demon follows from simple geometry. If you have a d dimensional space with n nodes in the space, and – for simplicity – these are arranged in a neat line, square, cube, or hypercube, you can traverse the entire space, from corner to corner with d x (n^(1/d)-1) steps. This is because each vertex is n nodes in length, and there are d vertices. . .When the search space is high dimensional (as is chemical space) and there is a very large number of nodes (as is the case for drug-like molecules), the Blackian demon is many orders of magnitude more efficient than super-HTS. For example, in a 10 dimensional space with 10^40 molecules, the Blackian demon can search the entire space in 10^5 steps (or less), while the brute force method requires 10^40 steps.
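It's worth plugging the appendix's own example numbers into that formula to see the size of the gap:

```python
# The appendix's example: a 10-dimensional grid holding 10**40 molecules,
# searched corner to corner with d * (n**(1/d) - 1) steps.
d = 10
n = 10**40

side = round(n ** (1 / d))        # nodes per edge: 10**4
demon_steps = d * (side - 1)      # corner-to-corner walk: 99,990
brute_force_steps = n             # exhaustive search: 10**40

print(demon_steps)                # 99990, just under the quoted 10**5
```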
These are idealized cases, needless to say. One problem is that none of us are exactly Blackian demons - what if you don't always make the right step to the next molecule? What if your iteration only gives one out of ten molecules that get better, or one out of a hundred? I'd be interested to see how that affects the mathematical argument.
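Here's one crude way to put a number on that, assuming (my assumption, not the paper's) that each attempted modification is an independent trial which actually improves the molecule with probability p. Each useful step then costs about 1/p assays on average:

```python
# Back-of-the-envelope for a fallible demon.  Assumption (mine, for
# illustration): each attempted modification improves the molecule with
# probability p, so each useful step costs ~1/p assays on average, and
# the expected total is ideal_steps / p.

d, n = 10, 10**40
ideal_steps = d * (round(n ** (1 / d)) - 1)   # 99,990 for the ideal demon

for p in (1.0, 0.1, 0.01):
    expected_assays = ideal_steps / p
    print(f"p = {p:>4}: ~{expected_assays:.0e} assays")
# Even a 1-in-100 hit rate needs only ~1e7 assays, nowhere near 1e40.
```

By this estimate, a fallible searcher stays dozens of orders of magnitude ahead of brute force: the penalty for imperfection is linear in 1/p, while the gap it has to erase is exponential in the dimensionality.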
And there's another conceptual problem: for many points in chemical space, the numbers are even much more sparse. One assumption with this thought experiment (correct me if I'm wrong) is that there actually is a better node to move to each time. But for any drug target, there are huge regions of flat, dead, inactive, un-assayable chemical space. If you started off in one of those, you could iterate until your hair fell out and never get out of the hole. And that leads to another objection to the ground rules of this exercise: no one tries to optimize by random HTS. It's only used to get starting points for medicinal chemists to work on, to make sure that they're not starting in one of those "dead zones". Thoughts?
+ TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History
March 13, 2012
India has decided to invoke compulsory licensing, and is approving a local generic company's application to make and sell Bayer's Nexavar (sorafenib).
I'm assuming that there's a political dimension to this that I'm not quite following. There must be something else going on between Bayer and the Indian authorities. Nexavar is indeed expensive, but (meaning no offense to the people who discovered it, whom I know), it's not the most necessary part of the oncology drug world, either. Anyone have more details?
The last time this issue came up around here was in 2007.
+ TrackBacks (0) | Category: Drug Prices | Regulatory Affairs
Last fall, when Verastem announced their initial public offering, I wondered about how such an early-stage company (in such a speculative area) could plausibly offer stock. Now Nate Sadeghi-Nejad at TheStreet.com wonders the same thing:
Biotech companies with drugs in much later stages of clinical development find it difficult to go public today, yet here was Verastem, with nary a single patient exposed to any of its drugs, selling 5.5 million shares to the public at $10 per share.
Forty days later, the minimum time period allowed by law, sell-side analysts from all five of the investment banks which took Verastem public issued glowing reports with buy ratings and price targets 50% to 100% above the current share price.
Well, this sort of thing does happen. I mean, just because an investment bank makes money off an IPO doesn't mean that it isn't just a terrific place to put your money. Right? That's because they do lots of research on these things. Right? Well, as Sadeghi shows, that research assigned a Probability of Success of 30% to Verastem's plan of finding cancer-stem-cell specific therapeutics. This in an environment where the clinical failure rate is worse than 90%, and these guys haven't even been to the clinic yet. Their lead compound is salinomycin, an ionophore antibiotic which has been shown in vitro to target tumor stem cells.
Now, that's a perfectly respectable high-risk project to take on, because it has a lot of potential to go along with the risk. But a thirty per cent chance of success? There is no preclinical oncology program in the world with a thirty per cent chance of success. That figure is laughable.
I don't wish bad fortune to Verastem - I hope that their compound works. And I don't wish bad things for their investors, although I hope that they're braced for some. We need new modes of action in cancer drugs; we need for things to work. But we also need to be honest with ourselves and with investors. Investment banks are not going to do that for you, though.
+ TrackBacks (0) | Category: Business and Markets | Cancer
March 12, 2012
I wanted to return to that Nature Reviews Drug Discovery article I blogged about the other day. There's one reason the authors advance for our problems that I thought was particularly well stated: what they call the "basic research/brute force" bias.
The ‘basic research–brute force’ bias is the tendency to overestimate the ability of advances in basic research (particularly in molecular biology) and brute force screening methods (embodied in the first few steps of the standard discovery and preclinical research process) to increase the probability that a molecule will be safe and effective in clinical trials. We suspect that this has been the intellectual basis for a move away from older and perhaps more productive methods for identifying drug candidates. . .
I think that this is definitely a problem, and it's a habit of thinking that almost everyone in the drug research business has, to some extent. The evidence that there's something lacking has been piling up. As the authors say, given all the advances over the past thirty years or so, we really should have seen more of an effect in the signal/noise of clinical trials: we should have had higher success rates in Phase II and Phase III as we understood more about what was going on. But that hasn't happened.
So how can some parts of a process improve dramatically, yet important measures of overall performance remain flat or decline? There are several possible explanations, but it seems reasonable to wonder whether companies industrialized the wrong set of activities. At first sight, R&D was more efficient several decades ago, when many research activities that are today regarded as critical (for example, the derivation of genomics-based drug targets and HTS) had not been invented, and when other activities (for example, clinical science, animal-based screens and iterative medicinal chemistry) dominated.
This gets us back to a topic that's come up around here several times: whether the entire target-based molecular-biology-driven style of drug discovery (which has been the norm since roughly the early 1980s) has been a dead end. Personally, I tend to think of it in terms of hubris and nemesis. We convinced ourselves that we were smarter than we really were.
The NRDD piece has several reasons for this development, which also ring true. Even in the 1980s, there were fears that the pace of drug discovery was slowing, and a new approach was welcome. A second reason is a really huge one: biology itself has been on a reductionist binge for a long time now. And why not? The entire idea of molecular biology has been incredibly fruitful. But we may be asking more of it than it can deliver.
. . .the ‘basic research–brute force’ bias matched the scientific zeitgeist, particularly as the older approaches for early-stage drug R&D seemed to be yielding less. What might be called 'molecular reductionism' has become the dominant stream in biology in general, and not just in the drug industry. "Since the 1970s, nearly all avenues of biomedical research have led to the gene". Genetics and molecular biology are seen as providing the 'best' and most fundamental ways of understanding biological systems, and subsequently intervening in them. The intellectual challenges of reductionism and its necessary synthesis (the '-omics') appear to be more attractive to many biomedical scientists than the messy empiricism of the older approaches.
And a final reason for this mode of research taking over - and it's another big one - is that it matched the worldview of many managers and investors. This all looked like putting R&D on a more scientific, more industrial, and more manageable footing. Why wouldn't managers be attracted to something that looked like it valued their skills? And why wouldn't investors be attracted to something that looked as if it could deliver more predictable success and more consistent earnings? R&D will give you gray hairs; anything that looks like taming it will find an audience.
And that's how we find ourselves here:
. . .much of the pharmaceutical industry's R&D is now based on the idea that high-affinity binding to a single biological target linked to a disease will lead to medical benefit in humans. However, if the causal link between single targets and disease states is weaker than commonly thought, or if drugs rarely act on a single target, one can understand why the molecules that have been delivered by this research strategy into clinical development may not necessarily be more likely to succeed than those in earlier periods.
That first sentence is a bit terrifying. You read it, and part of you thinks "Well, yeah, of course", because that is such a fundamental assumption of almost all our work. But what if it's wrong? Or just not right enough?
+ TrackBacks (0) | Category: Drug Development | Drug Industry History
March 9, 2012
As some of you know, I'm guest-blogging at The Atlantic this week and next. I think the readership here would enjoy the post I have up today, which draws on some drug-industry experiences of mine. . .
+ TrackBacks (0) | Category: Business and Markets | Drug Industry History
Apparently today is the day at AstraZeneca in Waltham. I'm hearing bits and pieces, but it looks like a substantial number of the research chemists there are being let go. Anyone with details, please add them to the comments.
+ TrackBacks (0) | Category: Business and Markets
The carbonyl group is one of the most fundamental structures in organic chemistry: C-double-bond-O. But you can substitute that oxygen with a sulfur and get a whole new series of compounds - so how come we don't see more of those in drugs?
Well, not all of them are stable. Plain old thioketones are pretty reactive, not to mention their appalling stink. And even though they're not as bad as thioketones, the corresponding thioamides and thioureas are known to be more lively than their oxygen counterparts. Many medicinal chemists avoid them because of a reputation for trouble, which I think is probably earned and not just an irrational prejudice. But there are drugs and pharmacological tools with these structures, still.
The thiocarbonyl shows up in a number of heterocycles, too, and there the situation gets a bit murkier. The highest-profile member of this group, unfortunately, may well be the rhodanines, which have come up several times on this blog, most recently here. I'm not a fan of those guys, but here's a question: are there thiocarbonyl structures that are better behaved? Do people like me look down on the whole functional group because of a few (well, more than a few) bad actors?
+ TrackBacks (0) | Category: Life in the Drug Labs
Here are the results of a salary and job satisfaction survey from Pharma Manufacturing. It's not a pretty picture, by any means, but it seems to have gotten a bit less nasty. That's downstream of R&D, where many readers of this blog reside, but it's worth a look.
+ TrackBacks (0) | Category: Business and Markets
March 8, 2012
There's another "Troubles of Drug Discovery" piece in Nature Reviews Drug Discovery, but it's a good one. It introduces the concept of "Eroom's Law", and if you haven't had your coffee yet (don't drink it, myself, actually), that's "Moore's Law" spelled backwards. It refers, as you'd fear, to processes that are getting steadily slower and more difficult with time. You know, like getting drugs to market seems to be.
Eroom's Law indicates that powerful forces have outweighed scientific, technical and managerial improvements over the past 60 years, and/or that some of the improvements have been less 'improving' than commonly thought. The more positive anyone is about the past several decades of progress, the more negative they should be about the strength of countervailing forces. If someone is optimistic about the prospects for R&D today, they presumably believe the countervailing forces — whatever they are — are starting to abate, or that there has been a sudden and unprecedented acceleration in scientific, technological or managerial progress that will soon become visible in new drug approvals.
Here's the ugly trend (dollars are inflation-adjusted):
I particularly enjoyed, in a grim way, this part:
However, readers of much of what has been written about R&D productivity in the drug industry might be left with the impression that Eroom's Law can simply be reversed by strategies such as greater management attention to factors such as project costs and speed of implementation, by reorganizing R&D structures into smaller focused units in some cases or larger units with superior economies of scale in others, by outsourcing to lower-cost countries, by adjusting management metrics and introducing R&D 'performance scorecards', or by somehow making scientists more 'entrepreneurial'. In our view, these changes might help at the margins but it feels as though most are not addressing the core of the productivity problem.
In the original paper, each of those comma-separated phrases is referenced to the papers that have proposed them, which is being rather scrupulously cruel. But I don't blame the authors, and I don't really disagree with their analysis. Nor, as they go on to say, do investors. The cost-cutting that we're seeing everywhere, particularly cutbacks in research (see all that Sanofi stuff the other day!), is the clearest indicator. People are acting as if the return on pharmaceutical R&D is insufficient compared to the cost of capital, and if you think differently, well, now's a heck of a time to clean up as a contrarian.
Now, the companies (and CEOs) involved in this generally talk about how they're going to turn things around, how cutting their own research will put things on a better footing, how doing external deals will more than make up for it, and so on. But it's getting increasingly hard to believe that. We are heading, at speed, for a world in which fewer and fewer useful medicines are discovered, while more and more people want them.
The authors highlight four factors that have gotten us into this fix, and all four of them are worth discussing (although not all in one post!). The first is what they call the "Better Than the Beatles" effect. That's what we face as we continue to compete against our greatest hits of the past. Take generic Lipitor, as a recent example. It's cheap, and it certainly seems to do the job it's prescribed for (lowering LDL). Between it and the other generic statins, you're going to have a rocky uphill climb if you want to bring a new LDL-lowering therapy to market (which is why not many people are trying to do that).
I think that this is insufficiently appreciated outside of the drug business. Nothing goes away unless it's well and truly superseded. Aspirin is still with us. Ibuprofen still sells like crazy. Blood pressure medicines are, in many cases, cheap as dirt, and the later types are inexorably headed that way. Every single drug that we discover is headed that way; patents are wasting assets, even patents on biologics, although those have been wasting more slowly (with the pace set to pick up). As this paper points out, very few other industries have this problem, or to this degree. (Even the entertainment industry, whose past productions do form a back catalog, has the desire for novelty on its side). But we're in the position of someone trying to come up with a better comb.
More on their other reasons in the next posts - there are some particularly good topics in there, and I don't want to mix everything together. . .
Category: Business and Markets | Drug Industry History | Drug Prices
March 7, 2012
You'd think that 8 billion dollars would be enough to get some attention. But that's how much drugmakers have paid in fines in recent years, and the regulatory agencies are wondering if anything's changing. This USA Today article has the details.
The fines, as many readers here will know, are for a range of offenses - Medicare reimbursements, off-label marketing, kickbacks from the sales force, and so on. And as things stand now, the government has really only two options when it comes time to lower the boom: fines, and the threat to remove the company from eligibility to sell via Medicare. But that second one is really sort of an empty threat, since most large companies are (for now!) selling drugs that are quite valuable to the Medicare patient population. So new techniques are being sought:
To try to change that trend, the government announced in 2010 that, rather than exclude an entire company, investigators would go after individuals within a company. [Gregory Demske at HHS] said his organization, the Justice Department and the Food and Drug Administration have come up with some ideas to use within the scope of the rules — such as taking away a company's patent rights as a condition of a settlement. That could begin with cases being investigated now, he said.
Now, that might get some attention, for sure. We'll see if it happens, because you can expect the industry to fight this as hard as possible. To that end, the article notes that $200 million was spent on lobbying last year by the drug and medical device industries. One first thought might be "Two hundred million! That's a lot of money!", but mine was "Two hundred million! Why, that's nothing compared to the issues involved. . ." Marginal Revolution has had some posts about lobbying and money in politics, mostly wondering why there isn't more of it than there is. With that kind of bang-for-the-buck, I wonder the same thing.
Category: Regulatory Affairs | The Dark Side
I have a reader who's in the process of moving from an industrial setting to teaching medicinal chemistry. He wanted to know if I'd ever written about that topic, and I have to say, I don't think there's been a post dedicated to it yet. I know that many people have done just this (and there are many more who are thinking about it).
So let's talk - are there others out there who've made the switch? What are some of the things to look out for? I know that the answer will vary, depending on the job and the type of academia, but it'll be worthwhile hearing some first-hand experiences. Anything from dealing with funding, to integrating your industry experience into your teaching, to the whole culture shift - comment away, and thanks!
Category: Academia (vs. Industry)
March 6, 2012
There's a good post over at the Curious Wavefunction on the differences between drug discovery and the more rigorous sciences. I particularly liked this line:
The goal of many physicists was, and still is, to find three laws that account for at least 99% of the universe. But the situation in drug discovery is more akin to the situation in finance described by the physicist-turned-financial modeler Emanuel Derman; we drug hunters would consider ourselves lucky to find 99 laws that describe 3% of the drug discovery universe.
That's one of the things that you get used to in this field, but when you step back, it's remarkable: so much of what we do remains relentlessly empirical. I don't just mean finding a hit in a screening assay. It goes all the way through the process, and the further you go, the more empirical it gets. Cell assays surprise you compared to enzyme preps, and animals are a totally different thing than cells. Human clinical trials are the ultimate in empirical data-gathering: there's no other way to see if a drug is truly safe (or effective) in humans than giving it to a whole big group of humans. We do all sorts of assays to avoid getting to that stage, or to feel more confident when we're about to make it there, but there's no substitute for actually doing it.
There's a larger point to be made about reductionism, too:
Part of the reason drug discovery can be challenging to physicists is because they are steeped in a culture of reductionism. Reductionism is the great legacy of twentieth-century physics, but while it worked spectacularly well for particle physics it doesn't quite work for drug design. A physicist may see the human body or even a protein-drug system as a complex machine whose workings we can completely understand once we break it down into its constituent parts. But the chemical and biological systems that drug discoverers deal with are classic examples of emergent phenomena. A network of proteins displays properties that are not obvious from the behavior of the individual proteins. . .Reductionism certainly doesn't work in drug discovery in practice since the systems are so horrendously complicated, but it may not even work in principle.
And there we have one of the big underlying issues that needs to be faced by the hardware engineers, software programmers, and others who come in asking why we can't be as productive as they are. There's not a lot of algorithmic compressibility in this business. Whether they know it or not, many other scientists and engineers are living in worlds where they're used to it being there when they need it. But you won't find much here.
Category: Drug Assays | Drug Development
March 5, 2012
When you file a patent application, there are plenty of things that the PTO wants you to include. One of the big ones is prior art: you're supposed to disclose all the relevant inventions close to yours that you're aware of, in order to show how your discovery is different. Prior art is naturally to be found in other previous patent filings, and it's also to be found in journal articles and other such public disclosures. If you don't submit relevant prior art that is known to you, your patent application gets into a lot of trouble eventually (and the more worthwhile your invention, the greater the chance becomes of that catching up with you).
So in light of this, you might find it interesting that some of the large scientific publishers are suing over all this. Why? Well, these lawsuits (filed by Wiley and by the American Institute of Physics) allege that the accused law firms violated copyright by submitting unauthorized copies of journal articles with their patent applications.
As that post at PatentlyO goes on to show, the plaintiffs seem to also be very interested in the internal copies of articles that the law firms are making. But I don't really see how they're going to make either of these stick. I mean, I tend to think that a lot of things are "fair use", but aren't these? This really looks like an act of desperation - the traditional scientific publishing model must be in even worse shape than I thought.
Category: Patents and IP | The Scientific Literature
There have been all kinds of boronic acid-based enzyme inhibitors over the years, but they've been mostly locked in the spacious closet labeled "tool compounds". That's as opposed to drugs. After all these years, Velcade is still the only marketed boron-containing drug that I know of.
There's been a good attempt to change that in antibacterials, with the development of what's commonly referred to as "GSK '052", short for GSK2251052. That's a compound that originally came from Anacor about ten years ago, then was picked up by GlaxoSmithKline, and it's an oxaborole heterocycle that inhibits leucyl tRNA synthetase. (Here's a review on that whole idea, if you're interested).
Unfortunately, last month came word that the Phase II trial of the drug had been suspended. All that anyone's saying is that there's a "microbiological finding", which isn't very informative when it's applied to, y'know, an antibacterial trial. (At least it doesn't sound like a general safety or tox problem.)
Anacor is continuing to exploit boron-containing compounds, although opinion looks divided about their prospects. I always have a sneaking fondness for odd compounds and elements, though, so I'd have to root for them just on that basis.
Category: Infectious Diseases | Odd Elements in Drugs
March 2, 2012
The large number of comments on yesterday's post on Sanofi CEO Chris Viehbacher's relentlessly candid interview included a response from someone at the company itself. At least, I have to assume that it is indeed Jack Cox, Senior Director of Public Affairs and Media Relations (as his LinkedIn profile has it), since the name and position match up, and the IP address of the comment resolves to Sanofi-US. I wanted to highlight his response - in the interest of fairness - and the responses to it, without having everything buried in the triple-digit comments thread to the previous post. Says Mr. Cox:
Anyone who has followed Chris in recent months will have heard some variation of these comments, but within the broader context that unfortunately didn't make it into the Q&A you reference.
Chris has consistently said that his vision for Sanofi's R&D organization is one of open collaboration, in which our own researchers increasingly partner with external teams. This is consistent with a comment you've included: "We're not going to get out of research. We believe we do certain things well in research but we want to work with more outside companies, startup biotechs, with universities."
In an interview with Luke Timmerman published by Xconomy in January Chris explained how this is working in practice:
"In Cambridge, you've got all those things. Being the No. 1 life sciences employer in Boston is great, but we didn't want to just do the same thing we did everywhere else, having everybody inside our walls. So we created this concept of a hub. There's a core, with a lot of competencies that a big organization can bring, but the idea of a hub is that we can manage the relationships we have with everybody from Dana-Farber Cancer Institute to Harvard to MIT to the Joslin Diabetes Center to some of the biotechs we work with. And we put our own oncology research team in Cambridge. There's a whole ecosystem in Boston, and we feel integrated and at the center of it."
Seeking external expertise, particularly when it concerns emerging technologies, contributes to the creativity and innovation we have within. The key to our approach, however, is that we don't want to simply be investors, but true partners. Again, consider the broader context as shared with Luke:
"The Warp Drive Bio project is interesting because it demonstrates where we want to go. It was very much on the basis of saying we want to work with (Harvard University chemical biologist) Greg Verdine. Someone like that isn't going to come work for Big Pharma, but we liked the science he was doing. We have a strong interest and expertise in natural products, and he had a genomics screening tool.
We will contribute expertise. I don't want to be a venture capitalist, or have a venture fund, like some other companies do. But I want to actually partner, where we bring some of what we know, and combine it with what Warp Drive has. The fact that we are trying to bring people from Sanofi into the collaboration, at such an early stage of research, is unusual. The single factor for success will be whether you can take a company like Warp Drive, with a handful of people, and make it work with an organization of 110,000 people without smothering it."
I believe your readers will agree that in this case the context really matters. Relying on one incomplete source doesn't do justice to the overall approach Chris has been describing.
If you want to truly understand the vision Chris has for Sanofi's research organization, I invite you to catch one of his public speaking engagements in the Boston area.
One has to wonder if the main difference between the two interviews was that Viehbacher spent more time considering his replies to Xconomy. I take it, since there's been no attempt to deny the earlier quotes in MedCityNews, that they're authentic. And the problem is, even some of his less popular statements in that interview are not false. It really is harder to innovate in a big company compared to a smaller one, for example. But while not false, they're also not the sort of thing one would expect the CEO of a major drug company to just blurt out, either, especially considering the likely effects of such statements on his own company's morale. I believe, in fact, that some current and (recently) ex-Sanofi employees have comments to make on that issue.
Category: Business and Markets | Life in the Drug Labs
I (and many of the readers here) have long thought that stem cells are perhaps the most overhyped medical technology out there - at least for now. I definitely agree that the possibilities for their use are staggering, and I very much hope that some of these pan out, but the gap between those possibilities and the current reality is just as huge. And it's a gap that really shows how hard medical progress is, compared to how easy it looks in the public imagination.
Nature has an article that bears on this, and on some other important topics. They've found that stem cell treatments are being sold to patients in Texas.
(The investigation) suggests that (Celltex Therapeutics) has supplied adult stem cells to Texas doctors who offer unproven treatments to patients, and that the company is involved in these treatments. One doctor claims that the treatments are part of a clinical study run by Celltex and that the company pays him US$500 a time to inject the cells into patients, who are charged up to $25,000 for a course. The US Food and Drug Administration (FDA) considers it to be a crime to inject unapproved adult stem cells into patients. David Eller, chief executive of Celltex, denies that the company is involved in treatment procedures, but would not comment on Nature's findings about how its cells are used or answer questions about them.
This makes me wonder about what is going on down there in Texas (and I can tell you, as an Arkansan, I'm willing to believe just about anything in that department). This latest business reminds me of the Burzynski cancer treatment stuff, in the way that definitions of "clinical trial" are stretched like rubber bands. Personally, I think that clinical trials are supposed to follow something very much like Yog's Law in publishing ("Money flows towards the writer"). If you're being asked to put up all kinds of money to get your book edited and published, you're very likely being scammed. And if you're being asked to pay thousands of dollars to be in a "clinical trial", well. . .you're being sold something. Real clinical trials reimburse their patients for time and effort, with money and/or medical care. They do not bill them for 25 long ones at the end of the dosing schedule.
I should mention here that Slate also had an article up on Celltex, but there have been some problems. They've taken the piece down, citing editorial problems, but (as you'd figure), the cherchez le lawsuit rule applies here. Nature, though, doesn't seem to be getting sued for what they've written.
Now, back to the stem cell treatments. Among other things, Nature mentions a blog by a woman in Texas, who's written about her experiences being treated with adult stem cells from Celltex. It appears that she's receiving these treatments for multiple sclerosis, and was told that "This method has been successful with auto immune diseases such as Parkinson’s, arthritis, Multiple Sclerosis as well as others." She had apparently had a similar procedure done earlier in Mexico, but then:
". . .a friend told Larry about a doctor in Houston who went to South Korea two years ago for a stem cell transplant to treat the debilitating effects of psoriatic arthritis. He is now able to continue his medical practice, perform surgeries, and live without pain. Because our friends had noticed progress from my first stem cell transplant, they wanted us to know that Dr. Jones was now licensed to perform the procedure in Houston. To say the least, we were both excited about the possibilities and timing."
As that extract illustrates, at no point (that I have found) does this patient mention the phrase "clinical trial". One gets the strong impression, actually, that she believes that she is paying to undergo a new medical procedure, the latest thing, rather than participating in any kind of investigational study for a therapy that has not yet been reviewed by the FDA. The Nature writer, David Cyranoski, was able to speak with the physician involved, who says he's treated a number of people with cells from Celltex:
Lotfi says that most of his patients claim to get better after the treatment, but he admits that there is no scientific evidence that the cells are effective. “The scientific mind is not convinced by anecdotal evidence,” he acknowledges. “You need a controlled, double-blind study. But for many treatments, that's not possible. It would take years, and some patients don't have years.”
“The worst-case scenario is that it won't work,” he adds. “But it could be a panacea, from cosmetics to cancer.” He says that Celltex is conducting a trial in which patients “will be their own control”. “If you can compare before and after and show improvement, there's no need for a placebo,” he explains. “How can you charge people, and then give them a placebo?”
Indeed! Maybe you could try not charging them, and not making them spend their own money to find out whether your treatment is any good. Maybe you could get a large, statistically significant number of people together, who've been given thorough diagnostic workups, and give half of them the best standard of care for multiple sclerosis and half of them the stem cell treatment - at your expense - and see if they get better. How about that? (Oh, and just a little note - the worst case is not that nothing happens at all. It might be good for the people involved to think about that a bit).
This gets back to the discussions we've had around here about rethinking clinical trials. One of the things I'll say for the FDA is that they do force people to be rigorous, and to put new medical ideas to well-controlled tests. My worry about the "sell, then test" ideas was summed up in the first link in this paragraph: "I fear that there are any number of entrepreneurial types who would gladly stretch things out, as long as someone else is paying, in the hopes of finally seeing something useful. No one will - or should - pay for extending fishing expeditions." Read that Celltex article and see if that sounds familiar.
Category: Clinical Trials | Regulatory Affairs | The Central Nervous System | The Dark Side
March 1, 2012
In case you're a scientist, and especially if you're a scientist at Sanofi, their CEO Chris Viehbacher would like you to know some things. What things are those, you ask? Well, how about your position in the world, and especially your position at Sanofi itself?
"What Sanofi is doing is reducing its own internal research capacity. The days when we locked all of our scientists up in a building and put them on a nice tree-lined campus are done. We will do less of our own research. We’re not going to get out of research. We believe we do certain things well in research but we want to work with more outside companies, startup biotechs, with universities."
You know, people with real ideas, innovative stuff, that kind of thing. When asked if this was cheaper, Viehbacher replied:
"It is cheaper. But research and development is either a huge waste of money or too, too valuable. It’s not really anything in between. You don’t really do things because it’s cheaper. The reality is the best people who have great ideas in science don’t want to work for a big company. They want to create their own company. So, in other words, if you want to work with the best people, you’re going to have go outside your own company and work with those people … And, you want to work with them, why do they want to work with you? The reality over the last 10 years is, (a small biotech) wouldn’t get caught dead working with one of these big cumbersome pharma companies. Once you have a funding gap, suddenly there’s a much greater willingness of earlier-stage companies to work with Big Pharma. We’re looking earlier and people who are early need help.
So, if you're one of Sanofi's dwindling number of internal scientists, at least now you know why you're being treated the way you are. It's because you're, well, you're not the sharpest tool in the shed. If your company really wants something to happen, they'll need to bypass you and find someone good. Sticking you in a nice building and telling you to discover stuff hasn't worked out, clearly, and blame must be attached somewhere. Right?
At least Viehbacher has enough self-knowledge to know what people outside his company think of it (and its ilk). But hey, now that the people who can actually discover things are desperate, opportunity knocks! This is a business plan known as "So, you need a deal real bad? Well, here's a really bad deal!" And it's the sort of arrangement that just makes everyone happy all around. When asked about working with venture capital firms (as Sanofi recently did with the unfortunately named Warp Drive Bio), the response was:
"There’s two reasons I like (working with venture capital firms). One is, they can sometimes bring competencies we don’t have, like for instance in how to help a startup company. The second thing is to give you a second opinion. Somebody in your company is going to love the science and be championing this internally. But you want to have a second opinion. If you have a venture capital company that’s willing to put money in, that kind of gives a little validation of that."
Those people in his own company again! Nothing but trouble. You wonder, though, what happens when someone inside Sanofi thinks that some hot startup deal might not be a good idea. I wonder if everyone was in love with Warp Drive Bio, for example? No matter - a VC firm was willing to put actual money into the thing, so that's pretty much all the validation anyone needs. Investors in the public markets, though, are apparently fools, because they think that because a big pharma company is interested, that means that a small company might have something going for it:
"The new model, where we’re trying to go, we believe that Big Pharma has competencies in validation. So, if a Big Pharma company does a deal with a smaller company, the smaller company’s share price goes up because people believe that Big Pharma has depth of competencies to judge whether this science is any good or not. Now big companies, and not just Big Pharma, big companies I believe, are not any good at doing innovation. There has to be some element of disruptive thinking to have innovation and I can tell you that big companies do everything to avoid any disruptive thinking in their companies."
Hah! The investors should read Viehbacher's interview, and realize that the sort of scientists who work inside a big company like his wouldn't know an innovation if it slithered up their leg.
Now, there are points to be made about large organizations, and about disruptive thinking, and about various models for drug discovery and for funding ideas. But you know, at the moment, I'm too disgusted to make them.
Update: comments have been disabled now, due to the large volume of them and the follow-up post. Any thoughts can be directed over there - thanks!
Category: Business and Markets | Life in the Drug Labs