About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases.
To contact Derek, email him directly: email@example.com
June 30, 2011
Pfizer now says that it's not going to completely close the Sandwich research site in the UK. 350 people will remain - which isn't too many compared to the fully staffed number (well over 2,000), but a lot better than zero. Between that and the attempt to make the site an enterprise zone, perhaps something can be salvaged. But the local economy is, as you'd expect, feeling the effects.
It's too early to say if this is an example of a drug company that feels as if it's outsourced enough and can stop now - let's watch the news over the next few months and see. . .
Category: Business and Markets
Well, here's one from the Archives of Internal Medicine that most certainly did get published. It's an analysis of an old clinical trial, STEPS, which was conducted for Neurontin (gabapentin) during the 1990s.
But that's not quite right. The authors find, by analyzing a large trove of documents released during lawsuit discovery proceedings, that STEPS was not really intended to be a clinical trial. Instead, it was a marketing program:
Documents demonstrated that STEPS was a seeding trial posing as a legitimate scientific study. Documents consistently described the trial itself, not trial results, to be a marketing tactic in the company's marketing plans. Documents demonstrated that at least 2 external sources questioned the validity of the study before execution, and that data quality during the study was often compromised. Furthermore, documents described company analyses examining the impact of participating as a STEPS investigator on rates and dosages of gabapentin prescribing, finding a positive association. None of these findings were reported in 2 published articles.
Here's more at Medscape. STEPS was allegedly a Phase IV post-approval trial, but it was unblinded and pretty much uncontrolled. Instead of taking place at a small number of centers, it seems to have been set up to enroll as many physicians as possible (they ended up with 772!), with each of them bringing in a handful of patients.
This is an extremely foul technique, which brings the companies who use it, the entire drug industry, and the whole idea of clinical research into disrepute. For money. I feel like spitting on the floor.
Category: Clinical Trials | The Dark Side
I couldn't resist mentioning this one: the Archives of Internal Medicine was set to publish a paper showing a benefit for transcendental meditation in heart attack and stroke. Word was already out in the press - in the UK, the Telegraph had already published a story, with a quote from one of the paper's lead authors (from, ahem, the Maharishi University of Management) that the effect seen was as great or greater than any pharmaceutical intervention.
I don't have a link up to that particular newspaper report; its URL is no longer valid. That's because twelve minutes before the paper was set to be published online, the journal pulled it. (Other sources still have their stories up). We still don't know quite what the problem was. Nature got this statement:
“It became apparent that there was additional data not included in the manuscript that was about to be published, and the editor of Archives thought that the information was significant enough that it needed to be included as part of the paper, and then re-analyzed and verified, so she made the last-minute decision not to publish it. . .It’s an unusual situation, but the bottom line is that our journal wants to make sure that the information we put out is as accurate as can be.”
I'm glad to hear it. Larry Husten at Forbes has the data from the paper, and has a lot of questions. We'll see how things look when (and if) it ever appears. But for now, if you're looking for the latest anyone has ever pulled a paper before publication, we may well have the record.
Update: here's an excellent report on this at Retraction Watch.
Category: Snake Oil | The Scientific Literature
June 29, 2011
Today is Day Two of the FDA's hearings on Avastin for metastatic breast cancer. Note: if you want to follow things in near real time, I'd suggest a Twitter search for #Avastin. I can particularly recommend Len Lichtenfeld's feed. This has been a very contentious issue - as most of you know, Avastin was provisionally approved for these patients, then pulled when more trial data came in showing no benefit. Roche/Genentech's team is now appealing that decision, and the questions are:
1. Should Avastin be approved for metastatic breast cancer patients? The answer to this one is "depends on the evidence for it". So. . .
2. Is there enough evidence to decide one way or another? Both the FDA and Roche seem to think that there is. The problem's that they come to opposite conclusions. So. . .
3. What's the risk/benefit ratio for Avastin in these patients? Now the serious arguing starts. Avastin is not without its serious side effects - but metastatic breast cancer is a terrible disease. The initial reports were promising - but none of the larger follow-up trials have really confirmed those results. Genentech is proposing still another confirmatory trial, with the drug to stay approved during that period, but the FDA seems to be arguing that leaving the drug approved for this indication will hurt more people than it helps. There you have it.
And all of this is being done against a backdrop of emotional cancer patient testimony. The problem with that is summed up by one of the most fervent advocates, Patricia Howard, who told the FDA "I’m not just a statistic; it is in your hands to ensure that I don’t become one."
She is wrong. It pains me to say this, but she's wrong. If we're ever going to get anywhere with cancer (or any other disease), we're going to need all the statistics we can get our hands on, and no amount of passionate testimony should be allowed to move one number in them. I've had family members with cancer; I've seen good friends and plenty of good people die from cancer. But cancer cells do not care about how strong your feelings are. The growth factor receptors, the checkpoint kinases, the apoptosis regulators, the metabolic enzymes and cell adhesion proteins: they don't give a damn. They have no damn to give. We have to fight them on those terms, on that battlefield, because that's the only one that matters and the only one where they can be defeated.
As it stands, I agree with the FDA's position: I don't think that Avastin has been shown to offer enough benefit. The 2008 provisional approval was already arguable - the agency went against its own advisory committee just to do that much - and the subsequent data have made it even less tenable. If we're going to have provisional approvals, then they have to be able to be taken back. And if we're going to evaluate drugs by their risks versus their benefits, then Avastin - for this indication, in these patients - doesn't (to my eyes) seem to make the cut.
If, on the other hand, you disagree with the provisional approval process, fine. Propose something more useful. If you disagree with the risk/benefit analysis in this case, then you should bring some new numbers or some new arguments (which is what Genentech is trying to do right now, as I write this, and I hope that they don't slip over the line while doing it). If you disagree with the whole idea of risk/benefit analysis, then. . .well, you'd better have something more useful to offer. And you'd better be sure that it doesn't end with the decisions going to whoever is the most passionate and tearful in making their case. That won't end well.
One more side issue: you'll note that I've done this whole blog post without talking about the price of Avastin at all. That's because I don't think that the price is the issue at all here. This is not a health-care-rationing issue, no matter how much some people would like for it to be. Roche gets to charge what they think Avastin can bring - they and Genentech have put the time, effort, and money into the drug. But for metastatic breast cancer, as I said here, Avastin doesn't seem like a good idea even if it were free.
Category: Cancer | Regulatory Affairs
. . .you either have to go to the specialty press, or (sometimes) to the last couple of paragraphs of a mainstream article. For several years now, it's been hard to think of any medical field that's been more relentlessly overhyped than stem cell therapy (a worst-case example was its appearance in the 2004 elections, courtesy of the ever-reliable John Edwards).
FiercePharma has a good short look at an article in Time that is much better balanced than most, but still has some of the usual problems. And don't get me wrong - I think that stem cells are an exciting area of research, an excellent thing to be investigating, and could quite possibly lead to some wonderful results. But not next week. And not without a few billion dollars, most likely. Anyone who tells you otherwise is, to my mind, to be regarded with suspicion.
Category: Press Coverage
June 28, 2011
I hate to be such a shining beacon of happiness today, but this news can't very well be ignored, can it? For the first time ever, total drug R&D spending seems to have declined:
The global drug industry cut its research spending for the first time ever in 2010, after decades of relentless increases, and the pace of decline looks set to quicken this year.
Overall expenditure on discovering and developing new medicines amounted to an estimated $68 billion last year, down nearly 3 percent on the $70 billion spent in both 2008 and 2009, according to Thomson Reuters data released on Monday.
The fall reflects a growing disillusionment with poor returns on pharmaceutical R&D. Disappointing research productivity is arguably the biggest single factor behind the declining valuations of the sector over the past decade.
This is not good - although, to be sure, we've had plenty of warning that this day would be coming. But looking at it from another perspective, you might wonder what's taken so long. Matthew Herper has a piece up highlighting the chart below, from the Boston Consulting Group. It plots new drugs versus R&D spending in constant dollars, and if you're wondering what the Good Old Days looked like, here they are. Or were:
What's most intriguing to me about this graph is the way it seems to validate the "low-hanging fruit" argument. This looks like the course of an industry that has, from the very beginning of its modern era, been finding it steadily, relentlessly harder to mine the ore that it runs on. But that analogy leaves out another key factor that makes that line go down: good drugs don't go away. They just go generic, and get cheaper than ever. You can also interpret this graph as showing the gradual buildup of cheap, effective generics for a number of major conditions (cardiovascular, in particular).
There's one other factor that ties in with those thoughts - the therapeutic areas that we've been able to address. Look at that spike in the 1990s, labeled PDUFA and HIV. Part of that jump is, as a colleague theorized with me just this morning, the fact that a completely new disease appeared. And it was one that, in the end, we could do something about - as opposed to, say, Alzheimer's. So if you want to be completely evil about it, then the Huey Lewis model of fixing pharma has it wrong: we don't need a new drug. We need a new disease. Or several.
Well, that's clearly not the way to look at it. I don't actually think that we need to add to the list of human ailments; it's long enough already. But given all the factors listed (and the ever-tightening regulatory/safety environment, on top of them), another colleague of mine looked at this chart and asked if we ever could have expected it to look any different. Could that line go anywhere else but down? The promise of things like the genomics frenzy was, I think, that it would turn things around (and that hope still lives on in the heart of Francis Collins), even though some people argue that it did the reverse.
Category: Business and Markets | Drug Development | Drug Industry History
Over at Forbes, Matthew Herper has a provocative comment from a former Merck executive, Peter DeVillbiss. He's wondering when and how drug companies lost the public standing that they used to have (remember, Merck used to be the "most admired company in America", just to give you one example). His theory (which I know is shared by some of the readership here) is that direct-to-consumer advertising was a terrible mistake, bringing in lots of profits while ruining the reputation of the drug companies. His thought experiment:
If there was a regulatory mandate for all pharma companies to cease direct-to-consumer advertising for prescription drugs and vaccines, what would happen? It is not clear to me that this would be a death knell for the industry. I think it’s reasonable to assume that revenues would fall, but the big question is whether costs would fall more? This could never happen on a voluntary basis because of game theory but if it were mandated and applied across the board, I’m not so sure that pharma wouldn’t be better off in a few ways.
Check out the post, and the comments that it's inspired. I'll get a few points out of the way - for one thing, DTC advertising has, in fact, probably enriched the drug industry a great deal. No one's claiming that it's been a money sink, just a reputational disaster. Another thing to remember is that advertising budgets are supposed to bring in more money to the company than you'd have if you didn't run ads - that is, they're supposed to pay for themselves and plenty more besides. So if we can skip the "Pharma spends more on ads than R&D!" part of the argument, that'll be fine. Ads make money; I'd rather focus on what else they do. Thoughts?
Category: Why Everyone Loves Us
June 27, 2011
Adam Feuerstein calls this not just "post hoc data mining", but "extreme post hoc data mining". Take a look and see what you think.
Update: more here.
Category: Clinical Trials
Here's a paper in PNAS that says that we're probably treating infectious disease the wrong way - and perhaps cancer as well. The authors go over the currently accepted doctrines: multiple-mechanism therapies, when possible, and restricted use to patients who really need antibiotics. But there's a third assumption that they say is causing trouble:
A third practice thought to be an effective resistance management strategy is the use of drugs to clear all target pathogens from a patient as fast as possible. We hereafter refer to this practice as “radical pathogen cure.” For a wide variety of infectious diseases, recommended drug doses, interdose intervals, and treatment durations (which together constitute “patient treatment regimens”) are designed to achieve complete pathogen elimination as fast as possible. This is often the basis for physicians exhorting their patients to finish a drug course long after they feel better (long-course chemotherapy). Our claim is that aggressive chemotherapy cannot be assumed to be an effective resistance management strategy a priori. This is because radical pathogen cure necessarily confers the strongest possible evolutionary advantage on the very pathogens that cause drugs to fail.
The harder you hit a population of infectious disease organisms, the harder you're selecting for resistance. The key, they say, is that in many cases there's genetic diversity among these organisms even inside single patients. So you can start off with a population of bacteria, say, that could be managed by less aggressive therapy and the patient's own immune system. But then aggressive treatment ends up killing off the great majority of the bacterial population, which you'd think would be a step forward. But what you're left with are the genotypes that are hardest to kill with antibiotics. They were in a minority, and might well have died out under competition from their less-genetically-burdened cohorts. But killing those off gives the resistant organisms an open field to work in.
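That competitive-release argument lends itself to a quick back-of-the-envelope simulation. Here's a minimal sketch of my own - a simple logistic-competition model with made-up rate constants (a fitness cost for resistance, a constant immune clearance), not anything taken from the paper itself:

```python
# A toy model of "competitive release": two pathogen strains share a
# resource cap, and the drug kills only the sensitive strain. All
# strain names and rate constants here are illustrative assumptions.

def simulate(kill_rate, steps=2000, dt=0.01):
    s, r = 0.9, 0.01               # sensitive majority, resistant minority
    cap = 1.0                      # shared carrying capacity
    growth_s, growth_r = 1.0, 0.8  # resistance carries a fitness cost
    immune = 0.05                  # constant immune clearance of both strains
    for _ in range(steps):
        crowding = 1.0 - (s + r) / cap
        s = max(s + s * (growth_s * crowding - immune - kill_rate) * dt, 0.0)
        r = max(r + r * (growth_r * crowding - immune) * dt, 0.0)
    return s, r

s_mild, r_mild = simulate(kill_rate=0.01)  # light-touch therapy
s_aggr, r_aggr = simulate(kill_rate=2.0)   # "radical pathogen cure"

# Under light treatment the fitter sensitive strain keeps the resistant
# minority pinned down; under aggressive treatment the sensitive strain
# is wiped out and the resistant strain expands into the vacated niche.
print(r_mild, r_aggr)
```

Crank up kill_rate and the sensitive strain is cleared almost completely - but the resistant strain ends the run far larger than it does under light treatment, which is exactly the "open field" effect described above.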
The other problem here is a public-health one. You want to cure the individual patient, and you want to keep their disease from spreading, and you want to keep from encouraging resistance among the infectious organisms. Optimizing for all three at once is probably not possible.
The paper goes into detail with the example of malaria, pointing out that it may well be the norm for people to be infected with several different lineages of malaria parasites at the same time. They seem to be in there competing for nutrients and for red blood cells, and some of them appear to be keeping the others in check. Antimalarial drugs alter the cost/benefit ratio (for the parasites) of carrying resistance genes.
So what should we do? The problem is, they say, that there are probably no general rules that can be recommended:
Thus, aggressive chemotherapy is a double-edged sword for resistance management. It can reduce the chances of high-level resistance arising de novo in an infection. But when an infection does contain resistant parasites, either from de novo mutation or acquired by transmission from other hosts, it gives those parasites the greatest possible evolutionary advantage both within individual hosts and in the population as a whole. How do the opposing evolutionary pressures generated by radical cure combine in different circumstances to determine the useful life span of a drug? There will be circumstances when overwhelming chemical force retards evolution and other times when it drives things very rapidly. We contend that for no infectious disease do we have sufficient theory and empiricism to determine which outcome is more important. It seems unlikely that any general rule will apply even for a single disease, let alone across disease systems.
For more on such ideas as applied to bacterial infections, see here and here. But near the end of this paper, the authors apply similar reasoning to cancer. (That analogy has come up around here before, I should note).
An analogous situation also occurs in cancer therapy, where cell lineages within a tumor compete for access to space and nutrients. There, the argument has recently been made that less aggressive chemotherapy might sustain life better than overwhelming drug treatment, which simply removes the competitively more able susceptible cell lineages, allowing drug-resistant lineages to kill the host. Mouse experiments support this: Conventionally treated mice died of drug-resistant tumors, but less aggressively treated mice survived (95).
So maybe too many of us have been thinking about these questions the wrong way. If we switch over to favoring whatever strategy minimizes resistance, both in individual patients and thus across the population, we could be in better shape. . .
Category: Cancer | Infectious Diseases
June 24, 2011
Here's an op-ed from Josh Bloom (ex-Wyeth) in the New York Post that will resonate with a lot of people out there. A sample:
The folks at Scientific American have launched "1,000 Scientists in 1,000 Days" -- a program to bring together scientists, teachers and students to improve America's "dismal" showing among wealthy countries (27th out of 29) in graduating college students with degrees in science or engineering. I'm sure they mean well -- but, at least as it applies to the field of chemistry, "1,000 Unemployed Scientists Living With Their Parents at Age 35 While Working at the Gap" would be a better name.
He goes on to tell the readership what it's been like in drug discovery over the last few years, and it'll probably be news to many of them. I'm glad that people are getting the word out!
Category: Press Coverage
There are plenty of headlines about the recent Supreme Court decision (PDF) on suing generic drug manufacturers. But this is not so much about generic drugs, or suing people, as it is about the boundaries between state and federal law. That, actually, is why the case made it this far - that's just the sort of issue the Supreme Court is supposed to untangle. Readers may decide for themselves whether such disentangling has actually occurred.
Reglan (metoclopramide) is the drug involved here. It's been generic for many years, and for many years it's also been known to be associated with a severe CNS side effect, tardive dyskinesia. This is the same involuntary-movement condition brought on by many earlier antipsychotic medications, and it's bad news indeed. The labeling for the product has been revised several times by the FDA over the years.
In this case, the plaintiffs were prescribed metoclopramide in 2001 and 2002, and their claim was that the generic manufacturers are at fault under state tort law (in these cases, Minnesota and Louisiana). It should be noted at this point that the package insert for the drug warned at the time that tardive dyskinesia could develop, and that treatment for more than 12 weeks had not been evaluated. In 2004 and 2009, the label was strengthened to warn that treatment beyond twelve weeks should only be undertaken in rare cases. The plaintiffs both took metoclopramide for years, although this was not at issue in this case as it was brought.
What's at issue is the drug label and how it's regulated. The plaintiffs claimed that state law required a stronger safety warning than did federal law at the time, and that they thus have standing to sue. On the other hand, you have the whole process of generic drug approval. A generic company has to show that its product is equivalent to the original drug, and it then uses the exact same label information. Under federal law, the generic companies claim, they have no authority to independently change the labeling of their products.
The plaintiffs (and their lawyers) countered this argument by claiming that there were still mechanisms - the CBE (changes-being-effected) process and "Dear Doctor" letters - by which the manufacturers could have changed the safety warnings on their own. The FDA, however, disputes that, and the Supreme Court deferred to the agency, saying that this is not an obviously mistaken position and that there is no reason to doubt that it represents the FDA's best judgment in the matter.
That disposed of, the question comes back to federal law versus state. And in direct conflicts of that sort, state law has to yield, according to Justice Thomas for the majority:
The Court finds impossibility here. If the Manufacturers had independently changed their labels to satisfy their state-law duty to attach a safer label to their generic metoclopramide, they would have violated the federal requirement that generic drug labels be the same as the corresponding brand-name drug labels. Thus, it was impossible for them to comply with both state and federal law. And even if they had fulfilled their federal duty to ask for FDA help in strengthening the corresponding brand-name label, assuming such a duty exists, they would not have satisfied their state tort-law duty. State law demanded a safer label; it did not require communication with the FDA about the possibility of a safer label.
And that last sentence is where Justice Sotomayor's dissent breaks in. The minority holds that the generic manufacturers only showed that they might have been unable to comply with both federal and state requirements, and that this isn't enough for an impossibility defense. Sotomayor's dissent agrees, though, that the FDA does not allow the generic companies to unilaterally change their labels. But she says that this does not mean that they just have to sit there. Instead of just making sure that their labels match the brand-name labeling, she says, they likely have a responsibility to ask the FDA to consider label changes when necessary, and this wasn't done in this case. And even if you take the position that they don't have to do so, they still can do so, making the impossibility defense invalid.
This is explicitly addressed in the majority opinion - saying, in so many words, that this is a fair argument, but that they reject it. On what grounds? That it would actually
". . .render conflict pre-emption largely meaningless because it would make most conflicts between state and federal law illusory. We can often imagine that a third party or the Federal Government might do something that makes it lawful for a private party to accomplish under federal law what state law requires of it. In these cases, it is certainly possible that, had the Manufacturers asked the FDA for help, they might have eventually been able to strengthen their warning label. Of course, it is also possible that the Manufacturers could have convinced the FDA to reinterpret its regulations in a manner that would have opened the CBE process to them. Following Mensing and Demahy’s argument to its logical conclusion, it is also possible that, by asking, the Manufacturers could have persuaded the FDA to rewrite its generic drug regulations entirely or talked Congress into amending the Hatch-Waxman Amendments."
The "supremacy clause" in the Constitution, the majority says, clearly treats pre-emption conflicts as real problems, and therefore any line of argument that just makes them go away is invalid. At about this point in the majority opinion, Justice Kennedy bails out, though. Thomas and the remaining three justices have a point to make about non obstante provisions that he does not join in - and since this is not exactly a legal blog, nor am I a lawyer (although that's easier for me to forget on mornings like this one), I'm going to bypass this part of the dispute.
For those of you who are still with me, there's one more feature of interest in this case. Metoclopramide has already been the subject of an important lawsuit - in this case, going back to Wyeth, the original brand manufacturer. That's Conte v. Wyeth, which I wrote about here. The dispute in that case was not about labeling, it was over who was liable for the tardive dyskinesia in the first place. A court in California held that the originator of the drug was on the hook for that, no matter how long the compound had been generic, and the California Supreme Court refused to hear an appeal. That issue is not yet laid to rest, though, and we'll be hearing about it again.
Given these cases, though, let's say that someone takes metoclopramide and is affected by tardive dyskinesia. Who can they sue? Well, the way the labeling is now, if you take it for more than a few weeks, you're doing so at your own risk, and in the face of explicit warnings not to do so. If your physician told you to do so, you could presumably sue for malpractice.
And what about the whole labeling dispute? Well, the language of the majority decision, it seems to me, is basically a message to the FDA and the legislative branch. If you don't like this decision, it says, if it doesn't seem to make any sense, well, you have the power to do something about it. We've shown you what the law says now, and you know where to start working on it if you want it to say something else.
One more point: on the train in to work this morning, I heard the argument advanced that, because of these cases, brand-name manufacturers will probably want to consider just exiting the market once a drug goes generic, at least for drugs that have significant warnings in their labels. That will put the whole pharmacovigilance burden on the generic companies - which they won't like, but someone's going to have to soak it up. We'll see if that happens. . .
Category: Regulatory Affairs | Toxicology
June 23, 2011
Multiple sclerosis therapy has been changing a lot in recent years, and one of the biggest events was the introduction of Gilenya (fingolimod). That's the first non-injectable for MS, and it's quite a story (as well as being quite a weird compound from a chemistry perspective).
Novartis has been racing ahead in selling that one, because they knew that Merck KGaA (Merck-Darmstadt) had another oral compound in the works, cladribine. That's a nucleoside analog with a different mechanism (targeting some lymphocyte subtypes and thus changing immune response), and it was already used in treatment of some forms of leukemia. It did show promising results in the clinic for relapsing MS, and there were high hopes.
Not now. Word has come that the company is withdrawing its application in Europe and the US, and taking the drug off the market in the only two countries (Russia and Australia) where it had been approved. The FDA had already said that it would not approve cladribine without more safety information, and Merck KGaA has decided that (1) the ongoing trials won't do the job, and (2) it's not worth the risk/reward to try new ones.
So that leaves the field open for Novartis - and leaves German Merck (which has had several disappointments in recent years) in some trouble. . .
Category: Regulatory Affairs | The Central Nervous System
June 22, 2011
The NIH has, it appears, been getting quite sensitive about conflicts of interest. There have been some rather ugly scenes involving ghostwritten articles (and entire books), and NIH director Francis Collins has said that the agency's guidelines are in the process of being revised.
You'd have thought that the existing ones would have banned that sort of thing, anyway. And in fact, it seems as if many scientists at the NIH already find the rules too restrictive. From the original paper that looked into this:
Eighty percent of respondents believed the NIH ethics rules were too restrictive. Whereas 45% of respondents believed the rules positively impacted the public's trust in the NIH, 77% believed the rules hindered the NIH's ability to complete its mission.
The problem, as so often happens, is whether your goal is to look good or to do your job, and you don't want to solve that conflict by redefining your job as just to look good all the time.
The reason I'm talking about all this is that I've heard of instances where people from NIH have refused (or felt as if they have had to refuse) invitations to give talks in industrial settings, because they feared conflict-of-interest problems. This seems perverse, especially for an agency that's talking about getting heavily into translational drug research. That'll have to lead to numerous contacts with industry, I think, in order to be much good at all. So how will the NIH manage that if the drug industry is seen as contaminating their Purity of Essence?
Category: Academia (vs. Industry) | Why Everyone Loves Us
June 21, 2011
Now, I try to help discover drugs for a living. And boy, do we not discover all that many of them. But you'd get a different impression if you listen to the radio here in the US. So many drugs! So many wonderful things that they can do! Improve your memory, boost your immune system, clean your liver, give you energy, grow hair on your head and flush those toxins out of you like a firehose.
Ah, but these aren't drugs, of course. They are nutritional supplements, silly people, and they are "not intended to treat, cure, or modify any disease". But they say that part low and fast, while the exciting parts are enunciated clearly, con brio, and at least three times. Drugs are foreign chemicals that you put in your body to make it do things, while nutritional supplements, why they're these all-natural. . .things. . .made out of, made out of. . .stuff. . .that you put in your body to make it do things. Anyway, they're different.
And here's the man who says so: Orrin Hatch, to whom (along with Henry Waxman) we owe the Hatch-Waxman legislation that made the supplement industry flourish like the green bay tree. $25 billion a year isn't bad, especially when you consider that the expenses of the supplement companies are just a tiny bit lower than those of the drug companies. Not having to do any preclinical research at all helps, of course, and not having to run any clinical trials at all (nothing for efficacy, nothing for safety) helps, and not having to be reviewed by the FDA helps, too. And then on the other side of the ledger, being able to say any damn thing that comes into your head helps the most of all.
And as you'll see from that article, not only has Senator Hatch himself benefited greatly from his nutritional ties, but so has his family, immediate and extended. And his friends, and his former business partners - pretty much everyone within range, it seems. Each side regards the other as the gift that keeps on giving. And why shouldn't they?
+ TrackBacks (0) | Category: Snake Oil
June 20, 2011
You hear a lot of talk about the "patent cliff" in the industry these days. Patent expirations you shall always have with you, but there are a number of big-selling drugs that are all coming out of patent protection in a fairly short period. The biggest single drug in this category is, of course, Lipitor, and that expiration has been looming up on Pfizer year after year.
But Eli Lilly has even worse problems: they're not losing their single biggest seller; they're losing up to 50% of all their sales. AstraZeneca's not in much better shape, it should be added. Jim Edwards at BNET goes into the numbers, courtesy of a Bernstein study. Here, from the analyst's work, is the estimate of "base" revenues (from currently existing drugs) normalized to 2010 (via Edwards and BNET):
Not too encouraging. And Lilly doesn't have enough coming online to offset this (who would?). If you read Edwards' post, you'll find a graph that attempts to show the same group of companies, with projected revenues for new drugs factored in as well. GSK and Novartis come out looking pretty good - AZN and LLY, well. . .have a look and see what you think.
What I found very interesting was the Bernstein analyst's comments on the plan that Pfizer's CEO Ian Read has been floating, to divest everything except the core drug business. That post took off from a piece by Matthew Herper at Forbes, who spoke with an analyst who was surprised at how serious Read seemed to be. The reason he was surprised is that this is the same analyst we're talking about - Tim Anderson of Bernstein. When he runs the numbers on a "core Pfizer" strategy, it actually makes things look even worse.
So Pfizer has options, but it had better think them through carefully. Lilly and AstraZeneca, on the other hand, seem as if their backs are inexorably being pushed to the wall. The only way out, as the BNET headline has it, would seem to be to acquire someone or be acquired in turn. It's hard to see how either company makes it through in their current state.
+ TrackBacks (0) | Category: Business and Markets
June 17, 2011
From the Financial Times, here's a look at our industry from a business perspective:
A big justification for the mergers that have consolidated the global pharma industry was that overhead costs would be cut, reducing the impact to profits of the patent-expiration wave. Has consolidation delivered on this promise?
Note that we're already seeing things from a different angle here than we're used to thinking about. From an investor's perspective, all this outsourcing/site closure upheaval is probably a good thing, because it cuts costs that apparently need to be cut. And the question is, has it done what it's supposed to do?
The FT editorial gives a "conditional yes" answer, but they worry that cutting costs is a tactic that's run about as far as it can, and that may not be far enough, financially:
Much overhead has already been removed, and expanding into emerging markets, essential for all the global pharmas, will cost money. Cost of goods and research and development expense ratios have mostly stayed put, and it is hard to see why that would change now. If the savings story is petering out, the industry needs revenue growth more than ever.
That we do, and where we're going to get it is the question. If there's a bright side to all this, it might be that we're close to the end of the relentless cut-cut-cutting that's characterized this business in recent years. The dark side, though, is that one answer to "what's next?" is more mergers, since that's one way to get back to cutting costs. And let's face it - cutting costs, that's something that managers know how to do. Improving R&D productivity, well, not so much. Stick with what you know, eh?
+ TrackBacks (0) | Category: Business and Markets
June 16, 2011
We've talked quite a bit around here about academic (and nonindustrial) drug discovery, but those posts have mostly divided into two parts. There's the early-stage discovery work that really gets done in some places, and then there's the proposal for the big push into translational research by the NIH. That, broadly defined, is (a) the process of turning an interesting idea into a real drug target, or (b) turning an interesting compound into a real drug. One of the things that the recent survey of academic centers made clear, I'd say, is that the latter kind of work is hardly being done at all outside of industry. The former is a bit more common, but still suffers from the general academic bias: walking away too soon in order to move on to the next interesting thing. Both these translational processes involve a lot of laborious detail work, of the kind that does not mint fresh PhDs nor energize the post-docs.
But if there's funding to do it, it'll get done in some fashion, and we can expect to see a lot of people trying their hand at these things. Many universities are all for it, too, since they imagine that there will be some lucrative technology transfers waiting at the end of the process. (One of the remarkable things about the drug industry is how many people outside it see it as the place to get rich).
I had an e-mail from Jonathan Gitlin on this subject, who asks the question: if academia is going to do these things, what should they be doing to keep the money from being wasted? It's definitely worth thinking about, since there are so many drains for the money to go spiraling down. Mind you, most money spent on these things is (in the most immediate sense) wasted, since most ideas for drug targets turn out to be mistaken, and most compounds turn out not to be drugs. No matter what, we're going to have to be braced for that - even strong improvements in both those percentages would still leave us with what (to people with fresh eyes) would seem horrific failure rates.
And what I'd really like is for people to avoid the "translational research fallacy", as I've called it. That's the (seemingly pervasive) idea that there are just all sorts of great ideas for new drugs and new targets just gathering dust on university shelves, waiting for some big drug company to get around to noticing them. That, unfortunately, does not seem to be true, but it's a tempting idea, and I worry that people are going to be unable to resist chasing after it.
But that said, where would be the best place for the academic money to go? I have a few nominees. If we're breaking things down by therapeutic area, one of the most intractable and underserved is central nervous system disease. I note that there's already talk of a funding crisis in this area (although that article is more focused on Europe). It may come as a surprise to people outside medical research, but we still have very little concrete knowledge of what goes on in the brain during depression, schizophrenia, and other illnesses. That, unfortunately, is not for lack of trying. Looked at from the other end, we know vastly more than we used to, but it's still nowhere near enough.
If we're looking at general translational platforms and ideas, then I would suggest trying to come up with solid small-organism models for phenotypic screening. A good phenotypic screen, where you run compounds past a living system to see which ones give you the effects you want, can be a wonderful thing, since it doesn't depend on you having to unravel all the biochemistry behind a disease process. (It can, in fact, reveal biochemistry that you never knew existed). But good screens of this type are rare, outside of the infectious disease area, and are tricky to validate. Everyone would love to have more of them - and if an academic lab can come up with one, then those folks can naturally have first crack at screening a compound collection past them.
More suggestions welcome in the comments - it looks like this is going to happen, so perhaps we can at least seed this newly plowed field with something that we'd like to see when it sprouts.
+ TrackBacks (0) | Category: Academia (vs. Industry) | Drug Development
June 15, 2011
I was talking with some colleagues about underused synthetic chemistry technologies the other day, and one that came up was high pressure. Here's a new paper from JACS looking at pressure effects on a common reaction (Michael addition), and there are quite a few others like it scattered around the literature. In general, reactions that have a lot of steric congestion, or whose transition state occupies less volume than the starting complex, will show some effects as you go to higher pressure.
But no one ever does it. Well, not quite "no one", but pretty damned few people do. I think the problem is that you need special equipment, for the most part, and you also need to have the idea of using high pressure. Both of those are in short supply. But I wonder, if someone were to make a lab-friendly high-pressure reactor, whether it might get taken up a bit more. (Note to equipment manufacturers: I am not promising to buy the thing if you make it. But it's a thought).
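To get a feel for why that transition-state volume argument matters so much, transition-state theory says the rate constant scales as exp(-ΔV‡·ΔP/RT), where ΔV‡ is the activation volume. Here's a minimal back-of-envelope sketch in Python; the -30 cm³/mol activation volume and 10 kbar pressure are illustrative assumptions (typical textbook values for a cycloaddition or Michael-type reaction), not figures from the JACS paper:

```python
import math

# Transition-state theory: (d ln k / dP)_T = -dV_act / (R*T), so for a
# pressure jump dP with a roughly constant activation volume dV_act:
#   k_P / k_0 = exp(-dV_act * dP / (R * T))
# The numbers below are illustrative assumptions, not from any specific paper.

R = 8.314        # gas constant, J/(mol*K)
T = 298.0        # room temperature, K
dV_act = -30e-6  # activation volume, m^3/mol (-30 cm^3/mol)
dP = 1.0e9       # pressure increase, Pa (10 kbar)

acceleration = math.exp(-dV_act * dP / (R * T))
print(f"Estimated rate acceleration at 10 kbar: {acceleration:.2e}")
```

With those assumed numbers the rate constant comes out several orders of magnitude higher, which is why a congested Diels-Alder or Michael addition that barely moves at ambient pressure can become practical in a pressure reactor.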
+ TrackBacks (0) | Category: Life in the Drug Labs
I was going to take a shot at this article myself, a piece in The Atlantic called "The Triumph of New Age Medicine". But Matthew Herper at Forbes has done the job for me. The original article advances the thesis that modern medicine isn't doing much for chronic diseases, which is why people are turning to acupuncture, et al. Says Herper:
. . .that’s all horse microbiome. Let’s take those one by one. Saying we’re not making strides against heart disease and cancer is just, well, wrong. Look at the below chart of mortality from both, courtesy of the Centers for Disease Control and Prevention. Notice something? They’re both going down. . .Yes, the battle against heart disease and cancer is slow, grinding trench warfare, but that’s because these are diseases written by evolution into our genetic code. And we’re still winning.
He goes on to demolish one of the article's other sweeping claims - that alternative medicine focuses on prevention, but mainstream medicine doesn't. And he's got an interesting reason (which may have occurred to you before) for why most "alternative" therapies have such ardent fans. Hint: there really is a secret ingredient, which has been gradually removed from a lot of modern medical practice. . .
+ TrackBacks (0) | Category: Press Coverage | Snake Oil
June 14, 2011
There have been several headlines about a shortage of classic chemotherapy drugs recently. How do these things happen? This post at Marginal Revolution is the best short overall look at the problem that I've seen so far:
Currently there are about 246 drugs that are in short supply, a record high. These shortages are not just a result of accident, error or unusual circumstance, the number of drugs in short supply has risen steadily since 2006. The shortages arise from a combination of systematic factors, among them the policies of the FDA. The FDA has inadvertently caused drugs long-used in the United States to be withdrawn from the market and its “Good Manufacturing Practice” rules have gummed up the drug production process and raised costs.
As Alex Tabarrok says there, one pebble, or a few, won't dam up a stream. But if you keep throwing them in, something's going to happen, and I think that we've reached that point here. . .
+ TrackBacks (0) | Category: Cancer | Regulatory Affairs
We spend a lot of time thinking about proteins in this business - after all, they're the targets for almost every known drug. One of the puzzling things about them, though, is the question of just how orderly they are.
That's "order" as in "ordered structure". If you're used to seeing proteins in X-ray crystal structures, they appear quite orderly indeed, but that's an illusion. (In fact, to me, that's one of the biggest things to look out for when dealing with X-ray information - the need to remember that you're not seeing something that's built out of solid resin or metal bars. Those nice graphics are, even when they're right, just snapshots of something that can move around). Even in many X-ray studies, you can see some loops of proteins that just don't return useful electron density. They're "disordered". Sometimes, in the pictures, a structure will be put up in that region as a placeholder (and the crystallographers will tell you not to put much stock in it), and sometimes there will just be a blank region or some dotted lines. Either way, "disordered" means what it says - the protein in that region adopts and/or switches between a number of different conformations, with no clear preference for any of them.
And that makes sense for a big, floppy loop that makes an excursion out from the ordered core of a protein. But how far can disorder extend? We have a tendency to think that the intrinsic state of a protein is a more or less orderly one, which we just refer to (if we do at all) as "folded". (You can divide that into two further classes - "properly folded" when the protein does what we want it to do, and "improperly folded" when it doesn't. There are a number of less polite synonyms for that latter state as well). Are all proteins so well folded, though?
It's becoming increasingly clear that the answer is no, they aren't. Here's a new paper in JACS that examines the crystallographic data and concludes that proteins cover the entire range, from almost completely ordered to almost completely disordered. When you consider that the more disordered ones are surely less likely to be represented in that data set, you have to conclude that there are probably a lot of them out there. Even the ones with relatively orderly regions can turn out to have important functions for their disordered parts. The study of these "intrinsically disordered proteins" (IDPs) has really taken off in the last few years. (Here's another paper on the subject that's also just out in JACS, to prove the point!)
So what's a disordered protein for? (Here's one of the key papers in the field that addresses this question). One such would have a number of conformations available to it inside a pretty small energy window, and this might permit it to have different functions, binding to rather different partners without having to do much energetically costly refolding. They could be useful for broad selectivity/low affinity situations and have faster on (or off) rates with their binding partners. (That second new JACS paper linked to above suggests that it's selection pressure on those rates that has given us so many disordered proteins in the first place). Interestingly, several of these IDPs have shown up with links to human disease, so we're going to have to deal with them somehow. Here's a recent attempt to come to grips with what structure they have; it's not an easy task. And it's not like figuring all this stuff out even for the ordered proteins is all that easy, either, but this is the world as we find it.
+ TrackBacks (0) | Category: Biological News
June 13, 2011
Now here's a biotech investing strategy that I haven't come across before. Adam Feuerstein reports on a hedge fund manager, Martin Shkreli of MSMB Capital, who's very much short the stock of a small company called NeoProbe. They're developing a contrast agent for lymph nodes called Lymphoseek, and Shkreli doesn't think very much of their data - thus the short trade.
Not leaving anything to chance, though, he's filed a "citizen petition" with the FDA, maintaining that there are severe problems with the regulatory filings for Lymphoseek and asking the agency to deny a review to the product. At issue is the concept of "standard of care". There's a blue dye that's FDA-approved for this lymph-mapping purpose, but it seems that in actual practice, almost everyone uses it along with a radiosulfur tracer (even though the sulfur colloid isn't specifically approved for that purpose). Lymphoseek's Phase III trials are controlled against the dye alone, which has some people wondering just how meaningful its data will be.
Shkreli discloses his investment position in his FDA petition - there's really nothing underhanded about what he's doing. And as Feuerstein notes, "Citizen petitions are rarely if ever filed for altruistic reasons." But although companies have used them to throw elbows during the regulatory process, this is the first time I've ever heard of a short-seller trying this move.
+ TrackBacks (0) | Category: Business and Markets | Regulatory Affairs
June 10, 2011
A colleague of mine is running a Diels-Alder reaction this morning, and turned out to have never run one before, despite many years of experience in chemistry. (I'd bet, though, that a fair number of chemists who have run the reaction did it in an undergraduate lab and never have since). I've run them - although it's been a while - and I've done the Claisen rearrangement (ditto), the Knoevenagel condensation, the Barbier reaction, and the Henry reaction. I've done plenty of Horner-Emmons-Wadsworth reactions (although not in the last few years), Jones oxidation, Birch reduction, the Arbuzov reaction, and a Chichibabin pyridine synthesis, many years ago. And I've done a Cannizzaro, the Gabriel synthesis, Ferrier rearrangements, the Shapiro reaction, Peterson olefination, and Lindlar reduction. I've run Sandmeyer reactions, the Prins, Staudinger reduction, Ullmann coupling, and Weinreb ketone synthesis. I've done the Wolff–Kishner reduction (once) and Wurtz coupling (once), a Dakin-West (once), a Darzens (once), and a Delepine reaction (once).
But I've never done a straight aldol condensation, at least, not on purpose. And I've never, as far as I can recall, actually done a Fischer indole synthesis, or the lovely Skraup reaction. I've never run a Baylis-Hillman, a Ritter reaction, a Cope rearrangement, a Julia olefination, a Pictet-Spengler, a Nazarov cyclization, nor a pinacol, and I don't think I've ever set up an ene reaction.
So what's on your list? What's the most famous reaction you've never run? Is there some reaction you've always sort of wanted to do, but never had the reason?
+ TrackBacks (0) | Category: Life in the Drug Labs
June 9, 2011
There have been quite a few headlines over the last few days like this one: "A New Drug Makes Hearts Repair Themselves". Unfortunately, that's not quite true. Not yet.
It's this paper in Nature that's getting the attention, and it is a very interesting one. The authors have identified a population of progenitor cells in the adult heart that can be induced to turn into fully differentiated myocytes after an infarction. In fewer syllables, and reasonably accurately: stem cells, already in the heart, can be made to repair it after a heart attack. And that's getting closer to that headline I was just complaining about - so what's the gap between the two?
Well, there are several rather huge factors. One of them is that the way that these cells were stimulated into action was by treatment with thymosin beta-4, which is a potent regulator of cardiac cells and blood vessel development. Tβ4 is not quite a drug yet, although RegeneRx is giving it a shot. There have been some pharmacokinetic studies in animals and other preliminary work, and I wish them every good fortune. But it's got a ways to go.
Second, this study treated the animals with Tβ4 for seven days before inducing the cardiac injury. That's perfectly reasonable for a proof-of-concept study like this one, but it's not the real-world therapeutic option that you'd imagine from the press coverage. As one correspondent put it to me in an e-mail, "if you’re a mouse, and you know that later on this week you’re going to have an MI, then this is the treatment for you". That might be unfair to the original authors, who are working their way up carefully through some very tricky biology, but it's not unfair at all to the people who write headlines like the one I quoted above.
No, this is very interesting stuff, but it's quite a ways from being ready to help any of us out. This is where such therapies start, though, and we can only hope that something makes it through this time. The authors themselves know the score:
". . .The induced differentiation of the progenitor pool described into cardiomyocytes by Tβ4 is at present an inefficient process relative to the activated progenitor population as a whole. Consequently, the search is on via chemical and genetic screens to identify efficacious small molecules and other trophic factors to underpin optimal progenitor activation and replacement of destroyed myocardium.
+ TrackBacks (0) | Category: Cardiovascular Disease | Press Coverage
Nature Reviews Drug Discovery has an interesting survey of academic drug discovery (summary at SciBx here). The authors were motivated, they say, by the large number of opinions and impressions about this topic, with a corresponding lack of actual data - I think they've done everyone a service.
What they found was 78 centers of academic drug discovery (in one form or another) in the US. Cancer and infectious diseases are the most widely worked-on, but tropical and orphan diseases make a strong showing (and I'm glad to see this; they should). Another interesting stat: "49% of targets being investigated are based on unique discoveries that had little validation in the literature".
But when we say "drug discovery", we should really be saying "very early stage drug discovery", with little or no actual development to follow it up. The technologies that these centers report having are almost entirely in the early part of the pipeline - screening, in vitro assay, target ID. Capacity for hit-to-lead chemistry is claimed by 72% of the centers that responded (70% response rate), which, the authors say, shows that ". . .the integration of chemistry into (academic drug discovery) centers has progressed considerably". On the other hand, only half report the ability to do in vivo assays, and less than half can do any metabolism and/or pharmacokinetics. For those who don't do this sort of thing for a living, it's worth pointing out that these functions (all of which are valuable) still only take you to the stage where you can say that you're really getting started.
So what stage are these academic projects at? Assay development and screening, for the most part - even those places with PK and the like don't have much at all in that stage yet, which, the authors say, reflects the fact that most of these centers haven't been operating for very long. (32 of the 56 centers that provided a founding date gave one between 2003 and 2008). And I particularly enjoyed this paragraph:
"Questions regarding comparisons between academic and industrial drug discovery evoked intense and informative responses. Academia was perceived to be much stronger than industry in disease biology expertise and innovation, and was considered to be better aligned with societal goals. . . By contrast, industry was perceived to be much stronger in assay development and screening, and particularly in medicinal chemistry."
I would really enjoy seeing some of the more intense responses! But a very large divide between academia and industry is apparent when the respondents were asked about their centers' priorities. Number 3 was generating intellectual property, but number one? Publications. Half of the centers say that only a quarter of their staff (or less) have industrial experience, but my impression is that these numbers are shifting rapidly - for one thing, a lot of good, experienced people from industry are becoming much more available than they ever thought they'd be.
It's also important to realize that most of this work is being done on a very modest scale. When asked about funding and expenditures, you see a long-tail distribution. A handful of centers report total expenditures in the low tens of millions, but 57% of the responding centers report $2 million or less. I'm not sure if that's per year, or total since the centers were founded, to be honest, but either way, it's not much money at all by the standards of drug research, even the early-stage stuff. Looked at another way, though, if much comes out of these efforts at all, they'll have been cost-effective for sure.
But at that point, they're facing the same problems that the rest of us do. The SciBx piece quotes Bruce Booth, whose blog I link to here regularly. And he's right on target:
“At the end of the day, it's not typically the initial chemical matter that plagues a startup spinning out of academia. Instead it's the validity of the initial biologic hypothesis and whether the biology is relevant to disease.”
+ TrackBacks (0) | Category: Academia (vs. Industry)
June 8, 2011
I haven't read it yet, but there's a new book on the whole "garage biotech" field, which I've blogged about here and here. Biopunk looks to be a survey of the whole movement; I hope to go through it shortly.
I'm still on the "let a thousand flowers bloom" side of this issue, myself, but it's certainly not without its worries. But this is the world we've got - where these things are possible, and getting more possible all the time - and we're going to have to make the best of it. Trying to stuff it back down will, I think, only increase the proportion of harmful lunatics who try it.
By the way, since that's an Amazon link, I should note that I do get a cut from them whenever someone buys through a link on the site, and not just from the particular item ordered. I've never had a tip jar on the site, and I never plan to, but the Amazon affiliate program does provide some useful book-buying money around here at no cost to the readership.
+ TrackBacks (0) | Category: Biological News | Book Recommendations
The Supreme Court has ruled on the Roche-Stanford case that I blogged about here. In short, the dispute centered on the Bayh-Dole act (on commercializing academic research) and sought to clarify under what circumstances university collaborators signed over the rights to their discoveries. (That makes the case sound quite calm and removed from worldly concerns, but you'll see from that earlier post that it was actually nothing of the sort!)
As I and many others had predicted, Roche prevailed. The justices upheld the ruling (7 to 2) from the Court of Appeals for the Federal Circuit that the Stanford researcher(s) involved had indeed signed over rights to Roche, and that this assignment was compatible with existing law. Here's the decision (PDF). Among the key points:
1. Stanford contended that if an invention had been realized with federal funding (NIH, etc.), that the Bayh-Dole Act automatically assigned it to the university involved. The Court noted that there are, in fact, situations where patent rights are treated this way, but that this language is conspicuously missing from Bayh-Dole. Accordingly, the invention belongs to the inventor, until the inventor assigns the rights to it. And in this case, like it or not, the Stanford post-doc involved signed things over to Cetus (as was). This inventorship business goes for industry as well, of course - one of the key pieces of paper that you sign when you join a drug company assigns the rights to whatever inventions you come up with (on company time, and with its resources) to the company. If you don't sign, you don't have a job. And on the flip side, just being employed is not enough for a company to claim an invention - there has to be an explicit statement to that effect.
Here's Justice Roberts on this point:
Stanford’s contrary construction would permit title to an employee’s inventions to vest in the University even if the invention was conceived before the inventor became an employee, so long as the invention’s reduction to practice was supported by federal funding. It also suggests that the school would obtain title were even one dollar of federal funding applied toward an invention’s conception or reduction to practice. It would be noteworthy enough for Congress to supplant one of the fundamental precepts of patent law and deprive inventors of rights in their own inventions. To do so under such unusual terms would be truly surprising. . .
You might be wondering if this argument bears on the contentions of people who claim that hey, it's all NIH money in the end, so drug companies do nothing but leech off public money, right? Why yes, yes it does. Justice Breyer (joined by Justice Ginsburg) dissents, saying that the intent of Bayh-Dole is to commercialize research, and not having title automatically assign to the university (or other recipient of federal funding) undercuts this substantially. There's a lot of talk in the dissent about the background of the act, about its real intentions, and about how it's supposed to work. And I can see the force of those arguments - but to me, they don't overcome the fact that if Congress wanted Bayh-Dole to work that way, they could have written it that way. And, in fact, they still can, if they decide that this decision illuminates a flaw that they'd like to address. Until then, though, I feel safer with the statutory language that's in there already, and how it compares to other, similar laws.
+ TrackBacks (0) | Category: Academia (vs. Industry) | Patents and IP
June 7, 2011
I found this article in The American Scholar via Arts and Letters Daily, entitled "Flacking for Big Pharma". As you might have possibly guessed from the title, it's a broadside against the advertising practices of the drug industry, and particularly against its interactions with physicians and the medical journals.
And I'll say up front that the piece is not, in fact, completely wrong. It's probably not even mostly wrong. There really are big problems in these areas, such as too-aggressive promotion, minimization of side effects, too many payments to "key opinion leaders", too many studies that don't see the light of day, and so on. And these things really do lower the respect that people have for the drug industry - assuming, by this point, that there's much respect left. But overall, this article is sort of a summary version of Marcia Angell's book, for people who would like to hate the drug industry but find themselves pressed for time. And as such, it manages to get some important things wrong in the process of getting some things right.
For example, it makes much of subgroup analysis of clinical trials as a way for drug companies to pull the wool over readers' eyes. I wonder how much this really happens, though, since overzealous data mining of a trial that wasn't powered to generate such conclusions is (you'd think) a well-known pitfall by now. Perhaps not, though. But the example given in the article is BiDil:
BiDil proponents published studies that supported their claim of a racially mediated genetic anomaly that was addressed by BiDil, making it an ideal drug for blacks but not for whites. . .
NitroMed won FDA approval of a new trial that included only 1,050 black subjects, with no white subjects to provide comparison data. Furthermore, BiDil was not tested alone, but only in concert with heart medications that are already known to work, such as diuretics, beta-blockers, and angiotensin-converting enzyme (or ACE) inhibitors. The published results of the trial were heralded as a success when subjects taking the drug combinations that included BiDil enjoyed 43 percent fewer heart-failure deaths.
. . .excluding whites was a medically illogical but financially strategic move because it eliminated the possibility that the drug would test well in whites, thereby robbing NitroMed of its already thin rationale for calling BiDil a black drug. The “black” label was crucial, because BiDil’s patent covering use in all ethnic groups expired in 2007, but the patent for blacks only allows NitroMed to profit from it until 2020. BiDil is a case study in research methodology “flaws” that mask strategies calculated to make a dodgy drug look good on paper, for profit.
But this doesn't appear to be correct. First off, as the article itself mentioned earlier, the BiDil combination was originally tested (twice) in racially mixed (in fact, I believe, mostly white) trial groups. Secondly, the 1,050-patient trial in black patients was done with other therapies because to do otherwise would be unethical (see below). And what you wouldn't realize by reading all this is that BiDil, in fact, was a failure. No one's making piles of profits on BiDil until 2020, especially not NitroMed. You wouldn't even know that NitroMed itself gave up trying to sell BiDil three years ago, and that the company itself was acquired (for a whopping 80 cents a share) in 2009.
Now, about those placebo-controlled trials. This article makes much of a British Medical Journal satire from 2003 on how to make a drug look good. But it's confused:
A placebo, such as a sham or “sugar” pill, has no active ingredient, and, although placebos may evoke some poorly understood medical benefits, called the “placebo effect,” they are weak: medications tend to outperform placebos. Placebo studies are not ethical when a treatment already exists for a disorder, because it means that some in the study go untreated. However, if you care only that your new drug shines in print, testing against placebo is the way to go.
Well, which is it? We can't, in fact, run placebo-controlled trials just to "shine in print" when there's a standard of care, you know. You can only do that when there's no standard of care at all. And in those cases, what exactly should we use as a comparison? Using nothing at all (no pills, nothing) would, in fact, make our drugs look even better than they are, because of that placebo effect. This is a specious objection.
And when there's a standard of care that a new drug will be added to (as was the case with BiDil), then you actually do have to run it with those therapies in place, at least when you get to Phase III. The FDA (and the medical community) want to know how your drug is going to perform in the real world, and if patients out in that real world are taking other medications, well, you can't pretend that they aren't.
In another section, the article makes much of the Merck/Elsevier affair, where Elsevier's "Excerpta Medica" division set up some not-really-journals in Australia (blogged about here). That was, in fact, disgraceful (as I said at the time), but disgraceful apparently isn't enough:
. . .Elsevier, the Dutch publisher of both The Lancet and Gray’s Anatomy, sullied its pristine reputation by publishing an entire sham medical journal devoted solely to promoting Merck products. Elsevier publishes 2,000 scientific journals and 20,000 book-length works, but its Australasian Journal of Bone and Joint Medicine, which looks just like a medical journal, and was described as such, was not a peer-reviewed medical journal but rather a collection of reprinted articles that Merck paid Elsevier to publish. At least some of the articles were ghostwritten, and all lavished unalloyed praise on Merck drugs, such as its troubled painkiller Vioxx. There was no disclosure of Merck’s sponsorship. Librarian and analyst Jonathan Rochkind found five similar mock journals, also paid for by Merck and touted as genuine. The ersatz journals are still being printed and circulated, according to Rochkind, and 50 more Elsevier journals appear to be Big Pharma advertisements passed off as medical publications. Rochkind’s forensic librarianship has exposed the all-but-inaccessible queen of medical publishing as a high-priced call girl.
Fifty journals? Really? As far as I can tell, that figure comes from this analysis at the time, and seems to be mostly nonce publications, one-off conference proceedings, and the like. There is a whole list of "Australasian Journal of So-and-Sos", which would be the same reprint advertorials as the other Excerpta Medica stuff, but do these still exist? (Did all of them on the list, in fact, ever actually publish anything?)
You'd get the impression that Elsevier is (or was, until Big Pharma came along) an absolute shining pinnacle of the medical establishment - but, with apologies to the people I know who work there, that is unfortunately not the case. They're big, and they're very far from the worst scientific publishers out there, but some of their titles are, in fact, not adding much to the total of human knowledge. Nor has the conduct of their marketing department always been above reproach. But no, this has to be built up to look even worse than it is.
The irritating thing is that there's plenty to criticize about this industry without misrepresenting reality. But does that sell?
+ TrackBacks (0) | Category: Press Coverage | The Dark Side | The Scientific Literature | Why Everyone Loves Us
Well, one day after writing an obit for the XMRV story comes this abstract from Retrovirology. The authors, from Cornell and SUNY-Buffalo, say that they've detected other murine retrovirus transcripts in CFS patients (but not in most controls), and that these are more similar to those reported in last year's Lo and Alter paper in PNAS than they are to XMRV itself.
So perhaps the story continues, and what a mess it is at this point. I continue to think that the XMRV hypothesis itself is in serious trouble, but murine retroviruses as a class are still worth following up on. This is tough work, though, because of the twin problems of detection and contamination, and it's going to be easy for people to fool themselves.
Meanwhile, Retraction Watch has more on Science's "Expression of Concern" that I wrote about yesterday. It appears that the journal asked the authors to retract the paper (so says the Wall Street Journal, anyway) but that co-author Judy Mikovits turned them down (as might have been expected from her previous stands in this area). Science released their editorial note early because of the WSJ piece.
+ TrackBacks (0) | Category: Infectious Diseases | The Scientific Literature
June 6, 2011
Interesting post from Milkshake over at Org Prep Daily on solvents that don't get used as much as they might in synthetic chemistry. Among them: trifluoroethanol, methyl t-butyl ether, and 1-methoxy-2-propanol. Definitely worth a look for those of us who are trying to get things to work at the bench - other nominations welcomed in the comments.
And if you're looking for someone to do that, I believe that Milkshake himself is still looking for a position (unpaid advertisement!).
+ TrackBacks (0) | Category: Life in the Drug Labs
I meant to blog on this late last week, but (in case you haven't seen it) the whole putative link between XMRV and chronic fatigue syndrome seems now to be falling apart. If you want to see the whole saga via my blog posts and the links in them, then here you go: October 2009 - January 2010 - February 2010 - July 2010 - January 2011. At that last check-in, the whole thing was looking more like an artifact.
And now Science is out with a paper that strongly suggests that the entire XMRV virus is an artifact. It looks like something that's produced by the combination of two proviruses during passaging of the cells where it was detected, and the paper suggests that other human-positive samples are the result of contamination. Another paper is (again) unable to replicate detection of XMRV in dozens of samples which had previously been reported as positive, and finds some low levels of murine virus sequences in commercial reagents, which also fits with the contamination hypothesis.
With these results in print, Science has attached an "Editorial Expression of Concern" to the original 2009 XMRV/CFS paper, which touched off this whole controversy. My take: while there are still some studies ongoing, at this point it's going to take a really miraculous result to bring this hypothesis back to life. It certainly looks dead from here.
There will also be some people who ask whether Science did the world a favor by publishing the original paper in the first place. But on balance, I'd rather have things like this get published than not, although in hindsight it's always easy to say that more experiments should have been done. The same applies to the arsenic-bacteria paper, another one of Science's recent bombshells. I'm not believing that one, either, at this point - not until I see a lot more supporting data - but in the end, I'm not sad that it was published, either. I think we're better off erring a bit on the wild-ideas end of the scale than clamping down too hard. That said, you do have to wonder if Science in particular is pushing things a bit too hard, itself. While I think that these ideas deserve a hearing, it doesn't necessarily have to be there.
+ TrackBacks (0) | Category: Infectious Diseases | The Scientific Literature
June 3, 2011
Today's Wall Street Journal has the news that the Liang insider-trading case may not be the end of the story at the FDA. The SEC has amended their complaint against Liang, adding another company's stock and another relief defendant (Liang's 87-year-old father in Shanghai, who also had his name on a brokerage account). Here's a chart of his trading, for those who are interested - make note that he did manage to lose money twice, on Pozen and Mannkind, but otherwise he hit 'em over the fence, time after time, in a most unnatural manner. (The chart also backs up my earlier speculation that Liang's trading in Vanda was what rang the alarm bells - it's far and away the biggest on the whole list).
But the Journal says that "people familiar with the case" think that Liang may have involved several other federal employees. The word is also that the insider trading may have begun well before 2006, and run to a lot more money than has been totaled so far. This might account for the delay in hearing the case - it could well be that the SEC is trying to get Liang to implicate more people as part of a plea-bargain deal. Another scientist at the FDA is apparently involved, according to the paper's sources, but that's all anyone is saying.
Rather weirdly, Liang appears to have refinanced his house ten times in the last few years - four times in as many months at one point - and taken out $350,000 in equity lines against the property. I'm not sure if he was rounding up more capital for his trading business or what. . .
+ TrackBacks (0) | Category: Regulatory Affairs | The Dark Side
It's not just the US where these single-digit-employee drug companies are going - here's an article from Cambridge (UK) on Sareum, which has two. Unfortunately, they got that way by shedding three dozen other employees, so it's not quite the same situation as I was talking about the other day, but it's interesting that the outsource-the-whole-thing model is alive in so many places.
If this is going to work, I think this is the scale it's going to work at. With a handful of people (and only one or two projects), you can keep a close eye on things, especially if you source as much of the crucial work as possible closer to home. I worry, though, that this is yet another idea that doesn't scale well, which is why I think Pfizer is asking for trouble.
+ TrackBacks (0) | Category: Business and Markets
June 2, 2011
Your genome - destiny, right? That's what some of us thought - every disease was going to have one or more associated genes, those genes would code for new drug targets, and we'd all have a great time picking them off one by one. It didn't work out that way, of course, but there are still all these papers out there in the literature, linking Gene A with the chances of getting Disease B. So how much are those worth?
While we're at it, everyone also wanted (and still wants) biomarkers of all kinds. Not just genes, but protein and metabolite levels in the blood or other tissue to predict disease risk or progression. I can't begin to estimate how much work has been going into biomarker research in this business - a good biomarker can clarify your clinical trial design, regulatory picture, and eventual marketing enormously - if you can find one. Plenty of them have been reported in the literature. How much are those worth, too?
Not a whole heck of a lot, honestly, according to a new paper in JAMA by John Ioannidis and Orestes Panagiotou. They looked at the disease marker highlights from the last 20 years or so, the 35 papers that had been cited at least 400 times. How good do the biomarkers in those papers have to be to be useful? An increase of 35% in the chance of getting the targeted condition? Sorry - only one-fifth of them rise to that level, when you go back and see how they've held up in the real world.
Subsequent studies, in fact, very rarely show anything as strong as the original results - 29 of the 35 biomarkers show a less robust association after meta-analysis of all the follow-up reports, as compared to what was claimed at first. And those later studies tend to be larger and better powered - in only 3 cases was the highly cited study the largest one that had been run, and only twice did the largest study show a higher effect measure than the original highly cited one. Only 15 of the 35 biomarkers were nominally statistically significant in the largest studies of them.
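For the statistically inclined, the shrinkage pattern Ioannidis and Panagiotou describe is just what you'd expect from the "winner's curse": if only the small studies that clear a significance threshold get published (and cited), the published effect sizes will overstate the truth, and bigger follow-ups will look disappointing. Here's a quick back-of-the-envelope simulation of that mechanism - the effect size and standard error are made-up illustrative numbers, not anything from the JAMA paper:

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2   # the (hypothetical) true standardized effect
SE_SMALL = 0.15     # standard error of a small initial study
N_SIM = 10_000      # number of simulated small studies

# Each small study estimates the effect with noise. "Publish" only the
# ones that clear a two-sided 5% significance threshold (z > 1.96).
estimates = (random.gauss(TRUE_EFFECT, SE_SMALL) for _ in range(N_SIM))
published = [e for e in estimates if e / SE_SMALL > 1.96]

# The published studies systematically overstate the true effect,
# so larger follow-up studies will "fail to replicate" the magnitude.
print(f"true effect:              {TRUE_EFFECT}")
print(f"mean published estimate:  {statistics.mean(published):.3f}")
print(f"fraction ever published:  {len(published) / N_SIM:.1%}")
```

With these particular numbers, the published studies report an effect nearly double the true one - no fraud or sloppiness required, just selective survival of lucky results.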
Ioannidis has been hitting the literature's unreliability for some time now, and I think that it's hard to dispute his points. The first thought that any scientist should have when an interesting result is reported is "Great! Wonder if it's true?" There are a lot of reasons for things not to be (see that earlier post for a discussion of them), and we need to be aware of how often they operate.
+ TrackBacks (0) | Category: Biological News | The Scientific Literature
June 1, 2011
You'll all remember the big news about the arsenic-using bacteria - that Science paper from last December. What you may not realize is that the paper is only now coming out in print. The delay seems to have been to allow time for an extraordinary number of responses to be published at the same time. I'll summarize those, and the counterarguments made by the original authors.
Rosie Redfield of UBC, whose blog was one of the earliest criticisms of the paper, objects that the culture media used were not pure. She maintains that there was enough phosphate in the growth medium to account for all the cell growth seen, without having to invoke arsenic-containing DNA. She also has a problem with the way that the DNA fractions in the original paper were (not) purified, pointing out that the procedures used could easily drag along many contaminants.
In response, Wolfe-Simon et al. don't find the trace-phosphorus objection compelling, they say, because the arsenic-stimulated organisms were grown under the same P background as the controls at that point, and the arsenic group grew much better. As for the DNA purification, they go over their procedures, state that they didn't see evidence of particulate contamination, and point out that negatively-charged arsenate is unlikely to stick to DNA unless it's covalently bound.
A team from CNRS and JPL makes the point (as others did at the time of first publication) that arsenic's own redox chemistry makes the original assertion hard to believe. Under all known physiological conditions, arsenate should be less stable than arsenite, and arsenite can't be a plausible substitute for phosphate (even if you buy that arsenate can). They also believe that the bacteria are running on residual phosphorus: "GFAJ-1 appears to do all it can to harvest P atoms from the medium while drowning in As. . ."
Wolfe-Simon et al. reply by saying that they specifically looked for reduced arsenic species in the cells, without success, and suggest that something must be stabilizing arsenate that no one has yet seen or considered.
Another team response, from Hungary and Johns Hopkins, objects to the way that the P:As ratios were calculated in the paper. The error for the dry-weight arsenic percentage in the bacteria is larger than the value itself, so you can't really be sure that there was any arsenic there at all. The mass spec data used in the paper, they say, also have such high fluctuations as to make the numbers unable to support the paper's claims.
In response, Wolfe-Simon et al. say that they don't find the arsenic numbers to be all that variable, considering the conditions. And the phosphorus numbers don't vary much at all, by comparison, and the arsenic numbers are always higher.
Stefan Oehler, from Greece, asks why density gradient centrifugation of the supposed arsenic-containing DNA wasn't done (as did other observers when the paper came out). As-DNA should be heavier. Comparing hydrolysis rates of the As-DNA with the normal phosphate form "could also have been easily done", and he says that without these data, the paper is unconvincing. One major suggestion he has is to see how and where the bacteria incorporate radioactive arsenic.
David Borhani (ex-Abbott) has objections that are similar to some of the others. He's not convinced that the "-P" media really don't have enough phosphorus left in them to explain the results, and says that the agarose gels shown are hard to square with the paper's claims. (The phosphorus-containing DNA looks more degraded than the putative arsenic-containing sample, for example, and the DNA being compared is of different sizes to start with). He has the same problems with the error bars as mentioned above.
Steven Benner (who, interestingly, appeared at the original press conference back in December, albeit not as a cheerleader), comes at the problem from a chemical angle. The rate constants for arsenate hydrolysis give you an expected half-life for such esters inside a cell of seconds to minutes (at best), which doesn't seem feasible for use in biomolecules. He goes over several possibilities for ways to make such linkages more stable - or for judging the literature on arsenate stability to be wrong - and can't make any of them work. Another big problem is that the phosphates in DNA have to survive as such for numerous steps in the cell, and it's hard to see how arsenate could substitute across such a wide range of biochemistry. He'd also like to see the As-DNA subjected to hydrolysis and to enzymes such as DNA kinase or exonuclease, to see how it behaves. "Above all", he says, do the radioactive arsenic experiment.
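The arithmetic behind Benner's objection is just first-order kinetics: for a spontaneous hydrolysis with rate constant k, the half-life is t½ = ln(2)/k. Here's a quick sketch - the rate constants below are illustrative stand-ins of my own (phosphodiester bonds in water are famously stable, with uncatalyzed half-lives on geological timescales, while arsenate esters go in seconds to minutes), not numbers from Benner's letter:

```python
import math

def half_life_seconds(k_per_second: float) -> float:
    """First-order decay: t1/2 = ln(2) / k."""
    return math.log(2) / k_per_second

# Illustrative, order-of-magnitude rate constants (my assumptions,
# chosen only to show the scale of the gap between the two esters):
K_ARSENATE_ESTER = 2e-2    # per second -> half-life of seconds
K_PHOSPHODIESTER = 1e-15   # per second -> half-life of geological length

for name, k in [("arsenate ester", K_ARSENATE_ESTER),
                ("phosphodiester", K_PHOSPHODIESTER)]:
    t = half_life_seconds(k)
    print(f"{name}: t1/2 ~ {t:.2e} s ({t / 3.15e7:.2e} years)")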
In response, Wolfe-Simon et al. say that there's very little data on the stability of arsenate esters of anything but very small molecules - steric hindrance, among other things, would be expected to make the bioesters more stable. They refer to a paper showing that arsenate esters of glucose were much more stable than expected, for example.
Patricia Foster of Indiana suggests that the process of raising the GFAJ-1 bacteria selected for mutants that have lost their phosphate inorganic transporter (Pit) system, but have pumped up their phosphate-specific transport (Pst) system. It's been shown in E. coli, she points out, that arsenate poisons the former transporter, but actually stimulates the latter, which would account for the apparent stimulatory effect of arsenic on GFAJ-1.
Wolfe-Simon et al. respond by saying that if the Pst pathway were stimulated, they'd expect to see evidence of arsenate detoxification pathways (thioarsenate, methylation, reduction), and they don't. (That seems weird to me - surely the organism, no matter what, is seeing a lot more arsenate than it can use, and would have to do some of these things?)
Finally, James Cotner and Edward Hall of Minnesota and Vienna, respectively, note that their own work was cited in the original paper on the phosphorus content of bacteria. They object, though, saying that their phosphorus-rich experiments make a poor comparison with the GFAJ-1 case. In fact, they say, they've now published a survey of the elemental content of freshwater bacteria, and that these can actually be highly depleted in phosphorus. The phosphorus content measured in GFAJ-1 does not, in fact, fall outside of the range seen in organisms grown under naturally P-limiting conditions.
Wolfe-Simon et al. reply that Cotner and Hall's numbers are taken from individual bacteria at the low end of the range, not whole populations, making them a poor comparison. Their whole-population values, they say, are actually similar to their own phosphorus control cultures, and are both higher than the arsenate-grown bacteria.
So, in the end, the authors are sticking to their original arsenic hypothesis. They agree that analyzing DNA after separating it from the gels would be a useful experiment (as Redfield and others propose), and they also say that they did not mean to suggest that the GFAJ-1 bacteria have a "wholesale" substitution of arsenate for phosphate, just that they do have some. And they're making GFAJ-1 available to people who want to take a crack at their own experiments.
This is really a remarkable exchange, mostly due to its sheer concentration in time and in one place of publication. This is exactly how science is done, although it usually happens a bit more slowly and in a more disorganized fashion than what we're seeing here. But these extraordinary claims have brought an extraordinary response.
I think that things have gone as far as they can with the data from the original paper, and it's fair to say that that's not far enough to convince a lot of people. Next step: more data, and more experiments. One way or another, this will get detangled.
+ TrackBacks (0) | Category: General Scientific News