About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
December 24, 2012
Blogging will be light and irregular around here until after the first of the year. I'll probably post a recipe or two, as I tend to during this season, but unless something gigantic happens in the science/chemistry/pharma world, I'll be taking a blogging break. I hope that everyone out there celebrating Christmas (or other such midwinter holidays) has an enjoyable time of it. I'm lounging around myself: making cookies, roasting a leg of lamb, wrapping presents, and wishing that the full moon didn't fall smack in the middle of the break so that I could get my telescope out. Although, come to think of it, it's supposed to cloud over and snow, which I guess is Christmas-y enough. See everyone in the new year!
+ TrackBacks (0) | Category: Blog Housekeeping
December 21, 2012
Hey, it's not midnight yet in Guatemala. Well, OK, it's not C&E News, it's Chemjobber, but it should have been C&E News. . .
+ TrackBacks (0) | Category: Current Events
Merck's Tredaptive (formerly Cordaptive) has had a long and troubled history. It's a combination of niacin and laropiprant, which is there to try to reduce the cardiovascular (flushing) side effects of large niacin doses, which otherwise seem to do a good job improving lipid profiles. (Mind you, we don't seem to know how that works, and there's a lot of reason to wonder how well it works in combination with statins, but still).
The combination was rejected by the FDA back in 2008, but approved in Europe. Merck has been trying to shore up the drug ever since, and since the FDA told them that they would not approve without more data, the company has been running a 25,000-patient trial (oh, cardiovascular disease. . .) combining Tredaptive with statin therapy. In light of the last link in the paragraph above, one might have wondered how that was going to work out, since the NIH had to stop a large niacin-plus-statin study of their own. Well. . .
The European Medicines Agency has started a review of the safety and efficacy of Tredaptive, Pelzont and Trevaclyn, identical medicines that are used to treat adults with dyslipidaemia (abnormally high levels of fat in the blood), particularly combined mixed dyslipidaemia and primary hypercholesterolaemia.
The review was triggered because the Agency was informed by the pharmaceutical company Merck, Sharp & Dohme of the preliminary results of a large, long-term study comparing the clinical effects of adding these medicines to statins (standard medicines used to reduce cholesterol) with statin treatment alone. The study raises questions about the efficacy of the medicine when added to statins, as this did not reduce the risk of major vascular events (serious problems with the heart and blood vessels, including heart attack and stroke) compared with statin therapy alone. In addition, in the preliminary results a higher frequency of non-fatal but serious side effects was seen in patients taking the medicines than in patients only taking statins.
So much for Tredaptive, and (I'd say) so much for the idea of taking niacin and statins together. And it also looks like the FDA was on target here when they asked for more evidence from Merck. Human lipid biology, as we get reminded over and over, is very complicated indeed. The statin drugs, for all their faults, do seem to be effective, but (to repeat myself!) they also seem, more and more, to be outliers in that regard.
+ TrackBacks (0) | Category: Cardiovascular Disease | Clinical Trials | Toxicology
This can't be good. A retraction in PNAS on some RNA-driven cell death research from a lab at Caltech:
Anomalous experimental results observed by multiple members of the Pierce lab during follow-on studies raised concerns of possible research misconduct. An investigation committee of faculty at the California Institute of Technology indicated in its final report on this matter that the preponderance of the evidence and the reasons detailed in the report established that the first author falsified and misrepresented data published in this paper. An investigation at the United States Office of Research Integrity is ongoing.
As that link from Retraction Watch notes, the first author himself was not one of the signees of that retraction statement - as one might well think - and he now appears to be living in London. He appears to have left quite a mess behind in Pasadena.
+ TrackBacks (0) | Category: Biological News | The Dark Side | The Scientific Literature
I don't know how I've missed posting on this, but according to a recent survey, the "happiest company in America" is. . .wait for it; you'll never guess. . .Pfizer! Yep, the list apparently "honors the 50 companies that are most dedicated to cultivating happy work environments", and it's just hard to think of any large organization that's been more dedicated to that cause over the last few years than Pfizer. Right?
+ TrackBacks (0) | Category: Business and Markets
December 20, 2012
Tiny Allon Therapeutics had an ambitious plan to go after progressive supranuclear palsy, a kind of progressive brain deterioration, and thence (they hoped) to other neurodegenerative disorders. The lead compound was davunetide, an oligopeptide derived from activity-dependent neuroprotective protein, ADNP.
It was a reasonable idea, but neurodegeneration is not a reasonable area. The drug has now completely wiped out in the clinic, failing both primary endpoints in its pivotal trial. This is one example of the sort of research that most people don't ever hear about, from a small company that most people will never have heard of at all. But this is the background activity of drug research (with an all-too-common outcome), and if more people were aware of it, perhaps that would be a good thing (see today's other post).
+ TrackBacks (0) | Category: Clinical Trials | The Central Nervous System
John LaMattina (ex-head of Pfizer's global R&D) has a new book out about the industry, called Devalued and Distrusted. He tells Pharmalot that he got the idea to write a sequel to his earlier book, Drug Truths, when he appeared on the "Dr. Oz" show:
. . .and out of the blue last year, I got a call from The Dr. Oz show and they had a guest who wrote a book that was about America being overdosed. And when I got there, I saw a banner that says ‘four secrets drug companies don’t want you to know.’ I realized that I never thought to ask the title of the show. . .It was a pretty long half hour and there were pretty much the standard attacks on industry – inventing diseases, prescribing drugs you don’t need. When I left, I clearly got the impression that the message needs to get out more. I was the only one from industry and there were all sorts of attacks. And everybody takes for granted that everything they say is absolutely right. So I decided to write a balanced book that deals with some of these issues.
Good luck to it, to him, and to us. I hope it reaches some of the people who need to hear it (which is one reason I'm highlighting it here), but I think that the ignorance out there (some of which is willful) is thick, deep, and dense. People feel differently about their health (and their health care) than they do about most other things in their lives, and there are always people ready to exploit that difference. Drug companies do so, with what I continue to think are net positive results, although there are entries on both sides of that ledger. And the people who go on about Evil Pharma. . .well, in many cases, they too have something to sell. A book or newsletter of their own, their services as a consultant or a guest on the next TV show, ads from their web site, a line of nutritional supplements and herbal wonder pills, what have you.
It's human nature to enjoy having enemies, too - something to define yourself against. It would be good to have the drug companies serve that role less often, though, and the best way to do that, I think, is still to try to help people to understand what it's like to actually discover and develop a drug. (LaMattina's been trying to get that across, too). But not everyone wants to hear about that, or will believe it when they do. There's some part of the population that believes (sometimes quite correctly) that there's something wrong with their health, and moreover, some of them believe (sometimes quite incorrectly) that this must be someone else's fault. Likely as not, some of these people will tell you, it was Big Pharma, who either made them sick in the first place, made them sicker once they took their drugs, or is to blame for not providing any drugs for them to take at all.
+ TrackBacks (0) | Category: Why Everyone Loves Us
December 19, 2012
Well, I've been away from the computer a good part of the day, but I return to find that the author of the NSF press release that I spoke unkindly of has shown up in the comments to that post. I'm going to bring those up here to make sure that his objections get a fair hearing:
I wrote this press release, and I am a bit concerned that instead of discussing the research with myself, or more importantly the researchers, you decide to attack the text.
We presented information based on research that has been underway for some time, at least two years with NSF peer-reviewed support.
Additionally, we were careful to not overstate either the technology or the impact, but to present an illustration of what the technology can do in the limited space that a press release allows.
A journalist is expected to follow the initial reading of the press release with questions for the researchers involved -- not attack the limited text that we provide as an introduction.
In my eleven years at NSF, I have never had someone attack my work -- particularly without first getting their facts straight.
Please contact the researchers to discuss the technology and limit your criticism for those thongs for which you are informed.
Media Officer for Engineering
National Science Foundation
(To add, my supervisor pointed out a stellar typo in my last line.
I'm fear that's where the discussion will go next, but if you do wish to learn more about the actual research you are disparaging, please do contact the researchers to learn more about the technology and the approach.)
Several regular readers have already responded in the comments section to that earlier post, making the point that experienced drug discovery scientists found the language in the press release hard to believe (and reminiscent of overhyped work from the past). Josh Chamot's response is reproduced here:
Thank you for the thoughtful responses. This is exactly the engagement I was hoping for.
First, I agree that hype is never what we want to communicate -- and I appreciate that skepticism is critical to ensuring accuracy and the complete communication of news. However, I do hope many of you will explore the research further so that any skepticism is completely informed.
I want to be clear that I have no intention of misleading the research or pharma communities, nor do I want to give false hope to those who might need any of the treatments that we referenced. Our language was intended to convey that the breakthrough to date is exciting, but clearly more work is needed before this can start producing drugs for patients -- and I believe we stated this.
Through links to additional information (such as the full patent application) and clear contact information for the principal investigator, it is our hope that the primary audience for the press release (reporters) will present a thorough and complete account of the work.
We do not wish to mislead, but we also cannot convey a full news story in press release format. The intent is to serve as an alert, and importantly, an accurate one.
Journalists are the primary audience for the press releases, and our system of information is reliant on their services. To the best of my knowledge, the information we presented on Parabon is accurate and states only results that Parabon has demonstrated and announced in their patent application -- the starting point for a journalist to explore the story further.
As background, the pieces I work on cover research efforts that are originally proposed to NSF in a review process informed by peers in the community. Parabon has received both Phase I and Phase II NSF small business funding, so they had succeeded in that competitive peer review twice.
That setting served as a baseline to inform my office that the research approach was a valid starting point -- however, as with almost all NSF research, this is research at the very earliest stages. I can accept that while I wrote the release to reflect this, I was not successful in conveying this clearly. However, the assertions that data in support of the research effort do not exist are incorrect.
The company first came to our office (public affairs) more than two years ago, and it is only now that the company had enough publicly available information for us to pull together an announcement of the technology and some introduction of how it works.
I have some lessons learned here in how to try to clarify caveats, but I stand by my original assertion that the research is valid and exciting. While I have no way to predict Parabon's ultimate success, I do believe that public discussion of their technique can only prove of value to the broader drug development effort -- including the identification of any obstacles that this, or a similar technique, must overcome.
I think what I'll do now is close off the comments to the previous post and have things move over to this entry, with appropriate pointers, so we don't have two discussions going on at the same time. Now, then. I'm not blaming Mr. Chamot for what went out on the wires, because I strongly suspect that he worked with what he was given. It's the people at Parabon that I'd really like to have a word with. If the press release is an accurate reflection of what they wanted to announce, then we have a problem, and it's not with Josh Chamot.
I realize that a press release is, in theory, supposed to be for the press - for reporters to use as a starting point for a real story. But how many of them do that, versus just rewording the release a bit? There are reporters who could pick up on all the problems, but there are many others who might not. The information in the Parabon release, as it stands, makes little sense to those of us who do drug discovery for a living, seems full of overstated claims, and raises many more questions than it answers. Specialists in the field (as many readers here are) will have an immediate and strong reaction to this sort of thing.
And that's one of the purposes of this blog (and of many others): to bring expertise out into the open, to provide people within some specialized area a chance to talk with each other, and to provide people outside it (anyone at all) a chance to sit in and learn about things they otherwise might never hear discussed. I think that the process that Mr. Chamot has described is an older one: scientists describe a discovery of theirs to some sort of press officer, who puts it into some useful and coherent form in order to get the word out to reporters, who then can contact the people involved for more details as they write up their stories for a general readership. That's fine, but these days that whole multistep procedure is subject to disintermediation. And that's what we're seeing right now.
+ TrackBacks (0) | Category: Chemical Biology | Press Coverage
December 18, 2012
Here's an interesting challenge: over at Synthetic Remarks, there's a need for a couple of grams of 3,4-difluorothiophene. But you can't buy that much, and the literature has very little useful to say about how one would make it. So is there a practical route to the stuff (at least on paper) that's worth trying? Note that Dr. Freddy stipulates "No Sandmeyer crap, for heaven's sake", so no Balz-Schiemann chemistry, folks.
The prize? Any chemistry book worth up to $150 from Amazon, sent to your door. (Just think of the possibilities) So if any of you have any bright fluorination ideas, have a crack at it, and good luck!
+ TrackBacks (0) | Category: Chemical News
I'm having a real problem understanding this press release from the NSF. I've been looking at it for a few days now (it's been sent to me a couple of times in e-mail), and I still can't get a handle on it. And I'm not the only one. I see just this morning that Chemjobber is having the same problem. Here, try some. See how you do:
Using a simple "drag-and-drop" computer interface and DNA self-assembly techniques, researchers have developed a new approach for drug development that could drastically reduce the time required to create and test medications. . ."We can now 'print,' molecule by molecule, exactly the compound that we want," says Steven Armentrout, the principal investigator on the NSF grants and co-developer of Parabon's technology. "What differentiates our nanotechnology from others is our ability to rapidly, and precisely, specify the placement of every atom in a compound that we design."
Say what? Surely they don't mean what it sounds like they mean. But they apparently do:
"When designing a therapeutic compound, we combine knowledge of the cell receptors we are targeting or biological pathways we are trying to affect with an understanding of the linking chemistry that defines what is possible to assemble," says Hong Zhong, senior research scientist at Parabon and a collaborator on the grants. "It's a deliberate and methodical engineering process, which is quite different from most other drug development approaches in use today."
OK, enough. I'd love for atom-by-atom nanotech organic synthesis and precisely targeted drug discovery to be a reality, but they aren't. Not yet. The patent application referenced in the press release is a bit more grounded in reality, but not all that much more:
The present invention provides nanostructures that are particularly well suited for delivery of bioactive agents to organs, tissues, and cells of interest in vivo, and for diagnostic purposes. In exemplary embodiments, the nanostructures are complexes of DNA strands having fully defined nucleotide sequences that hybridize to each other in such a way as to provide a pre-designed three dimensional structure with binding sites for targeting molecules and bioactive agents. The nanostructures are of a pre-designed finite length and have a pre-defined three dimensional structure
Ah, and these complexes of DNA strands will survive after in vivo dosing just exactly how? And will be targeted, via that precisely defined structure, just how? And bind to what, exactly, and with what sort of affinities? And are the binding sites on these DNA thingies, or do they bind to other things, anyway? No, this is a mess. And this press release is an irresponsible mishmosh of hype. I'd be glad to hear about some real results with some real new technology, and I'd like to ask the Parabon people to cough some up. I'd be equally glad to feature them on this blog if they can do so, but not if they're going to start talking like they're from the future and come to save us all. Sheesh.
Update: the discussion on this press release features a number of interesting comments. It's now moved over to this post, for reasons explained there. Thanks!
+ TrackBacks (0) | Category: Chemical Biology | Press Coverage
Drug research consultant Bernard Munos popped in the comments here the other day and mentioned this story from 2010 in the Indianapolis Business Journal. That's where we can find Eli Lilly's prediction that they were going to start producing two new drugs per year, starting in 2013. Since that year is nearly upon us, how's that looking?
Not too well. Back in 2010, Lilly's CEO (John Lechleiter) was talking up the company's plans to weather its big patent expirations, including that two-a-year forecast. Since then, the company has had a brutal string of late-stage clinical failures. In addition to the ones in that article, Lilly's had to withdraw Xigris, and results for edivoxetine are mixed. No wonder we're hearing so much about the not-too-impressive Alzheimer's drugs from them.
But, as I said here, what would I have done differently, were I to have had the misfortune of having to run Eli Lilly? I might not have placed such a big bet on Alzheimer's, but I probably would have found equally unprofitable ways to spend the money. (And in the end, the company deserves credit for taking on such an intractable disease - just no one tell Marcia Angell; she doesn't think anyone in the drug industry does any such thing).
About the only thing I'm sure of is that I wouldn't have gone around telling people that we were going to start launching two drugs a year. No one's ever been able to keep to that pace, not even in the best of times, and these sure aren't the best of times. It's tempting to think about telling the investors and the analysts that we're going to work as hard as we can, using our brains as much as we can, and we're going to launch what we're going to launch, when it's darn well ready to be launched. And past that, no predictions, OK? The only problem is, the stock market wouldn't stand for it. Ken Frazier at Merck tried something a bit like this, and it sure didn't seem to last long. Is happy talk what everyone would rather hear?
+ TrackBacks (0) | Category: Business and Markets | Drug Development
December 17, 2012
Remember that story last month about insider trading on the Wyeth bapineuzumab Alzheimer's results? Dr. Sidney Gilman of Michigan is accused of passing on the data and profiting from it. Now the New York Times has some very interesting background:
What is clear is that Dr. Gilman made a sharp shift in his late 60s, from a life dedicated to academic research to one in which he accumulated a growing list of financial firms willing to pay him $1,000 an hour for his medical expertise, while he was overseeing drug trials for various pharmaceutical makers. Among the firms he was advising was another hedge fund that was also buying and selling Wyeth and Elan stock, though the authorities have given no sign they have questioned those trades.
His conversion to Wall Street consultant was not readily apparent in his lifestyle in Michigan and was a well-kept secret from colleagues. Public records show no second home, and no indication of financial distress. Nevertheless, he was willing to share a glimpse of his lifestyle with a 17-year-old student whom he sat next to on a flight from New York to Michigan a few months ago, telling her how his Alzheimer’s research allowed him to enjoy fine hotels in New York and limousine rides to the airport.
This is hard. Experts have real value, and should be able to share the expertise that they've built up. But when these amounts of money are involved, there seems no way to do that without walking through a minefield. I get the impression that Prof. Gilman may have found out the truth of Screwtape's assertion:
"Indeed the safest road to Hell is the gradual one--the gentle slope, soft underfoot, without sudden turnings, without milestones, without signposts. . ."
+ TrackBacks (0) | Category: The Dark Side
I wrote here about "stapled peptides", which are small modified helical peptides. They've had their helices stabilized by good ol' organic synthesis, with an artificial molecular bridge linking residues on the same face of the helix. There are several ways to do this, but they all seem to be directed towards the same end.
That end is something that acts like the original protein at its binding site, but acts more like a small molecule in absorption, metabolism, and distribution. Bridging those two worlds is a very worthwhile goal indeed. We know of hordes of useful proteins, ranging from small hormones to large growth factors, that would be useful drugs if we could dose them without their being cleared quickly (or not making it into the bloodstream in the first place). Oral dosing is the hardest thing to arrange. The gut is a very hostile place for proteins - there's a lot of very highly developed machinery in there devoted to ripping everything apart. Your intestines will not distinguish the life-saving protein ligand you just took from the protein in a burrito, and will act accordingly. And even if you give things intravenously, as is done with the protein drugs that have actually made it to clinical use (insulin, EPO, etc.), getting their half-lives up to standard can be a real challenge.
So the field of chemically modified peptides and proteins is a big one, because the stakes are high. Finding small molecules that modulate protein-protein interactions is quite painful; if we could just skip that part, we'd be having a better time of it in this industry. There's an entire company (Aileron, just down the road from me) working on this idea, and many others besides. So, how's it going?
Well, this new paper will cause you to wonder about that. It's from groups in Australia and at Genentech (note: edited for proper credit here), and they get right down to it in the first paragraph:
Stabilized helical peptides are designed to mimic an α-helical structure through a constraint imposed by covalently linking two residues on the same helical face (e.g., residue i with i + 4). “Stapling” the peptide into a preformed helix might be expected to lower the energy barrier for binding by reducing entropic costs, with a concomitant increase in binding affinity. Additionally, stabilizing the peptide may reduce degradation by proteases and, in the case of hydrocarbon linkages, reportedly enhance transport into cells, thereby improving bioavailability and their potential as therapeutic agents. The findings we present here for the stapled BH3 peptide (BimSAHB), however, do not support these claims, particularly in regards to affinity and cell permeability.
They go on to detail their lack of cellular assay success with the reported stapled peptide, and suggest that this is due to lack of cell permeability. And since the non-stapled peptide control was just as effective on artificially permeabilized cells, they did more studies to try to figure out what the point of the whole business is. A detailed binding study showed that the stapled peptide had lower affinity for its targets, with slower on-rates and faster off-rates. X-ray crystallography suggested that modifying the peptide disrupted several important interactions.
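For readers who don't do binding kinetics every day: those two observations aren't independent knocks against the compound, because the equilibrium dissociation constant is just the ratio of the rates, Kd = koff/kon. Slower on *and* faster off both push Kd up (weaker binding). Here's a quick sketch of the arithmetic - the rate constants below are invented for illustration, not numbers from the paper:

```python
# Illustrative only: hypothetical rate constants, not values from the paper.
# Equilibrium dissociation constant from binding kinetics: Kd = koff / kon.

def kd_nm(kon_per_m_s: float, koff_per_s: float) -> float:
    """Return Kd in nanomolar, given kon in 1/(M*s) and koff in 1/s."""
    return koff_per_s / kon_per_m_s * 1e9  # mol/L -> nmol/L

# Parent (unstapled) peptide: hypothetical numbers
kd_parent = kd_nm(kon_per_m_s=1e6, koff_per_s=1e-3)    # -> 1 nM

# Stapled version: 5x slower on-rate, 5x faster off-rate (hypothetical)
kd_stapled = kd_nm(kon_per_m_s=2e5, koff_per_s=5e-3)   # -> 25 nM

# Both kinetic changes compound in the same direction: weaker affinity.
print(f"parent Kd = {kd_parent:.0f} nM, stapled Kd = {kd_stapled:.0f} nM")
```

The point of the toy numbers: a modest hit on each rate constant multiplies out to a large loss in affinity, which is consistent with what the detailed binding study is reporting.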
Update: After reading the comments so far, I want to emphasize that this paper, as far as I can see, is using the exact same stapled peptide as was used in the previous work. So this isn't just a case of a new system behaving differently; this seems to be the same system not behaving the way that it was reported to.
The entire "staple a peptide to make it a better version of itself" idea comes in for some criticism, too:
Our findings recapitulate earlier observations that stapling of peptides to enforce helicity does not necessarily impart enhanced binding affinity for target proteins and support the notion that interactions between the staple and target protein may be required for high affinity interactions in some circumstances.19 Thus, the design of stapled peptides should consider how the staple might interact with both the target and the rest of the peptide, and particularly in the latter case whether its introduction might disrupt otherwise stabilizing interactions.
That would be more in line with my own intuition, for what it's worth, which is that making such changes to a peptide helix would turn it into another molecule entirely, rather than (necessarily) making it into an enhanced version of what it was before. Unfortunately, at least in this case, this new molecule doesn't seem to have any advantages over the original, at least in the hands of the Genentech group. This is, as they say, very much in contrast to the earlier reports. How to resolve the discrepancies? And how to factor in that Roche has a deal with Aileron for stapled-peptide technology, and this very article is (partly) from Genentech, now a part of Roche? A great deal of dust has just been stirred up; watching it settle will be interesting. . .
+ TrackBacks (0) | Category: Cancer | Chemical Biology | Pharmacokinetics
December 14, 2012
I wrote here in 2009 about Kynamro (mipomersen), an antisense oligonucleotide from Isis targeting LDL cholesterol levels. At the time, Isis and Genzyme were starting to look at its use in people with familial hypercholesterolaemia, and it looked to be on its way to becoming at least a profitable niche drug.
But the European Medicines Agency just turned down the drug, saying that its risk/benefit ratio just looks unacceptable. Efficacy was not in doubt, but a substantial number of patients stopped taking mipomersen because of side effects, including liver toxicity. That's bad enough, but the treatment groups also showed a greater incidence of cardiovascular events, which is particularly worth thinking about when you're trying to lower LDL to prevent. . .cardiovascular events.
Human lipid handling continues to be a minefield for new therapies. The statins (which lower LDL through inhibiting cholesterol biosynthesis) appear, more and more, to be outliers in their safety and efficacy. That's not to say that statin drugs never have problems (they do - just ask Bayer, among others). But the risk/benefit for them does appear to be robust and positive, and how many other lipid-altering drugs can say that?
+ TrackBacks (0) | Category: Cardiovascular Disease
John LaMattina takes on Marcia Angell and her recent interview. It sounds like he made it farther into the podcast than I could:
“The drug companies do almost no innovation nowadays….. All they have to do is the late development. And that’s the clinical trials. Now that is an expensive part of the process. But it is not an innovative part of the process.”
. . .innovation doesn’t only occur in discovery research labs. Translating laboratory science into meaningful clinical science is quite challenging. Yet, many of the new drugs that are now being approved to treat various cancers have been developed through innovative paradigms and experimental methods developed by scientists and physicians in the pharmaceutical industry. For Angell to dismiss this so blithely is insulting.
"Insulting" is the word, and I have little doubt that this is a deliberate feature of Angell's take on the drug industry. Language like this gets attention. It brings in page views; it sells books. It gets you speaking engagements. As far as I can see, you bring in Marcia Angell to watch her attack pharmaceutical companies - that's her niche.
+ TrackBacks (0) | Category: Why Everyone Loves Us
So the Royal Society of Chemistry has bought the Merck Index, and plans to try to raise its profile, especially online. I wish them luck, but I'm not sure how well that's going to work out. I have a copy, but it's an old one that I got for free when a library turned over its stock. There are years that go by in which its pages stay undisturbed.
I think that the chemical substance entries on Wikipedia, among other things, have moved into the space once occupied by reference works like this. Now, it's true that many people would rather point to a standard reference work like the Merck Index than to Wikipedia, and that may well be the market right there. Is there, or can there be, more of one? An advertiser-supported online substance reference might have a niche, but it would have been a bigger niche if it had been colonized ten years ago.
+ TrackBacks (0) | Category: The Scientific Literature
December 13, 2012
Now, here's something useful for all of us in drug discovery and development: "The Mayan Doomsday’s effect on survival outcomes in clinical trials":
There is a great deal of speculation concerning the end of the world in December 2012, coinciding with the end of the Mesoamerican Long Count calendar (the “Maya calendar”). Such an event would undoubtedly affect population survival and, thus, survival outcomes in clinical trials. Here, we discuss how the outcomes of clinical trials may be affected by the extinction of all mankind and recommend appropriate changes to their conduct. In addition, we use computer modelling to show the effect of the apocalypse on a sample clinical trial
I especially like the comparative survival curves, with and without the destruction of all life factored in. I wonder if a Bayesian trial design would be able to handle the End of Days more gracefully?
+ TrackBacks (0) | Category: Clinical Trials
This is a lesson that everyone should have learned many times before, but those colorful atoms are just so. . .colorful and everything. If anyone knows what element is supposed to be colored "silvery purple", See Arr Oh would like to hear from you.
+ TrackBacks (0) | Category: Chemical News
No one told me that it was "Rheumatoid arthritis clinical disaster day for companies that have enough to worry about already", but apparently that's what it is. AstraZeneca doesn't have an awful lot in its late-stage pipeline, but one of the things in it is a Syk inhibitor licensed in from Rigel, fostamatinib. (More accurately, that's a phosphate ester prodrug of the Rigel compound - check out the structure and you'll see why a prodrug approach might have been necessary).
That's positioned as an orally active anti-inflammatory, to go up against Humira and the like. Back in Phase IIa it looked promising, although there have been concerns about blood pressure effects (disclosure of which has led to some hard feelings among some investors). But in a new head-to-head trial against Humira in rheumatoid arthritis patients, it definitely comes up short. A Phase III trial will report next year, but what are the odds that it'll turn this one into a success?
And Eli Lilly is another company that doesn't need any more bad news, but they're stopping an RA therapy, too. Tabalumab, an antibody against B-cell activating factor, is also targeting the TNF pathway. This trial was in RA patients who were not responsive to methotrexate therapy, and was halted for sheer lack of efficacy, which is disturbing, since the antibody had (up until now) shown reasonable data. Lilly says that they're suspending enrollment in the clinic until they see the results (next year) of their ongoing trials.
+ TrackBacks (0) | Category: Clinical Trials
December 12, 2012
+ TrackBacks (0) | Category: Press Coverage
Rongxiang Xu is upset with this year's Nobel Prize award for stem cell research. He believes that work he did is so closely related to the subject of the prize that. . .he wants his name on it? No, apparently not. That he wants some of the prize money? Nope, not that either. That he thinks the prize was wrongly awarded? No, he's not claiming that.
What he's claiming is that the Nobel Committee has defamed his reputation as a stem cell pioneer by leaving him off, and he wants damages. Now, this is a new one, as far as I know. The closest example comes from 2003, when there was an ugly controversy over the award for NMR imaging (here's a post from the early days of this blog about it). Dr. Raymond Damadian took out strongly worded (read "hopping mad") advertisements in major newspapers claiming that the Nobel Committee had gotten the award wrong, and that he should have been on it. In vain. The Nobel Committee(s) have never backed down in such a case - although there have been some where you could make a pretty good argument - and they never will, as far as I can see.
Xu, who works in Los Angeles, is founder and chairman of the Chinese regenerative medicine company MEBO International Group. The company sells a proprietary moist-exposed burn ointment (MEBO) that induces "physiological repair and regeneration of extensively wounded skin," according to the company's website. Application of the wound ointment, along with other treatments, reportedly induces embryonic epidermal stem cells to grow in adult human skin cells. . .
. . .Xu's team allegedly awakened intact mature somatic cells to turn to pluripotent stem cells without engineering in 2000. Therefore, Xu claims, the Nobel statement undermines his accomplishments, defaming his reputation.
Now, I realize that I'm helping, in my small way, to give this guy publicity, which is one of the things he most wants out of this effort. But let me make myself clear - I'm giving him publicity in order to roll my eyes at him. I look forward to following Xu's progress through the legal system, and I'll bet his legal team looks forward to it as well, as long as things are kept on a steady payment basis.
+ TrackBacks (0) | Category: Biological News
I'm a bit baffled by Eli Lilly's strategy on Alzheimer's. Not the scientific side of it - they're going strongly after the amyloid hypothesis, with secretase inhibitors and antibody therapies, and if I were committed to the amyloid hypothesis, that's probably what I'd be doing, too. It is, after all, the strongest idea out there for the underlying mechanism of the disease. (But is it strong enough? Whether or not amyloid is the way to go is the multibillion dollar question that can really only be answered by spending the big money in Phase III trials against it, unfortunately).
No, what puzzles me is the company's publicity effort. As detailed here and here, the company recently made too much (it seemed to me and many others) of the results for solanezumab, their leading antibody therapy. Less hopeful eyes could look at the numbers and conclude that it did not work, but Lilly kept on insisting otherwise.
And now we have things like this:
"We are on the cusp here of writing medical history again as a company, this time in Alzheimer's disease," Jan Lundberg, Lilly's research chief, said in an interview.
Just as the Indianapolis-based company made history in the 1920s by producing the first insulin when type 1 diabetes was a virtual death sentence, Lundberg said he is optimistic that the drugs Lilly is currently testing could significantly slow the ultimately fatal memory-robbing disease.
"It is no longer a question of 'if' we will get a successful medicine for this devastating disease on the market, but when," said Lundberg, 59.
Ohhh-kay. The problems here are numerous. For one thing, as Lundberg (an intelligent man) well knows, insulin-for-diabetes is a much straighter shot than anything we know of for Alzheimer's. It was clear, when Lilly got their insulin business underway, that the most devastating symptoms of type 1 diabetes were caused by lack of insulin production in the body, and that providing that insulin was the obvious remedy. Even if it did nothing for the underlying cause of the disease (and it doesn't), it was a huge step forward. As for Alzheimer's, I understand that what Lundberg and Lilly are trying to get across here is the idea of a "successful medicine", rather than a "cure". Something that just slows Alzheimer's down noticeably would indeed be a successful medicine.
But "when, not if"? With what Lilly has in the clinic? After raising hopes by insisting that the Phase III results for solanezumab were positive, the company now says that. . .well, no, it's not going to the FDA for approval. It will, instead, conduct a third Phase III trial. This decision came after consulting with regulators in the US and Europe, who no doubt told them to stop living in a fantasy world. So, sometime next year, Lilly will start enrolling for another multiyear shot at achieving some reproducible hint of efficacy. Given the way solanezumab has performed so far, that's about the best that could be hoped for, that it works a bit in some people, sometimes, for a while, as far as can be told in a large statistical sample. Which sets up this situation, I fear.
And this is "on the cusp. . .of writing medical history"? Look, I would very much like for Lilly, for anyone, to write some medical history against Alzheimer's. But saying it will not make it so.
+ TrackBacks (0) | Category: Alzheimer's Disease | Clinical Trials
December 11, 2012
Here's a funny-looking compound for you - ivorenolide A. Isolated from mahogany tree bark, it has an 18-membered ring with conjugated acetylenes in it. That makes the 3-D structure quite weird; it's nearly flat. And it has biological activity, too (immunosuppression, as measured by T-cell and B-cell proliferation assays in vitro). Got anything that looks like this in your compound libraries? Me neither.
+ TrackBacks (0) | Category: Chemical News
Steve Usdin at BioCentury has a very interesting article (free access) following up on that surprise decision that the FDA's restrictions on off-label promotion are a violation of the First Amendment:
But companies and individuals who take the decision as a signal that the rules of the road have changed and they are now free to promote off-label indications put themselves in great legal and economic peril, attorneys who helped persuade the court to overturn Caronia's conviction told BioCentury.
At the same time, the decision by one of the country's most influential and respected courts to overturn a criminal conviction on First Amendment grounds is persuasive evidence that, in the long term, FDA will have to change some of the assumptions underpinning its regulation of medical products.
FDA, which now has lost a string of First Amendment cases, cannot forever hold on to the notion that it is empowered to prohibit drug companies and their employees from saying things that anyone else is free to say. Sooner or later, according to legal experts, the agency will have to reconcile itself with the idea that industry has the right to truthful, non-misleading speech.
Some of the people the article quotes are expecting the same thing I am - a further appeal to the Supreme Court - but no matter what, it's going to be quite a while before all the debris stops landing. Any company that tries to be the first to take advantage of what might be a new-found freedom could find itself right back in court, becoming a test case for what this ruling really means. Anyone feel like being a pioneer?
+ TrackBacks (0) | Category: Regulatory Affairs
I noticed this piece on Slate (originally published in New Scientist) about Kaggle, a company that's working on data-prediction algorithms. Actually, it might be more accurate to say that they're asking other people to work on data-prediction algorithms, since they structure their tasks as a series of open challenges, inviting all comers to submit their best shots via whatever computational technique they think appropriate.
PA: How exactly do these competitions work?
JH: They rely on techniques like data mining and machine learning to predict future trends from current data. Companies, governments, and researchers present data sets and problems, and offer prize money for the best solutions. Anyone can enter: We have nearly 64,000 registered users. We've discovered that creative data scientists can solve problems in every field better than experts in those fields can.
PA: These competitions deal with very specialized subjects. Do experts enter?
JH: Oh yes. Every time a new competition comes out, the experts say: "We've built a whole industry around this. We know the answers." And after a couple of weeks, they get blown out of the water.
I have a real approach-avoidance conflict with this sort of thing. I tend to root for outsiders and underdogs, but naturally enough, when they're coming to blow up what I feel is my own field of expertise, that's a different story, right? And that's just what this looks like: the Merck Molecular Activity Challenge, which took place earlier this fall. Merck seems to have offered up a list of compounds of known activity in a given assay, and asked people to see if they could recapitulate the data through simulation.
Looking at the data that were made available, I see that there's a training set and a test set. They're furnished as a long run of molecular descriptors, but the descriptors themselves are opaque, no doubt deliberately (Merck was not interested in causing themselves any future IP problems with this exercise). The winning team was a group of machine-learning specialists from the University of Toronto and the University of Washington. If you'd like to know a bit more about how they did it, here you go. No doubt some of you will be able to make more of their description than I did.
But I would be very interested in hearing some more details on the other end of things. How did the folks at Merck feel about the results, with the doors closed and the speaker phone turned off? Was it better or worse than what they could have come up with themselves? Are they interested enough in the winning techniques that they've approached the high-ranking groups with offers to work on virtual screening techniques? Because that's what this is all about: running a (comparatively small) test set of real molecules past a target, and then switching to simulations and screening as much of small molecule chemical space as you can computationally stand. Virtual screening is always promising, always cost-attractive, and sometimes quite useful. But you never quite know when that utility is going to manifest itself, and when it's going to be another wild-goose chase. It's a longstanding goal of computational drug design, for good reason.
So, how good was this one? That also depends on the data set that was used, of course. All of these algorithm-hunting methods can face a crucial dependence on the training sets used, and their relation to the real data. Never was "Garbage In, Garbage Out" more appropriate. If you feed in numbers that are intrinsically too well-behaved, you can emerge with a set of rules that look rock-solid, but will take you completely off into the weeds when faced with a more real-world situation. And if you go to the other extreme, starting with woolly multi-binding-mode SAR with a lot of outliers and singletons in it, you can end up fitting equations to noise and fantasies. That does no one any good, either.
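That "fitting equations to noise" failure mode is easy to demonstrate with a toy example (made-up data, nothing to do with the Merck set): interpolate pure noise exactly with a high-degree polynomial, and the "model" is perfect on its training points and useless in between them, which is where all the actual predictions have to be made:

```python
import random

def lagrange_fit(xs, ys):
    """Exact polynomial interpolation through (xs, ys) - the ultimate over-fit model."""
    def model(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return model

random.seed(0)
xs = [i / 9 for i in range(10)]            # ten evenly spaced "descriptor" values
ys = [random.gauss(0, 1) for _ in xs]      # pure noise standing in for assay data

model = lagrange_fit(xs, ys)

# Error on the training points themselves: essentially zero by construction
train_err = max(abs(model(x) - y) for x, y in zip(xs, ys))
# Probe between the training points, where real predictions would be needed
test_err = max(abs(model(x + 0.05)) for x in xs[:-1])

print("train error:", train_err)  # the noise is "explained" perfectly
print("test error:", test_err)    # typically far larger: the fit whipsaws between points
```

The same pathology shows up, less obviously, with fifty cross-correlated descriptors and a consensus of over-fit models: the training statistics look rock-solid right up until the model meets data it hasn't already memorized.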
Back last year, I talked about the types of journal article titles that make me keep on scrolling past them, and invited more. One of the comments suggested "New and Original strategies for Predictive Chemistry: Why use knowledge when fifty cross-correlated molecular descriptors and a consensus of over-fit models will tell you the same thing?". What I'd like to know is, was this the right title for this work, or not?
+ TrackBacks (0) | Category: In Silico
December 10, 2012
You've probably seen the story that a substantial quantity (roughly fifty pounds!) of gold dust seems to have gone missing from Pfizer's labs in St. Louis. No report I've seen has any details, though, on just what Pfizer was doing with that much gold dust - the company isn't saying. I can tell you that I've never found a laboratory use for it myself, dang it all.
So let's speculate! Why would a drug company need gold dust on that scale? Buying it in that form makes you think that a large surface area might have been important, unless there was some gold refinery running Double Coupon Wednesday on the stuff. Making a proprietary catalyst? Starting material for functionalized gold nanoparticles? Solid support(s) for some biophysical assay? Classy replacement for Celite for those difficult filtrations? Your ideas are welcome in the comments. . .
Update: out of many good comments, my favorite so far is: "Knowing Pfizer, I'm guessing they were planning on turning it into lead."
+ TrackBacks (0) | Category: Chemical News
There's more news on the T-cell therapy work that I wrote about here and here. The New York Times has an update, and the news continues to be encouraging. So far about a dozen leukemia patients have been treated, and while not everyone has responded, there have been several dramatic remissions. Considering that every candidate for treatment so far has been at the edge of the grave (advanced resistant disease, multiple chemotherapy failures), there's definitely something here.
This will have to be done patient-by-patient. But leukemia varies patient by patient, too, and effective therapies are probably going to have to get this granular (or more). So be it. The challenges now are to find out how to make the success rates even higher, and how to deliver this sort of treatment to larger numbers of people. Challenge accepted, as they say. . .
+ TrackBacks (0) | Category: Cancer
December 7, 2012
George Whitesides of Harvard has a good editorial in the journal Lab on a Chip. He's talking about the development of microassays, but goes on to generalize about the new technologies - how they're found, and how they're taken up (or not) by a wider audience (emphasis mine below):
Lab-on-a-chip (LoC) devices were originally conceived to be useful–that is, to solve problems. For problems in analysis or synthesis (or for other applications, such as growing cells or little animals) they would be tiny – the “microcircuits of the fluidic world.” They would manipulate small volumes of scarce samples, with low requirements for expensive space, reagents and waste. They would save cost and time. They would allow parallel operation. Sensible people would flock to use such devices.
Sensible and imaginative scientists have, in fact, flocked to develop such devices, or what were imagined to be such devices, but users have not yet flocked to solve problems with them. “Build it, and they will come” has not yet worked as a strategy in LoC technology, as it has, say, with microprocessors, organic polymers and gene sequencers. Why not? One answer might seem circular, but probably is not. It is that the devices that have been developed have been elegantly imagined, immensely stimulating in their requirements for new methods of fabrication, and remarkable in their demonstrations of microtechnology and fluid physics, but they have not solved problems that are otherwise insoluble. Although they may have helped the academic scientist to produce papers, they have not yet changed the world of those with practical problems in microscale analysis or manipulation.
Where is the disconnect? One underlying problem has been remarked upon by many people interested in new technology. Users of technology are fundamentally not interested in technology—they are interested in solving their own problems. They want technology to be simple and cheap and invisible. Developers of technology, especially in universities, are often fundamentally not interested in solving real problems—they are interested in the endlessly engaging activity of building and exercising new widgets. They want technology to be technically very cool. “Simple/cheap/invisible” and “technically cool” are not exclusive categories, but they are certainly not synonymous.
That is a constant and widespread phenomenon. There are people who want to be able to do things with stuff, and people who want stuff to do things for them, and the overlap between those two is not always apparent. What happens over time, though, in the best cases, is that the tinkerers come up with things that can be used by a wider audience to solve their own problems. Look no further than the personal computer industry for one of the biggest examples ever. If you didn't live through it, you might not realize how things went from "weird hobbyist thingies" to "neat gizmos if you have the money" to "essential parts of everyday life". Here's Whitesides again:
Here are three useful, homely, rules of thumb to remember in developing products.
• The ratio of money spent to invent something, to make the invention into a prototype product, to develop the prototype to the point where it can be manufactured, and to manufacture and sell it at a large scale is, very qualitatively, 1:10:100:1000. We university folks—the inventors at the beginning of the path leading to products—are cheap dates.
• You don't really know you have solved the problem for someone until they like your solution so much they're willing to pay you to use it. Writing a check is a very meaningful human interaction.
• If the science of something is still interesting, the “something” is probably not ready to be a product.
His second rule reminds me of Stephen King's statement on whether someone has any writing talent or not: "If you wrote something for which someone sent you a check, if you cashed the check and it didn't bounce, and if you then paid the light bill with the money, I consider you talented". It's also the measure of success in the drug industry - we are, after all, trying to make things that are useful enough that people will pay us money for them. If we don't come up with enough of those things, or if they don't bring in enough money to cover what it took to find them, then we are in trouble indeed.
More comments on the Whitesides piece here. For scientists (like me, and many readers of the blog), these points are all worth keeping in mind. Some of our biggest successes are things where our contributions are invisible to the end users. . .
+ TrackBacks (0) | Category: Business and Markets | Who Discovers and Why
Adam Feuerstein at TheStreet.com has his yearly readers' pick for "Worst Biotech CEO". This year's winner is Jim Bianco of Cell Therapeutics, and there seems to be a good case:
Bianco, a longtime worst biotech CEO nominee, broke through this year and finally shoved his way into the loser's circle by managing to engineer a 77% drop in his company's stock price despite finally winning European approval for its lymphoma drug.
In many ways, TheStreet's biotechnology readers are (dis)honoring Bianco for a lifetime of investor bamboozlement and self-enrichment. The numbers that define Bianco's career as chief executive of Cell Therapeutics are stunning: Total losses of more than $1.7 billion, a 99.99999999% drop in the value of company shares and total compensation for him and his hand-picked team of executive cronies in the tens of millions of dollars.
Other than that, things have been going fine. Check the post for more, and to find out who the runners-up were. You can be sure that they're not thrilled to be on the list, either. . .
+ TrackBacks (0) | Category: Business and Markets
If you'd like to see how thoroughly a drug market can be screwed up, have a look at Greece. They're leading the way here as well:
Ten years after entering the eurozone, Greece is faced with the herculean challenge of persuading pharmaceutical companies to strike a bargain and lower the cost of the medicines they sell in the country. At present, there are fears of drug shortages in certain hospitals as a result of unpaid bills. . .During the last two decades Greece became a paradise for branded-drug producers, with generic medicines constituting only 12% of the drugs consumed in the country. Between 1997 and 2007, the amount of health spending per Greek citizen grew annually by 6.6%, bringing the country to fourth place worldwide, after South Korea, Turkey and Ireland, in terms of this growth.
The crisis comes, in part, as a result of the Greek National Health System racking up debts by treating pensioners and poorer locals with expensive branded drugs instead of generics. The government paid the pharmaceuticals mostly with state bonds that lost substantial value in the fiscal crisis, and, in response, they started turning off the faucet. . .
But there's another factor at work, too:
For many months, pharmacies have been reporting shortages of medicines as some distributors have reexported comparatively cheap drugs from Greece over to Germany and other European markets, achieving monetary gains of as much as 600%.
Yep, Greece has simultaneously managed to pay too much for pharmaceuticals and provide a lucrative opportunity to export cheap ones. If economics worked like electrical engineering, there would be huge sparks jumping across these gaps and things would be shorting out all over the place. Actually, that's pretty much what's happening as it is.
+ TrackBacks (0) | Category: Drug Prices
December 6, 2012
Well, in that post on telescopes I put up the other day, there were plenty of manufacturers, web sites, and commercial sources that I could recommend. Microscopes, though, are another matter. There's no equivalent to the amateur telescope making/modifying community. One reason for that is that we're talking about lenses for magnification, rather than big mirrors for light-gathering, and mirrors are a lot easier to make (and test) than lenses, particularly combinations of lenses. Microscopes can also have more mechanical parts than telescopes do, and these parts are less modular, which can make the used equipment market rather tricky. The new equipment market tends to divide into "Wonderful, really expensive equipment for research" and "Cheap crap". (More thoughts on the similarities and differences between the amateur astronomers and microscopists here and here).
But not always. Here's a good site with a lot of buying advice, and here are more good sets of recommendations. You'll have heard of the brands of the most common laboratory microscopes (Nikon, Olympus, Leica, Zeiss), and there are a number of lesser-known brands, which I would assume all use Chinese optics (Omano, Motic, Accuscope, Labomed). The advice, as with telescopes, is to Avoid Department Store Models, but beyond that, I'm not sure where to send people. Reputable dealers seem to include Lab Essentials and Microscope Depot, but be sure to read up on those recommendations before purchasing. An older microscope in good shape probably has the best price/performance of all, but that's not a casual purchase, for the most part. For what it's worth, I use an old "grey metal" Bausch and Lomb, bought used back in the 1970s from around the University of Tennessee medical school.
Update: as those recommendation links say, there are two big choices: a stereo microscope or a compound one. The former is good for looking at whatever (larger) object you can put under it, while the latter is higher-magnification and needs, in most cases, to have something that light can pass through. I'm partial to protozoa and algae myself, so I have the latter, but the former is a very useful instrument, too. A great general reference for someone getting into microscopy is Exploring With the Microscope.
If you're into pond life as well, two excellent references are How to Know the Protozoa and How to Know the Freshwater Algae. I own both, but then, I'm a lunatic, so keep that in mind.
+ TrackBacks (0) | Category: Science Gifts
There's a new paper out that does something unique: it compares the screening libraries of two large drug companies, both of which agreed to open their books to each other (up to a point) for the exercise. The closest analog that I know of is when Bayer merged with/bought Schering AG, and the companies published on the differences between the two compound collections as they worked on merging them. (As a sideline, I hope that they've culled some of the things that were in that collection when I worked there. I actually had a gallery of horrible compounds from the files that I kept around to amaze people - it was hard to come up with a functional group that wasn't represented somewhere). That combined Bayer collection (2.75 million compounds) has now been compared with AstraZeneca's (1.4 million compounds). The two of them have clearly been exploring precompetitive collaboration in high-throughput screening, and trying to figure out how much there is to gain.
The first question that comes to mind is how the companies managed this - after all, you wouldn't want another outfit to actually stroll through your structures. They used 2-D fingerprints to get around this problem, the ECFP4 system, to be exact. That's a descriptor that gives a lot of structural information without being reversible; you can't reassemble the actual compound from the fingerprint.
So what's in these collections, and how much do the two overlap? I think that the main take-away from the paper is the answer to the second question, which is "Not as much as you'd think". Using Tanimoto similarity calculations (ratio of the intersecting set to the union set) for all those molecular fingerprints (with a cutoff of 0.70 for "similar"), they found that about 144,000 compounds in the Bayer collection seem to be duplicated in the AstraZeneca collection. Not surprisingly, these turned out to be commercially available; they'd been bought from the same vendors, most likely. That's not much!
Considering that all pharmaceutical companies can access the same external vendors this number is certainly lower than expected. There are 290K compounds that are not identical but very similar between both databases, with nearest neighbors with Tanimoto values in the range of 0.7–1.0. In a joint HTS campaign this would lead to a higher coverage of the chemical space in SAR exploration. The remaining 2.3M compounds of the Bayer collection have no similar compounds in the AstraZeneca collection, as is reflected in nearest neighbors with Tanimoto values ≤0.7. Thus, a practical interpretation is that AstraZeneca would extend their available chemical space with 2.3M novel, distinct chemical entities by testing the Bayer Pharma AG collection in a HTS campaign, provided that intellectual property issues could be resolved.
One interesting effect, though, is that compounds which would be classed as "singletons" in each collection (and thus could be a bit problematic to follow up on) had closer relatives over in the other company's collection. That could be a real advantage, rescuing what might otherwise be a collection of unrelated stuff - a few legitimate leads buried in a bunch of tedious compounds that would eventually have to be discarded one by one.
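For reference, the Tanimoto index used in these comparisons is just the Jaccard ratio over fingerprint bits (intersection over union), and it's trivial to compute once you have the fingerprints. Here's a minimal sketch in Python; the bit sets below are made up for illustration, not real ECFP4 fingerprints:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity: |A ∩ B| / |A ∪ B| over sets of on-bits."""
    a, b = set(fp_a), set(fp_b)
    if not a and not b:
        return 0.0  # two empty fingerprints: define as dissimilar
    return len(a & b) / len(a | b)

# Toy fingerprints: sets of "on" bit positions (hypothetical, for illustration only)
fp1 = {1, 4, 7, 9, 15, 22}
fp2 = {1, 4, 7, 9, 15, 30}   # close analog: 5 shared bits out of 7 total
fp3 = {2, 5, 11, 19}         # unrelated scaffold: no shared bits

print(tanimoto(fp1, fp2))  # 5/7 ≈ 0.714 -> counts as "similar" at the 0.70 cutoff
print(tanimoto(fp1, fp3))  # 0.0 -> distinct chemical matter
```

Run over a few million fingerprints per side, a nearest-neighbor search with this metric is all the paper's duplicate and singleton analysis needs, and, as noted above, the fingerprints themselves can't be reversed back into structures.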
The teams also compared their collections to a large public one, the ChEMBL database:
The public ChEMBL database was chosen to simulate a third-party compound collection. It consisted of 600K molecules derived from medicinal chemistry publications annotated with pharmacological/biological data. Hence, we used this source as a proxy for ‘a pharmaceutical’ compound collection. We opted to avoid the use of commercial screening collections for this assessment as it would clearly reveal the number and source of acquisitions. In Fig. 6, we display the distribution of the nearest neighbors in the ChEMBL compounds (query collection) to the target collection corresponding to the merged AstraZeneca and Bayer Pharma AG compounds. Despite the huge set of more than 3.7 million compounds to which the relatively small ChEMBL collection is compared, more than 80% of this collection has their nearest neighbor with a Tanimoto index below 0.70. Consistent with the volume of published and patented compounds this result again emphasize that even in large collections there is still relevant unexplored chemical space accessed by other groups in industry and academia.
So the question comes up, after all these comparisons: have the two companies decided to do anything about this? The conclusions of the paper seem clear. If you're interested in high-throughput screening, combining the two collections would significantly improve the results obtained from screening either one alone. How much value does either company assign to that, compared to the intellectual property risks involved? The decision (or lack of decision) that's reached on this will serve as the best answer: revealed preference always wins out over stated preference.
+ TrackBacks (0) | Category: Drug Assays
December 5, 2012
It's a grim topic, but I see that the Syrian government, or what's left of it, is being warned not to use its stockpiles of chemical weapons. Back in the early days of the blog, I did a series on the chemistry of these things, and the posts can be found by scrolling down to the bottom of this page.
As I said at the time, "I'm prepared to argue that against a competent and prepared opponent, the known chemical weapons are essentially useless. The historical record seems to bear this out. Look at the uses of mustard gas since World War I. Morocco in the 1920s, Ethiopian villages in the 1930s, Yemen in the 1960s - a motley assortment of atrocities against people who couldn't retaliate." The uses of nerve gas are a similarly horrible roll call, mainly (and infamously) in Northern Iraq, by the Saddam Hussein government against its Kurdish population. Let's hope that no one is going to add another entry to that list.
+ TrackBacks (0) | Category: Chem/Bio Warfare | Current Events
You'll have seen the headlines about off-label promotion of drugs by pharma companies. No, not the ones that decry it as a shady marketing technique, punishable by huge fines. I mean the ones about how a federal court has ruled that it's completely legal.
This came as a surprise, at least to me. The U. S. Court of Appeals, in United States v. Caronia ruled explicitly that "government cannot prosecute pharmaceutical manufacturers and their representatives under the (Food, Drug and Cosmetic Act) for speech promoting the lawful, off-label use of an FDA-approved drug." That does go up against the previous belief that if it's off-label, it isn't lawful. So how did the court get here, and what happens next?
The case concerns Alfred Caronia, a sales rep for Orphan Medical, who was prosecuted for off-label promotion of Xyrem (the sodium salt of gamma-hydroxybutyrate, GHB) in 2005. (The company has since been acquired by Jazz Pharmaceuticals of Dublin). He appealed his conviction on First Amendment grounds, and this argument seems to have rung the bell with the appeals court. Here's a writeup at the FDA Law Blog:
The Court explained that FDA’s construction of the FDCA legalizes the outcome of off-label use by doctors, but “prohibits the free flow of information that would inform that outcome.” The Second Circuit concluded that “the government’s prohibition of off-label promotion by pharmaceutical manufacturers ‘provides only ineffective or remote support for the government’s purpose.’”
There's some case law that backs up this decision, namely Sorrell v. IMS Health Inc. The Supreme Court decision, for those of you who are truly hard-core about this stuff, is here. In that one, the court found that a Vermont law restricting the sale of pharmacy records that reveal physicians' prescribing practices violated the First Amendment as well. From this earlier post at the FDA Law Blog, it appears that a lot of the maneuvering during this latest case was about whether Sorrell applied here or not. That post also makes it clear that the FDA's own statements on the legality of off-label promotion are, to put it gently, unclear.
Well, this ruling certainly clears it up. For now. Here's the 82-page decision itself, with a vigorous dissent from the third judge on the appellate panel. But I can tell you that I'm not reading it yet. That's because I expect the FDA to try to take this to the Supreme Court, and it looks (to my non-lawyer eyes) like just the sort of thing they'd grant certiorari to. So I don't think this story is done - but for now, off-label promotion cannot be prosecuted.
And that's a big change indeed. This whole issue has been a black eye for the industry over the years, because (for one thing) the FDA made it clear, over and over, that it believed the practice was illegal, and that companies (and individuals) could be prosecuted for it. In that atmosphere, a company that went ahead was doing so in knowing violation of the rules as they were understood. No drug company, as far as I know, ever tried to make a First Amendment court case out of an FDA fine for off-label promotion (if anyone knows of any examples, send 'em along). Instead, they argued about whether it had happened or not, how much of it there really was, then paid the whacking fines, and then (likely as not) went out and did it some more. And they did it not because they were free-speech activists, but because that's where a lot of big money was to be found. Not the sort of thing that covers you with glory, for sure.
So it's not like this latest ruling is going to rehabilitate many reputations in the marketing departments. It's more like "Great! Turns out to be legal after all! Who knew?"
+ TrackBacks (0) | Category: Business and Markets | Regulatory Affairs | Why Everyone Loves Us
December 4, 2012
As I mention around here from time to time, one of my sidelines is amateur astronomy. I often get asked for telescope recommendations, so in that spirit, I wanted to put up some details in case anyone out there is thinking about one as a gift this year.
The key thing to remember with telescopes is that other things being equal, aperture wins out, because you will be able to see more objects and more details. Other things are not always equal, naturally, but that's the background of the various disputes between amateur astronomers about which kind of scope is best. And keep in mind that while a bigger scope can show you more, the best telescope is the one that you'll actually haul out and use. Overbuying has not been my problem, dang it all, but it has been known to happen. Overall, I'd say a six-inch aperture should be the starting point, although opinions vary on that, too.
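To put some rough numbers on why aperture wins: light grasp scales with the square of the aperture, and a common rule of thumb (one of several floating around; treat the constants as approximate, not gospel) puts the visual limiting magnitude near 7.7 + 5·log10(aperture in cm). A quick back-of-the-envelope sketch:

```python
import math

def light_grasp_ratio(d1_inches, d2_inches):
    """How much more light a d1-inch aperture gathers than a d2-inch one.
    Light-gathering area scales with the square of the diameter."""
    return (d1_inches / d2_inches) ** 2

def limiting_magnitude(d_inches):
    """Rough visual limiting magnitude for a given aperture, using the
    approximate rule of thumb m = 7.7 + 5*log10(D in cm)."""
    return 7.7 + 5 * math.log10(d_inches * 2.54)

# An 8-inch scope gathers about 1.8x the light of a 6-inch,
# and a 6-inch should reach stars around magnitude 13.5 or so
# under good skies.
ratio = light_grasp_ratio(8, 6)
mag_6in = limiting_magnitude(6)
```

That ~1.8x jump from six to eight inches is the kind of difference you can actually see at the eyepiece, which is why the "aperture wins" advice keeps getting repeated.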
You've basically got three kinds of scopes to consider: refractors, reflectors, and folded-path. The refractors are the classic lens-in-the-front types. They can provide very nice views, especially of the planets and other brighter objects. Many planetary observers swear by them. But per inch of aperture, they're the most expensive, especially since for good views you have to spring for high-end optics to keep from having rainbow fringes around everything. I can't recommend a refractor for a first scope, for these reasons. That's especially true since a lot of the refractors you see for sale out there are of the cheap/nearly worthless variety - a casual buyer would be appalled at the price tag for a decent one. No large refractors have been built for astronomical research since well before World War II.
Reflectors are variations on Isaac Newton's design, which was: open tube at the top, mirror at the bottom, and you look through the eyepiece in the side, after the light reflects back off an angled secondary mirror. All modern large-aperture research telescopes are some variety of reflector. They provide the most aperture per dollar, especially with a simple "Dobsonian" mount (more on mounts in a minute). They do have to be aligned (collimated) when you first get them, and every so often afterwards, to make sure the mirrors are all working together. A badly collimated reflector will provide ugly views indeed, but it's at least easy to fix. And if the primary mirror is of poor quality, you're also in trouble, but the average these days is actually quite good.
Finally, the folded-path (catadioptric) types (Schmidt-Cassegrain and Maksutov designs, mostly) are a hybrid. There's a mirror in the back, but also a corrector lens plate covering the front. The light path ends up coming out the back of the tube, through a hole in the primary mirror. Like refractors, these basically never have to be aligned, but they're fairly expensive (although nowhere near as bad as refractors when you start going up in size). And their views are pretty good, although purists argue about how they compare to a reflector of equal size. (Refractor owners would probably win that argument, but they have to drop out at about the five or six-inch mark, when the other two telescope designs are just getting started). One nice thing about a scope of this kind is that it's more compact, making it an easier design to mount.
And that brings up the next topic: what do you mount one of these fine optical tubes on, so you can actually use it to look at things? An equatorial or a fork mount will let you follow the motion of the objects in the sky easily, especially with a motor drive - the Earth's rotation is always sweeping things out of your view, otherwise. A decent mount of this kind will definitely add to your costs, though. The "Dobsonian" mount is a favorite of reflector owners, since it's quite simple and allows you to put more of your money into the optics. You do have to manually grab the telescope tube and move it, though, which takes some practice (and sometimes some home-brew messing around with the mount). Some people don't mind this, others are driven nuts by it. You can put a motorized platform under a Dobsonian (my own setup) to motor-drive it, which some consider the best of both worlds.
On the topic of motorized telescope mounts, I should say something about "Go-to" models. These are not only motorized to track objects, they will slew the scope around to find objects from a database. I'm very much of two minds on these. For an experienced observer, an astrophotographer, or a researcher, they can be an indispensable tool to spend more time observing and less time hunting around. For a total beginner, they can ease a lot of frustration when first learning the sky. But at the same time, they also can keep someone from learning the sky at all, and they can also encourage hopping too quickly from one object to another. If you do that, you can see all sorts of stuff in one evening, while at the same time hardly seeing anything at all.
Visual observing is all about training yourself to see things. One thing every new telescope owner should know is that Very Little Ever Looks Like the Photographs. Especially since the photos are long exposures on wildly sensitive CCD chips, through huge instruments, and under excellent conditions. Through the eyepiece, nebulae are not tapestries of red, pink, green, and purple: they range from greenish grey to bluish grey. And although with practice you'll pick up really surprising and beautiful amounts of detail in deep-sky objects, at first, everything can look like a blob. Or a smear. Or not appear to even be there at all, even when a practiced observer can see it right smack in the center of the eyepiece field. I really enjoy seeing these things with my own eyes, and trying to find out just how much detail I can pick out and how faint I can go, but it's not for everyone.
Now, photography is another story. Astrophotography is an expensive word, although thanks to webcams and the like, getting into it is not quite as bad as it used to be. But for most purposes, you'll need one of those motorized mounts that'll track objects across the sky. That's very convenient for visual observing, too, naturally, but a really good one for long-exposure photography can cost more than the telescope itself! A motorized platform is almost never accurate enough for these purposes, I should add. I'm not an astrophotographer myself, so I won't go into great detail, but if you want to try this part of the hobby out (or know someone who does), prepare to think about the telescope mount as much as you think about the optics. As you'd imagine, all astrophotography these days is digital, with equipment ranging from simple webcams all the way up to stuff that easily costs as much as a new car, or perhaps a small house.
So, what to buy? I've scattered some Amazon links in the above to representative scopes. In general, Meade and Celestron are the two brands you'll see the most, and if you stay away from their cheap refractors, you should be fine. And Orion also sells good stuff of their own brand (on Amazon and from their own site). (Again, I'd stay away from inexpensive refractors there, too). Other good sources are Astronomics and Anacortes.
Update: as pointed out in the comments, an excellent resource for specific opinions on different models, and telescope advice in general, is Scopereviews. Cloudy Nights is also a huge resource.
+ TrackBacks (0) | Category: Science Gifts
One Alzheimer's compound recently died off in the clinic - Bristol-Myers Squibb's avagacestat, a gamma-secretase inhibitor, has been pulled from trials. The compound "did not establish a profile that supported advancement" to Phase III, says the company. Gamma-secretase has been a troubled area for some time, highlighted by the complete failure of Lilly's semagacestat. I wondered, when that one cratered, what they were thinking at BMS, and now we know.
But Merck is getting all the attention in Alzheimer's today. They've announced that their beta-secretase inhibitor, MK-8931, is moving into Phase III, and the headlines are. . .well, they're mostly just not realistic. "Hope for Alzheimer's", "Merck Becomes Bigger Alzheimer's Player", and so on. My two (least) favorites are "Merck Races to Beat Lilly Debut" and "Effective Alzheimer's Drug May Be Just Three Years Away." Let me throw the bucket of cold water here: that first headline is extremely unlikely, and the second one is insane.
As I've said here several times, I don't think that there's going to be any big Lilly debut into Alzheimer's therapy with their lead antibody candidate, solanezumab. (And if there is, we might regret it). The company does have a beta-secretase (BACE) inhibitor, but that's not what these folks are talking about. And looking at Merck's compound, you really have to wonder if there's ever going to be one there, either. I like Fierce Biotech's headline a lot better: "Merck Ignores Red Flags and Throws Dice on PhII/III Alzheimer's Gamble". That, unfortunately, is a more realistic appraisal.
It's interesting, though, that Merck is testing this approach in a patient population that includes moderate cases. After solanezumab and bapineuzumab appeared to hit their target without any clear signal that they improved symptoms in patients with more fully developed disease, there has been a growing move to shift R&D into earlier-stage patients, whose brains have not already been seriously damaged. Merck is likely to face growing skepticism that it can succeed with the amyloid hypothesis while tackling the same population that hasn't delivered positive data.
And BACE has been a rough place to work in over the years. The literature is littered with oddities, since finding a potent compound that will also be selective and get into the brain has been extremely difficult. I actually applaud Merck for having the nerve to try this, but it really is a big roll of the dice, and there's no use pretending otherwise. I wish that the headlines would get that across, as part of a campaign for a more realistic idea of what drug discovery is actually like.
+ TrackBacks (0) | Category: Alzheimer's Disease | Clinical Trials
December 3, 2012
I have tried to listen to this podcast with Marcia Angell, on drug companies and their research, but I cannot seem to make it all the way through. I start shouting at the screen, at the speakers, at the air itself. In case you're wondering about whether I'm overreacting, at one point she makes the claim that drug companies don't do much innovation, because most of our R&D budget is spent on clinical trials, and "everyone knows how to do a clinical trial". See what I mean?
Angell has many very strongly held opinions on the drug business. But her take on R&D has always seemed profoundly misguided to me. From what I can see, she thinks that identifying a drug target is the key step, and that everything after that is fairly easy, fairly cheap, and very, very profitable. This is not correct. Really, really, not correct. She (and those who share this worldview, such as her co-author) believe that innovation has fallen off in the industry, but that this has happened mostly by choice. Considering the various disastrously expensive failures the industry has gone through while trying to expand into new diseases, new indications, and new targets, I find this line of argument hard to take.
So, I see, does Alex Tabarrok. I very much enjoyed that post; it does some of the objecting for me, and illustrates why I have such a hard time dealing point-by-point with Angell and her ilk. The misconceptions are large, various, and ever-shifting. Her ideas about drug marketing costs, which Tabarrok especially singles out, are a perfect example (and see some of those other links to my old posts, where I make some similar arguments to his).
So no, I don't think that Angell has changed her opinions much. I sure haven't changed mine.
+ TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Drug Prices | Why Everyone Loves Us
Word comes that Fluorous is shutting down. The company had been trying for several years to make a go of it with its polyfluorinated materials, used for purification and reaction partitioning, but the commercial side of the business has apparently been struggling for a while. It's a tough market, and there hasn't, as far as I know, been what the software people would call a "killer app" for fluorous techniques - they're interested, often useful, but it's been hard to persuade enough people to take a crack at them.
The company is still taking orders for its remaining stock, and the link above will allow you to download their database of literature references for fluorous techniques, among other things. I wish the people involved the best, and I wish that things had worked out better.
+ TrackBacks (0) | Category: Business and Markets | Chemical News
Here's another next-generation X-ray crystal paper, this time using a free electron laser X-ray source. That's powerful enough to cause very fast and significant radiation damage to any crystals you put in its way, so the team used a flow system, with a stream of small crystals of T. brucei cathepsin B enzyme being exposed in random orientations to very short pulses of extremely intense X-rays. (Here's an earlier paper where the same team used this technique to obtain a structure of the Photosystem I complex). Note that this was done at room temperature, instead of cryogenically. The other key feature is that the crystals were actually those formed inside Sf9 insect cells via baculovirus overexpression, not purified protein that was then crystallized in vitro.
Nearly 4 million of these snapshots were obtained, with almost 300,000 of them showing diffraction. About 60% of those were used to refine the structure, which came out at 2.1 Angstroms and clearly showed many useful features of the enzyme. (Like others in its class, it starts out inhibited by a propeptide, which is later cleaved - that's one of the things that makes it a challenge to get an X-ray structure by traditional means).
I'm always happy to see bizarre new techniques used to generate X-ray structures. Although I'm well aware of their limitations, such structures are still tremendous opportunities to learn about protein functions and how our small molecules interact with them. I wrote about the instrument used in these papers here, before it came on line, and it's good to see data coming out of it.
+ TrackBacks (0) | Category: Analytical Chemistry | Chemical News