About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
May 31, 2011
My local NPR station had this report on this morning, on one-person drug companies. Can't outsource much more than that!
Here are the two companies profiled: LipimetiX and Deuteria. The former is using helical peptides to affect lipoprotein clearance, and the latter is (as you'd guess) in the deuterated-drug game, which I've most recently blogged on here. (That one's run by Sheila DeWitt, who used to work down the hall from me in grad school 25 years ago). And there are several other outfits that they could have mentioned - some of them are not quite down to one person, but you can count the employees on your fingers. In all of these cases, everything is being contracted out.
There are downsides, of course. For one thing, these are, almost by necessity, single-drug companies. It's enough of a strain just getting one project through under those conditions, let alone running a whole portfolio. So the risk is higher, given the typical failure rates in this line of work. And you have to trust your contractors, naturally. That's a bit easier to do in the Boston area (and a few other places), since you can get a lot of work sourced locally. That doesn't make it as much of a Bargain, Bargain, Bargain as it might be overseas, but at least you can drop in and see how things are going.
Another thing the NPR piece didn't address was where these projects come from. Many of them, I'd guess, are abandoned efforts from other companies that still have some possibilities. Those and the up-from-academia ideas probably take care of the whole list, wouldn't you think? Has anyone heard of one of these virtual-company ideas where the lead compound came from some sort of outsourced screen? And is an outsourced screen even possible? Now there's a business idea. . .
+ TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History
May 30, 2011
I'm enjoying myself at home this long weekend - fencing in a vegetable garden to keep the marauding groundhogs out, hoping the sky clears up enough to use my telescope tonight, and cooking some food in the backyard. And being glad to have the opportunity to do these things - luxuries, all of them - because of the sacrifices made over the last 225 years or so. Happy Memorial Day to my U.S. readers, and I'll see everyone tomorrow!
With my Arkansas/Tennessee background, I don't have to go out for barbecue. Just get started early, and make it yourself. . .
Update: As one person in the comments section put it, "What is this, Tet. Lett.?" So, here's the procedure for this prep:
That's about 7 pounds of ribs, which is all I can hold on that smoker/grill. The key, as far as I'm concerned, is to cook them a long time over rather low heat, with plenty of wood smoke. I use no sauce at all - people from Kansas City, St. Louis, Georgia and other locales will have their own opinions about that, but I'm true to the Memphis "dry rib" tradition.
Accordingly, take the raw ribs and season them with (at a minimum) seasoning salt and ground black pepper. That's what's on the ones in the photo. There are all kinds of dry-rub recipes out there, with paprika, cumin, brown sugar, mustard powder, and who knows what else in them. If you want to try those, you can find proportions for them all over the web, or buy a commercial mix from some Memphis outfit. My advice is to go easy on anything with sugar in it - it'll get too dark, or burn outright during the cooking. That's another reason that I don't sauce things while cooking them - that, and finding most commercial sauces way too sweet and gooey. I've always had a suspicion that barbeque places that make too big a deal out of their sauce must have something to hide where the meat is concerned.
Leave the dry seasoning on the ribs overnight if you can, or at least a couple of hours. Start an indirect fire to cook them over - to the side, if you have a big cooking surface, or at least a foot below the meat if you're doing it in a tubular meat smoker like the one in the picture. You'll also need some hickory wood chips or chunks - soaking them in water beforehand will give you more smoke - but I can't say exactly how much, since different batches give varying amounts of wood smoke.
Cook the ribs over low heat - not quite enough for audible sizzling - for several hours, with frequent addition of fresh hickory wood to the fire. You'll need to add some more charcoal to the fire itself over that long a period, too, naturally, but don't go too wild, or things will get too hot. Four or five hours should about do it, but (since there are so many variables in play here), you'll need to judge for yourself. You can finish them off (carefully) on a hot grill at the end if you wish, which will render out some of the fat.
Sauce - well, that's up to you. If you're the type, then add it in the last hour or so of cooking. I serve mine on the side, and often don't take any, but (as mentioned above) I grew up near Memphis.
+ TrackBacks (0) | Category: Blog Housekeeping
May 27, 2011
Let's add to the uncertainty about whether we understand cardiovascular disease, OK? The NIH has been conducting a large statin-plus-niacin trial, which is definitely a combination worth looking at. The statin will lower your LDL, and niacin will raise your HDL and lower your triglycerides (albeit with some irritating side effects). An earlier trial of niacin versus Zetia (ezetimibe) made the former look pretty good (and Zetia look pretty bad) using an endpoint of arterial examination by ultrasound.
But now the NIH trial has been stopped, a full 18 months early. Not only did the addition of niacin show no benefit at all, but that treatment group actually had a slightly higher rate of ischemic stroke. This despite the combination working as planned, from a blood-marker standpoint. No, we really still have a lot to learn, particularly when we're trying to raise HDL and lower triglycerides. These results, together with the fenofibrate data, really make a person wonder.
+ TrackBacks (0) | Category: Cardiovascular Disease | Clinical Trials
When we last spoke about the Avastin-and-breast-cancer story here, the FDA had rescinded its provisional approval for that indication, and a number of people were shouting that here it was, health care rationing based on price, right in front of us. As I said at the time, I think that those worries were misplaced: the reason Avastin was approved for metastatic breast cancer was that it seemed to work (a little). But when the numbers were firmed up with more studies, it turned out that it didn't. The whole point of a provisional approval is that it can be rolled back if things don't work out the way that they looked at first.
Now Genentech is coming back to the FDA next month asking for approval again. Here's an op-ed in the New York Times that I think does a good job of laying out the case against the whole idea:
Genentech presented progression-free survival as a surrogate for better quality of life, but the quality-of-life data were incomplete, sketchy and, in some cases, non-existent. The best that one Genentech spokesman could say was that “health-related quality of life was not worsened when Avastin was added.” Patients didn’t live longer, and they didn’t live better.
It was this lack of demonstrated clinical benefit, combined with the potentially severe side effects of the drug, that led the F.D.A. last year to reject the use of Avastin with Taxol or with the other chemotherapies for breast cancer.
In its appeal Genentech is changing its interpretation of its own data to pursue the case. Last year Genentech argued that the decrease in progression-free survival in its supplementary studies was not due to the pairing of Avastin with drugs other than Taxol. This year, however, in its brief supporting the appeal, Genentech argues that the degree of benefit may indeed vary with “the particular chemotherapy used with Avastin.” In other words, different chemotherapies suddenly do yield different results, with Taxol being superior. The same data now generate the opposite conclusion.
Another problem, as the piece says, is that the whole cancer drug approval process has a tendency to slip into anecdotal form: tearful patients testify that the drug saved their lives. But the plural of anecdote is still not data, and never will be. In oncology, there's really not much way of being sure about any individual patient's response. There are so many different types of cancer, and they occur in so many different kinds of people. The only way to say anything useful is in a well-designed clinical trial setting.
Now, that doesn't mean that you just have to round up thousands of people with all kinds of cancer and let things rip. It's perfectly acceptable - in fact, very useful - to screen the patients that go into the trials so that you're sure that they, as far as can be told, all have the same sort of disease. But you have to do that up front to really trust the conclusions. Data-mining, running things in reverse, is tricky, and if you're going to do it, it should be used to tell you how to run your next trial, not to argue for approval. Only when you've run these kinds of experiments can you say with any certainty that a cancer therapy is useful.
But that's a hard sell, compared to someone who is convinced that they're alive because of cancer drug X (or is convinced that a loved one would be alive, if they'd only been able to get it). If you're trying to persuade a crowd (or a mob), that would be the way to go: Aristotle's appeal to pathos. But keep in mind that Aristotle (and the rest of the Greeks) looked down on that technique, and they were right. Logos, used properly, is what we're after here, mixed in with the ethos of a disinterested observer who's trying to find the truth.
And this gets to the moral dilemma at the heart of the modern drug industry: are we trying to find drugs that work? Or are we trying to sell drugs, whether they work or not? Roche/Genentech has every right to make its case and to petition the FDA for whatever decision they want. But they (and every other drug company out there) owe the rest of us, and the rest of the world, something while they're doing it: to present all the solid data they have, and to let the numbers speak for themselves. But if the numbers can't persuade, then a company should go back and get some more before trying again.
+ TrackBacks (0) | Category: Cancer | Clinical Trials | Why Everyone Loves Us
May 26, 2011
OK, here's how I understand the way that medicinal chemistry now works at Pfizer. This system has been coming on for quite a while now, and I don't know if it's been fully rolled out in every therapeutic area yet, but this seems to be The Future According to Groton:
Most compounds, and most of the actual bench chemistry, are apparently going to be done at WuXi (or perhaps other contract houses?). Back here in the US, there will be a small group of experienced medicinal chemists at the bench, who will presumably be doing the stuff that can't be easily shipped out (time-critical, difficult chemistry, perhaps even IP-critical stuff, one wonders?) But these people are not, as far as I can tell, supposed to have ideas of their own.
No, ideas are for the Drug Designers, which is where the rest of Pfizer's remaining medicinal chemistry head count is to be found. These are the people who keep track of the SAR, decide what needs to be made next, and tell the folks in China to make it. It's presumably their call, what to send away for and what to do in-house, but one gets the sense that they're strongly encouraged to ship as much stuff out as possible. Cheaper that way, right? And it's not like there's a whole lot of stateside capacity, anyway, at this point.
What if someone working in the lab has (against all odds) their own thoughts about where the chemistry should go next? I presume that they're going to have to go and consult a Drug Designer, thereby to get the official laying-on of hands. That process will probably work smoothly in some cases, but not so smoothly in others, depending on the personalities involved.
So we have one group of chemists that are supposed to be all hands and no head, and one group that's supposed to be all head and no hands. And although that seems to me to be carrying specialization one crucial step too far, well, it apparently doesn't seem that way to Pfizer's management, and they're putting a lot of money down on their convictions.
And what about the whole WuXi/China angle? The bench chemists there are certainly used to keeping their heads down and taking orders, for better or worse, so that won't be any different. But running entire projects outsourced can be a tricky business. You can end up in a situation where you feel as if you're in a car that only allows you to move the steering wheel every twenty minutes or so. Ah, a package has arrived, a big bunch of analogs that aren't so relevant any more, but what the heck. And that last order has to be modified, and fast, because we just got the assay numbers back, and the PK of the para substituted series now looks like it's not reproducing. And we're not sure if that nitrogen at the other end really needs to be modified any more at this point, but that's the chemistry that works, and we need to keep people busy over there, so another series of reductive aminations it is. . .
That's how I'm picturing it, anyway. It doesn't seem like a particularly attractive (or particularly efficient) picture to me, but it will at least appear to spend less money. What comes out the other end, though, we won't know for a few years. And who knows, someone may have changed their mind by then, anyway. . .
+ TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Life in the Drug Labs
May 25, 2011
A reader sends along this example of a "stereodestructive" synthesis. I have nothing in particular against N-alkylpyrroles, but do we need another route to them so badly that we have to tear up not-so-cheap hydroxyproline to get there, burning up two chiral centers in the process?
Readers are invited to submit other examples from the "wad it up and throw it away" school of chiral synthesis in the comments. . .
+ TrackBacks (0) | Category: Chemical News
What on earth is happening over in Pfizer's cardiovascular department? CVMED, as it's called there, went through a nasty round of cuts in April, with many layoffs and many transfers to the Boston site. Now I'm hearing that so few people have accepted those transfers that the company has changed course and said that these people are now going to stay in Groton after all - which is news to the ones who already have their houses on the market. . .
This sort of thing has happened before with Pfizer - I'm thinking of the people who moved to Connecticut from Michigan a few years ago, some of whom were then laid off what, six months later? But if anyone has details on this latest mess, tell us in the comments. (I've also heard that there's more layoff news there this week - details there are welcome, too).
+ TrackBacks (0) | Category: Business and Markets
May 24, 2011
Via Matt Herper, here (PDF) is an interesting survey from Quintiles, the large clinical outsourcing company, on how different groups perceive the value of new drugs.
The first problem is that not everyone can agree on what's valuable. Among the managed-care people and physicians surveyed, the number one factor mentioned is cost. Biopharma respondents mentioned cost, but were more weighted toward outcomes (which, for example, was a factor in only 10% of the physician responses). Patients. . .well, patients mentioned cost, but not as much as the doctors or insurance people, and they hardly noted outcomes at all (single digits). "Not sure" was a front-runner.
When things were asked in a less free-form way, though, with a list of answers to choose from, patient outcomes and safety were always the top two factors among all four groups - followed by quality of life, followed by cost. An interesting discrepancy, I have to say. When asked if they agree with the statement that "All in all, the money patients spend on prescription medication is worth it", 84% of the biopharma people agree, as do 80% of the patients. Doctors were 70/30, but managed care people were 56/44. (These are all mixtures of "somewhat agree" and "strongly agree", by the way).
And when asked to rank various groups according to how much value they add to health care, doctors and medical staff come out number one, no matter who's asked. "Scientists and medical researchers" come in second - except in the case of the physicians, it's a very distant second indeed. (They rank themselves so highly that there's very little left over for anyone else: 81% versus a bunch of single digits). But, interestingly, "Biopharmaceutical companies" get ranked at 11% by people in biopharma, 5% by patients, and at 1% by physicians and managed care.
So where do all these scientists and medical researchers, who are ranked much higher, actually work? Why, at pure, untainted institutes, one guesses - in spotless white coats, their minds on higher things, somewhere far away from the business of actually making and selling drugs. . .
+ TrackBacks (0) | Category: Why Everyone Loves Us
Here's an interesting note from the Wall Street Journal's Health Blog. I can't summarize it any better than they have:
"When former NIH head Elias Zerhouni ran the $30 billion federal research institute, he pushed for so-called translational research in which findings from basic lab research would be used to develop medicines and other applications that would help patients directly.
Now the head of R&D at French drug maker Sanofi, Zerhouni says that such “bench to bedside” research is more difficult than he thought."
And all across the industry, people are muttering "Do tell!" In fairness to Zerhouni, he was, in all likelihood, living in sort of a bubble at NIH. There probably weren't many people around him who'd ever actually done this sort of work, and unless you have, it's hard to picture just how tricky it is.
Zerhouni is now pushing what he calls an "open innovation" model for Sanofi-Aventis. The details of this are a bit hazy, but it involves:
". . .looking for new research and ideas both internally and externally — for example, at universities and hospitals. In addition, the company is focusing on first understanding a disease and then figuring out what tools might be effective in treating it, rather than identifying a potential tool first and then looking for a disease area in which it could be helpful."
Well, I don't expect to see Sanofi's whole strategy laid out in the press, but that one isn't even as impressive as it sounds. The "first understanding a disease" part sounds like what Novartis has been saying for some time now - and honestly, it really is one of the things that we need, but that understanding is painfully slow to dawn. Look at, oh, Alzheimer's, to pick one of those huge unmet medical needs that we'd really like to address in this business.
With a lot of these things, if you're going to first really understand them, you could have a couple of decades' wait on your hands, and that's if things go well. More likely, you'll end up doing what we've been doing: taking your best shot with what's known at the moment and hoping that you got something right. Which leads us to the success rates we have now.
On the other hand, maybe Zerhouni should just call up Marcia Angell or Donald Light, so that they can set him straight on the real costs of drug R&D. Why should we listen to a former head of the NIH who's now running a major industrial research department, when we can go to the folks who really know what they're talking about, right? And I'd also like to know what he thinks of Francis Collins' plan for a new NIH translational research institute, too, but we may not get to hear about that. . .
+ TrackBacks (0) | Category: Academia (vs. Industry) | Drug Development | Drug Industry History
May 23, 2011
Sorry about no new post today - it's been a lively day around here, for a number of reasons (none of them bad). New content tomorrow!
+ TrackBacks (0) | Category: Blog Housekeeping
May 20, 2011
Here's a general question for all you lab types, prompted by some rearranging that I've been doing over at my bench: what piece of equipment do you get the least use out of for the space it takes up? Those dusty items that haven't been touched in a couple of years are obvious candidates, but feel free to add some instruments that work, but crowd out other useful items. . .
+ TrackBacks (0) | Category: Life in the Drug Labs
Now, this is a strange little paper in Chem. Comm. The authors are studying small reverse micelles (RMs, basically, for those of you not in the field, bits of water enclosed by a layer of soap-like organic molecules).
Nothing wrong with that - micelles and reverse micelles have been objects of study for many years now. But they're saying that when they look at positively charged molecules and the way that they associate with positively charged RMs, once the size of the reverse micelles gets small enough, like charges attract instead of repel:
Comparing the results in the RMs and in the conventional micelles, it is quite evident that the violation in the principle of electrostatic interaction is not a general phenomenon and is quite specific for the nano-confined environment, like in RMs. Thus, the charged surface formed under the nano-confinement shows quite extraordinary electrostatic behaviour as compared to other normal charged surfaces.
They have some possible explanations, such as the large number of counterions in the small micellar pool of water providing electrostatic screening. They go on to suggest that if this effect is robust, that it could have real implications for behavior in biological systems (and for various drug-carrier ideas). Any thoughts from the more physical-chemistry oriented members of the crowd?
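For a sense of scale on that screening explanation, here's a back-of-the-envelope sketch (my own illustration, not a calculation from the paper): the Debye screening length in an electrolyte shrinks with ionic strength, and the effective counterion concentration in a nanometer-scale water pool can be very high indeed, which would mute electrostatic repulsion over distances comparable to the pool itself.

```python
import math

def debye_length_nm(ionic_strength_mol_L, temp_K=298.15, rel_permittivity=78.5):
    """Debye screening length for a water-like medium, in nanometers.

    lambda_D = sqrt(eps0 * eps_r * kB * T / (2 * NA * e^2 * I)),
    with the ionic strength I converted from mol/L to mol/m^3.
    """
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    kB = 1.380649e-23         # Boltzmann constant, J/K
    e = 1.602176634e-19       # elementary charge, C
    NA = 6.02214076e23        # Avogadro's number, 1/mol
    I = ionic_strength_mol_L * 1000.0  # mol/m^3
    lam = math.sqrt(eps0 * rel_permittivity * kB * temp_K /
                    (2 * NA * e**2 * I))
    return lam * 1e9  # meters -> nanometers

# Bulk physiological saline (~0.15 M): screening length around 0.8 nm.
print(debye_length_nm(0.15))
# A tiny reverse-micelle water pool with multi-molar effective counterion
# concentration: the screening length drops below 0.2 nm, shorter than
# the pool diameter itself.
print(debye_length_nm(3.0))
```

The numbers are only order-of-magnitude (continuum electrostatics is itself questionable at these sizes, which may be part of the paper's point), but they show why the counterion-screening argument is at least plausible.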
+ TrackBacks (0) | Category: Chemical News
May 19, 2011
Hmm. Remember when the Nobel Prize came out for telomere research? Now there are competing companies offering telomere-length screening, and one of them (Telome Sciences) was partly founded by Elizabeth Blackburn, one of the Nobel awardees. That isn't going down well with. . .one of the other awardees:
But among the critics of such tests is Carol Greider, a molecular biologist at Johns Hopkins University, who was a co-winner of the Nobel Prize with Dr. Blackburn.
Dr. Greider acknowledged that solid evidence showed that the 1 percent of people with the shortest telomeres were at an increased risk of certain diseases, particularly bone marrow failure and pulmonary fibrosis, a fatal scarring of the lungs. But outside of that 1 percent, she said, “The science really isn’t there to tell us what the consequences are of your telomere length.”
Dr. Greider said that there was great variability in telomere length. “A given telomere length can be from a 20-year-old or a 70-year-old,” she said. “You could send me a DNA sample and I couldn’t tell you how old that person is.”
Greider is also a former student of Blackburn's, which makes things even messier. I can see why she's uneasy. Looking over the news accounts, there's an awful lot of noise and hype - all kinds of stuff about "Test Predicts How Long You'll Live!" and so on. The hype has been building for some time, though, and I'll bet that we're nowhere near the crest. As for me, I'm not rushing out to check my telomeres until I know what that means (and until I know if there's anything I can do about it).
+ TrackBacks (0) | Category: Biological News | Business and Markets
So Avandia (rosiglitazone) will be pulled from the market this fall. I've already written a few pieces on that whole market - PPAR ligands - but this still makes a person think. (See this post for the whole list). Starting in the mid-1990s or so, a huge amount of time, effort, and money went into PPAR alpha, gamma, and delta compounds. (In the interests of full disclosure, some of that effort was mine). And a lot of the interest was sparked by the possibilities of rosiglitazone (and its cousin pioglitazone, sold as Actos, which remains on the market). This was the first mechanism that looked to actually target some of the underlying defects in Type II diabetes, although no one was sure quite how. And "Rosi" was the PPAR-gamma ligand that all others were compared to in the labs.
But now it's gone, and the PPAR field is comparatively moribund. Glaxo, SmithKline (before the merger), Merck, Lilly, BMS, Bayer, Kyorin, Ligand - all these companies and many others poured resources into the field, and here's what we're left with: the three earliest PPAR-gamma compounds made it through, and now two of them (troglitazone left early) are gone. So one gamma ligand, one alpha (fenofibrate, the one that everyone started with), and no delta. None of the combinations (and boy, were there a lot of them) ever made it, either. Two drugs out of the whole field, and neither of them discovered after the target-based approach kicked in. Yikes. And people still want to know why their prescriptions cost so much.
+ TrackBacks (0) | Category:
May 18, 2011
Abbott has some difficult times ahead with their fenofibrate franchise. That's TriCor, and its newer formulation, TriLipix. Fenofibrate, as I've mentioned here before, is an oddity among drugs. It was discovered way before anyone had a mechanism of action, and even now, while it's supposed to be a PPAR-alpha ligand, no one's completely happy with that explanation. (For one thing, it's not very potent at that nuclear receptor, while other PPAR-alpha compounds have crashed in clinical trials for various reasons). But it can lower triglycerides and raise HDL, which should both (in theory) be beneficial effects, and it's been a big seller over the years.
But how much good does it do? That's always the big, important, slow question in the cardiovascular field. The data for fenofibrate have always been somewhat messy (although probably positive overall), but a new study has muddied things up. As the FDA puts it, in the documents for an advisory committee meeting tomorrow (PDF):
Over the last 40 years laboratory and clinical data have suggested the potential of fibrates to reduce cardiovascular risk. However, data from large clinical outcomes trials have produced mixed results. The inconsistent outcomes may be a result of differences in pharmacodynamic properties among individual fibrates or study populations or both.
The new data, from a trial called ACCORD-Lipid, is another one looking at fenofibrate plus a statin, which is the usual combination (that way, at least in theory, you go after triglycerides, low HDL, and high LDL simultaneously). But this trial, in a large population of diabetic patients, showed that overall, the rate of major adverse cardiovascular events (MACE) was statistically identical between the statin/fenofibrate and statin/placebo groups. No advantage! It gets trickier with a bit of subgroup analysis: women showed some evidence of worse outcomes with fenofibrate as opposed to statin alone. The group that seemed to benefit, on the other hand, were the patients who started out with the highest triglycerides and the lowest HDL. (See that FDA file above for all the numbers and more).
That's disconcerting. Is fenofibrate only helping the worst-off patients, and doing nothing (or worse) for the others? That's a question worth wrestling with for a drug that sold well over a billion dollars last year. And beyond that is the same sort of question that came up when all the ezetimibe data hit: how much do we really know about blood markers versus real cardiovascular outcomes? Can you hit the various numbers by different routes, some of which are beneficial and some of which aren't? What is it that we're not understanding?
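And that's why subgroup results always deserve a squint. Slice a null overall result into enough subgroups and some of them will look "significant" by chance alone - here's a quick generic simulation of that multiple-looks problem (my own illustration, nothing to do with the actual ACCORD numbers):

```python
import random

def chance_of_spurious_subgroup(n_subgroups=10, z_cutoff=1.96,
                                n_trials=20000, seed=1):
    """Fraction of simulated *null* trials in which at least one subgroup
    clears the nominal two-sided 5% significance bar.

    Under the null hypothesis, each subgroup's z-statistic is just a
    standard normal draw, so any "significant" subgroup is pure noise.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        if any(abs(rng.gauss(0.0, 1.0)) > z_cutoff
               for _ in range(n_subgroups)):
            hits += 1
    return hits / n_trials

# With 10 independent subgroup looks at a truly null treatment effect,
# roughly 40% of trials show at least one "winning" subgroup
# (theory: 1 - 0.95**10, about 0.40). With one look, it's back to ~5%.
print(chance_of_spurious_subgroup())
print(chance_of_spurious_subgroup(n_subgroups=1))
```

That's the reason a subgroup finding like the high-triglyceride/low-HDL one is treated as hypothesis-generating until a trial is run with that group specified up front.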
+ TrackBacks (0) | Category: Cardiovascular Disease | Clinical Trials | Regulatory Affairs
Tim Harford (author of The Undercover Economist and The Logic of Life) has a new book coming out, called Adapt. It's about success and failure in various kinds of projects, and excerpts from it have been running over at Slate. The first installment was a look at the development (messy and by no means inevitable) of the Spitfire before World War II (I'd also add the de Havilland Mosquito as another example of a great plane developed through sheer individual persistence). And the second one is on biomedical research, which takes it right into the usual subject matter around here:
In 1980, Mario Capecchi applied for a grant from the U.S. National Institutes of Health. . .Capecchi described three separate projects. Two of them were solid stuff with a clear track record and a step-by-step account of the project deliverables. Success was almost assured.
The third project was wildly speculative. Capecchi was trying to show that it was possible to make a specific, targeted change to a gene in a mouse's DNA. It is hard to overstate how ambitious this was, especially back in 1980. . .The NIH decided that Capecchi's plans sounded like science fiction. They downgraded his application and strongly advised him to drop the speculative third project. However, they did agree to fund his application on the basis of the other two solid, results-oriented projects. . .
What did Capecchi do? He took the NIH's money, and, ignoring their admonitions, he poured almost all of it into his risky gene-targeting project. It was, he recalls, a big gamble. If he hadn't been able to show strong enough initial results in the three-to-five-year time scale demanded by the NIH, they would have cut off his funding. Without their seal of approval, he might have found it hard to get funding from elsewhere. His career would have been severely set back, his research assistants looking for other work. His laboratory might not have survived.
Well, it worked out. But it really did take a lot of nerve; Harford's right about that. He's not bashing the NIH, though - as he goes on to say, their granting system is pretty similar to what any reasonable gathering of responsible people would come up with. But:
The NIH's expert-led, results-based, rational evaluation of projects is a sensible way to produce a steady stream of high-quality, can't-go-wrong scientific research. But it is exactly the wrong way to fund lottery-ticket projects that offer a small probability of a revolutionary breakthrough. It is a funding system designed to avoid risks—one that puts more emphasis on forestalling failure than achieving success. Such an attitude to funding is understandable in any organization, especially one funded by taxpayers. But it takes too few risks. It isn't right to expect a Mario Capecchi to risk his career on a life-saving idea because the rest of us don't want to take a chance.
Harford goes on to praise the Howard Hughes Medical Institute's investigator program, which is more explicitly aimed at funding innovative people and letting them try things, rather than the "Tell us what you're going to discover" style of many other granting agencies. Funding research in this style has been advocated by many people over the years, including a number of scientific heroes of mine, and the Hughes approach seems to be catching on.
It isn't straightforward. You want to make sure that you're not just adding to the Matthew Effect by picking a bunch of famous names and handing them the cash. (That's the debate in the UK after a recent proposal to emulate the HHMI model). No, you're better off finding people with good ideas and the nerve to pursue them, whether they've made a name for themselves yet or not, but that's not an easy task.
Still, I'm very happy that these changes in academic funding are in the air. I worry that our system is sclerotic and less able to produce innovations than it should be, and shaking it up a bit is just what's needed.
+ TrackBacks (0) | Category: Who Discovers and Why
May 17, 2011
Yesterday's look into the Google Ngram data set brought up a discussion in the comments on how good the numbers are in it (and in other large datasets). "Garbage in, garbage out" is as true a statement as ever, so it's a real worry. (Even if the data were perfect, the numbers could still be misused and misinterpreted, of course).
An e-mail from a reader pointed me to another example of this sort of thing. The NIH Chemical Genomics Center (NCGC) has a collection of known pharmaceutically active compounds for use in screening and target ID. This is a good idea, and the same sort of thing is done internally in the drug industry. But the ChemConnector blog has some questions about how robust the dataset is. The rough estimate is that between 5 and 10% of the 7600+ structures are messed up in some way (stereochemistry, salt form, the dreaded pentavalent carbon, and so on).
Read the comments there for some interesting back-and-forthing with the NIH people. The NCGC folks realize that they have some problems, and are willing to put in the work to help clean things up. The problem is, they'd already published on this list, calling it "definitive, complete, and nonredundant", which now seems to be a bit premature. . .
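For the curious, here's roughly what one of those automated sanity checks looks like. This is a toy sketch, not the NCGC's actual pipeline (in practice you'd lean on a cheminformatics toolkit like RDKit to parse and sanitize real structure files), but it shows how something like the dreaded pentavalent carbon gets caught: tally the bond orders on each atom and compare against the allowed maximum.

```python
# Toy valence check of the sort a dataset-curation pipeline might run.
# Illustrative only - real tools work from SMILES/SDF via a
# cheminformatics library, not hand-built atom/bond lists.

MAX_VALENCE = {"C": 4, "N": 3, "O": 2, "H": 1}

def valence_errors(atoms, bonds):
    """atoms: dict of atom id -> element symbol.
    bonds: list of (id1, id2, bond_order) tuples.
    Returns ids of atoms whose total bond order exceeds the maximum."""
    totals = {a: 0 for a in atoms}
    for a1, a2, order in bonds:
        totals[a1] += order
        totals[a2] += order
    return [a for a, t in totals.items()
            if t > MAX_VALENCE.get(atoms[a], 8)]

# Methane: four single bonds to carbon - fine.
methane = ({0: "C", 1: "H", 2: "H", 3: "H", 4: "H"},
           [(0, 1, 1), (0, 2, 1), (0, 3, 1), (0, 4, 1)])

# The dreaded pentavalent carbon: five single bonds to atom 0.
bad = ({0: "C", 1: "H", 2: "H", 3: "H", 4: "H", 5: "H"},
       [(0, i, 1) for i in range(1, 6)])

print(valence_errors(*methane))  # []
print(valence_errors(*bad))      # [0]
```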
+ TrackBacks (0) | Category: Chemical News | The Scientific Literature
Venture capitalist Bruce Booth has moved his blog over to the Forbes network, and in his latest post he has some solid advice for people who are preparing to pitch him (and people like him) some ideas for a new company. It's very sensible stuff, including the need to bring as much solid data as you possibly can, not to spend too much time talking about how great everyone on your team is, and not to set off the hype detectors. (Believe it, everyone who's dealt with early-stage biotech and pharma has a very sensitive, broad-spectrum hype detector, and the "off" switch stopped working a long time ago).
He also has some advice that might surprise people who haven't been watching the startup industry over the last few years: "Unless you are really convinced you have a special story that Wall Street will love, please don’t use that three-letter word synonymous with so much value destruction: I-P-O." That's the state of things these days, for better or worse - the preferred exit strategy is to do a good-sized deal with a larger company, and most likely to be bought outright.
And this is advice that I wish that more seminar speakers would follow, not just folks pitching a company proposal:
It's annoying when an entrepreneur touting a discovery-stage cancer program has multiple slides on how big the market is for cancer drugs, what the sales of Avastin were last year, what the annual incidence of the big four cancers are, etc… These slides give me a huge urge to reach for my Blackberry. We know cancer is huge. Unless you’ve got a particular angle on a disease or market that’s unique or unappreciated, don’t bother wasting time on the macro metrics of these diseases, especially when you’re in drug discovery.
Yes indeed, and that goes for anyone who's talking outside the range of their expertise. If you're giving a talk, it should be on something that you know a lot about - more than your audience, right? So why do we have to sit through so many chemists talking about molecular biology, molecular biologists talking about market size, and so on? My rule on that stuff is to hold it down to one slide if possible, and to skip through it lightly even then. I've even seen candidates come in for an interview and spend precious time, time that could be spent showing what they can do and why they should be hired, on telling everyone things that they already know and don't care to hear again.
+ TrackBacks (0) | Category: Business and Markets | Drug Development | How To Get a Pharma Job
May 16, 2011
A comment to the last post mentioned that if you search the word "biotechnology" in Google's Ngram search engine, something odd happens. There's the expected rise in the 1970s and 80s, but there's also a bump in the early 1900s, for no apparent reason. Curious about this, I ran several other high-tech phrases through and found the exact same effect.
Here's a good example, with some modern physics phrases. And you get the same thing if you search "nanotechnology", "ribosome", "atomic force microscope", "RNA interference", "laser", "gene transfer", "mass spectrometer" or "nuclear magnetic resonance". There's always a jump back in exactly the same period in the early 1900s.
So what's going on? I can understand some OCR errors, but why do these things show up in this specific Edwardian-age window? Can anyone at Google shed any light on this?
+ TrackBacks (0) | Category: General Scientific News | The Scientific Literature
I was thinking the other day that I never remembered hearing the phrase "Big Pharma" when I first got a job in this business (1989). Now I have some empirical proof, thanks to the Google Labs Ngram Viewer, that the phrase has only come into prominence more recently. (Fair warning: you can waste substantial amounts of time messing with this site). Here's the incidence rate of "big pharma" in English-language books from 1988 to 2000.
It comes from nowhere, blips to life in 1992, doesn't even really get off the baseline until 1994 or so, and then takes off. (The drops in 2005 and 2008 remain unexplained - did the log phase of its growth end in 2004?)
Update: that graph holds for the uncapitalized version of the phrase. If you put the words in caps, you get the even more dramatic takeoff shown below:
To be fair, though, there seems to have been a general rise in Big Pharma-related literature during that period. Try out this graph, comparing mentions of Merck, Pfizer, and Novartis since 1970. The last-named, of course, didn't even exist until the early 1990s, but they (like the others) have spent the time since then zipping right up, with no apparent end in sight. (Merck, especially - what's with those guys?) And what accounts for this? Business books? Investing guides? Speculation is welcome.
Note: the above paragraph was written before realizing that the Google Ngram search is case-sensitive - so, as was pointed out in the comments, I was picking up on people not caring about capitalization more than anything else. Below is the correct graph, with initial capitals in the search, and it makes more sense. Merck still is the king of book mentions, though, for all the coverage that Pfizer gets.
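A practical aside for anyone scripting against exported Ngram data: since the search is case-sensitive, a case-insensitive trend has to be assembled by summing the variant series yourself. A minimal sketch, with made-up numbers standing in for the real counts:

```python
# Combining case variants of an ngram into one case-insensitive series.
# The yearly counts below are invented for illustration only.
lower = {1992: 1, 1996: 4, 2000: 9}   # "big pharma"
caps  = {1992: 0, 1996: 2, 2000: 7}   # "Big Pharma"

combined = {year: lower.get(year, 0) + caps.get(year, 0)
            for year in sorted(set(lower) | set(caps))}
print(combined)  # {1992: 1, 1996: 6, 2000: 16}
```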
I'll finish off with this one, using a longer time scale. Yes, folks, for better or worse, it appears that the phrase "organic chemistry" peaked out between book covers around 1950, and has been declining ever since. Meanwhile, "total synthesis" started rising during the World War II era (penicillin?), and kept on moving up until a peak around 1980. Interestingly, things turned around in 2000 or so, and especially since 2003. And this can't be ascribed to some sort of general surge in chemistry publications - look at the "organic chemistry" line during the same period. Is there some other field that's adopted the phrase?
+ TrackBacks (0) | Category: Drug Industry History | General Scientific News | The Scientific Literature
May 13, 2011
Not a common occurrence, that. But this Wall Street Journal article goes into details on some efforts to improve the synthetic route to Viread (tenofovir) (or, to be more specific, TDF, the prodrug form of it, which is how it's dosed). This is being funded by former president Bill Clinton's health care foundation:
The chasm between the need for the drugs and the available funding has spurred wide-ranging efforts to bring down the cost of antiretrovirals, from persuading drug makers to share patents of antiretrovirals to conducting trials using lower doses of existing drugs.
Beginning in 2005, the Clinton team saw a possible path in the laboratory to lowering the price of the drugs. Mr. Clinton's foundation had brokered discounts on first-line AIDS drugs, many of which were older and used relatively simple chemistry. Newer drugs, with advantages such as fewer side effects, were more complex and costly to make. . .A particularly difficult step in the manufacture of the antiretroviral drug tenofovir comes near the end. The mixture at that point is "like oatmeal, making it very difficult to stir," explained Prof. Fortunak. That slows the next reaction, a problem because the substance that will become the drug is highly unstable and decomposing, sharply lowering the yield.
Fortunak himself is a former Abbott researcher, now at Howard University. One of his students does seem to have improved that step, thinning out the reaction mixture (which was gunking up with triethylammonium salts) and improving the stability of the compound in it. (Here's the publication on this work, which highlights that step, formation of a phosphate ester, which is greatly enhanced with addition of tetrabutylammonium bromide). This review has more on production of TDF and other antiretrovirals.
This is a pure, 100% real-world process chemistry problem, as the readers here who do it for a living will confirm, and it's very nice to see this kind of work get the publicity that it deserves. People who've never synthesized or (especially) manufactured a drug generally don't realize what a tricky business it can be. The chemistry has to work on large scale (above all!), and do so reproducibly, hitting the mark every time using the least hazardous reagents possible, which have to be reliably sourced at reasonable prices. And physically, the route has to avoid extremes of temperature or pressure, with mixtures that can be stirred, pumped from reactor to reactor, filtered, and purified without recourse to the expensive techniques that those of us in the discovery labs use routinely. Oh, and the whole process has to produce the least objectionable waste stream that you can come up with, too, in case you've got all those other factors worked out already. Not an easy problem, in most cases, and I wish that some of those people who think that drug companies don't do any research of their own would come down and see how it's done.
To give you an example of these problems, the paper on this tenofovir work mentions that the phosphate alkylation seems to work best with magnesium t-butoxide, but that the yield varies from batch to batch, depending on the supplier. And in the workup to that reaction, you can lose product in the cake of magnesium salts that have to be filtered out, a problem that needs attention on scale.
According to the article, an Indian generic company is using the Howard route for tenofovir that's being sold in South Africa. (Tenofovir is not under patent protection in India). Interestingly, two of the big generic outfits (Mylan and Cipla) say that they'd already made their own improvements to the process, but the question of why that didn't bring down the price already is not explored. Did the Clinton foundation improve a published Gilead route that someone else had already fixed? Cipla apparently does the same phosphate alkylation (PDF), but the only patent filing of theirs that I can find that addresses tenofovir production is this one, on its crystalline form. Trade secret?
+ TrackBacks (0) | Category: Chemical News | Drug Development | Drug Prices | Infectious Diseases
May 12, 2011
So how well has raiding the biotech sector (Biogen, Genzyme) worked out for Carl Icahn? According to this estimate in the Boston Globe, he's made a lot of money. But (and here's a big point that the article doesn't, in my view, make enough of). . .he hasn't really made more than he would have made by investing in the biotech sector as a whole.
Naturally, he's beaten the S&P all to pieces, as a private equity fund darn well should if it can. But the Biotech stock index has been on a tear, too, and he hasn't beaten it by much. So how would it have been if he'd just stayed home with his money and bought the basket of stocks, eh? Not nearly as much fun, and not much chance to influence the directions of whole companies, which is what a mover and shaker like Icahn lives for. But still. . .
+ TrackBacks (0) | Category: Business and Markets
May 11, 2011
Via John Hawks, here's an interesting interview with writer John McPhee, known to many for his long-form explorations (and explanations) of geology.
When he started doing that in the New Yorker, though, editor William Shawn told him to go ahead, although he warned him that "readers will rebel". And that they did - McPhee says that he got extremely polarized feedback from those pieces: loved them, loathed them. His explanation?
Two cultures. There are some people whose cast of mind admits that sort of stuff, and there are others who are just paralyzed by it at the outset, no matter how crafty the writing might be. A really nice thing that happens is when people say, I never thought I’d be interested in that subject until I read your piece. These letters come about geology too, but there are some people who just aren’t going to read it at all. Some lawyer in Boston sent me a letter—this man, this adult, had gone to the trouble to write in great big letters: stop writing about geology. And it’s on the letterhead of a law firm in Boston. I did not write back and say, One thing this country could very much use is one less lawyer. Why don’t you stop doing law?
Good point! But I know what he's talking about. I remember William Rusher, who used to publish National Review, writing about how he had to tell a colleague that "there is no concept so simple that I can fail to understand it when presented as a graph". That made me feel the two cultures divide, for sure. But it's perhaps not as stark as the classic C. P. Snow formulation: there are plenty of scientists who appreciate literature and the arts, and (as McPhee notes), there are plenty of people who know more about the humanities who find that they enjoy scientific topics once they're exposed to them.
My two cultures, then, are the people who can appreciate science and the arts, versus the people who can appreciate only one of the two. (I'm leaving aside people who can't appreciate either one). So there are one-mode-only folks like the lawyer who wrote above (or William Rusher), and the corresponding scientists and engineers who might never pick up a book or appreciate a painting. And it may just be my own prejudices speaking, but I think that there are more one-mode-onlies who fit that first description, and that actually does take us back to a famous quote from C. P. Snow:
A good many times I have been present at gatherings of people who, by the standards of the traditional culture, are thought highly educated and who have with considerable gusto been expressing their incredulity at the illiteracy of scientists. Once or twice I have been provoked and have asked the company how many of them could describe the Second Law of Thermodynamics. The response was cold: it was also negative. Yet I was asking something which is the scientific equivalent of: Have you read a work of Shakespeare's?
Right he was, and is. If you (scientist or literary type), are up for an in-depth discussion on the Second Law, with reference to Shakespeare and much else, try this by Frank Lambert, who's put a lot of thought into the subject.
+ TrackBacks (0) | Category: General Scientific News
Word reached me yesterday that Corwin Hansch, long of Pomona College, had died. Anyone who's ever done (or thought about) trying to apply mathematical techniques to compound structure-activity relationships has internalized some of his work. (Here's an intro, for those who haven't encountered classical QSAR).
I was quite excited about using such techniques (and their successors) early in my career, but ran into difficulty applying them in the real world. There were several complications - our compounds were (very likely) in several different SAR series, so combining them wasn't doing the analysis any favors; we had gaps in the compound space that would have helped refine the calculations (but were difficult to prepare and not felt to be worth the trouble to make), and, perhaps most importantly, the underlying assay data might not have been as tight as it needed to be to give sensible answers. These problems are not unique.
But that said, Hansch deserves a lot of credit for going after the whole idea of applying linear free-energy relationships to med-chem activity, and for having the fortitude to do so in the computationally deficient early 1960s. It's because of his work (and the many people who followed his lead) that we've come to realize how tricky these problems are. He was indeed a pioneer.
I've also been remiss in not mentioning the unexpected death of David Gin of Illinois and then Sloan-Kettering. Gin was an excellent synthetic chemist who tackled some very difficult problems in carbohydrate chemistry, among other areas - here's just one example, and there are many more. He surely had many more discoveries left to make, and his loss is a loss to the field.
+ TrackBacks (0) | Category: Chemical News
May 10, 2011
I had to use some potassium permanganate a little while back - first time in years I'd had any of it out in the lab, and I was reminded of just what a spectacular purple color the stuff has.
There's some of it dissolving in water, via Flickr, and it's hard to beat for sheer purplelosity. But the solid doesn't look as impressive; it's quite dark (which is probably how it makes such an intense color on dissolution). So what's the best purple solid in the lab?
I have to promote my personal favorite, chromium (III) chloride (image courtesy of the Wikipedia entry).
That's a pretty good shot, but it really should be experienced in person. The stuff is metallic purple flakes, weirdly reflective - it looks like it should be the color of a custom racer's hood, rather than anything you'd actually order from a chemical supply house. Now all I have to do is find a use for it in the lab. . .
+ TrackBacks (0) | Category: Life in the Drug Labs
We know what clinical trial success rates have been like for the last twenty years or so (hint: not so good). Are things turning around, or not? This Nature Reviews Drug Discovery piece takes a look at the 2008-2010 data. It's not necessarily reassuring:
At present, however, Phase II success rates are lower than at any other phase of development. Analysis by the Centre for Medicines Research (CMR) of projects from a group of 16 companies (representing approximately 60% of global R&D spending) in the CMR International Global R&D database reveals that the Phase II success rates for new development projects have fallen from 28% (2006–2007) to 18% (2008–2009), although these success rates do vary between therapeutic areas and between small molecules and biologics.
There were 108 Phase II failures in 2008-2010, and for 87 of those we have a stated reason. Half of those were good old lack of efficacy, another 19% failed on safety grounds, and the rest failed for "strategic reasons". The best guess there is that the compounds seem to have been targeting areas where there was already competition, and they didn't differentiate themselves enough from the standard of care to be worth continuing. That's worth thinking about in the context of the arguments about "me-too" drugs. To hear some of the industry's critics tell it, there shouldn't be any such failures at all, since they seem to believe that most marketed drugs don't really differentiate themselves from their competition as it is.
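For what it's worth, the stated percentages imply rough head counts that are easy to back out. This is my own arithmetic from the quoted figures, not numbers taken directly from the paper:

```python
# Rough counts implied by the stated breakdown of the 87 Phase II
# failures with a known reason: ~half efficacy, 19% safety, rest
# "strategic". Back-of-the-envelope only.
stated = 87

efficacy = round(stated * 0.50)          # ~44 failures
safety = round(stated * 0.19)            # ~17 failures
strategic = stated - efficacy - safety   # ~26 failures

print(efficacy, safety, strategic)  # 44 17 26
```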
Nearly 70% of those 108 failures, by the way, were in four therapeutic areas: cardiovascular, CNS, metabolics, and oncology. (What we don't have are the failures adjusted for how many drugs were taken into the clinic in the first place in those areas). CNS and oncology are traditional high-risk areas, of course, and I think that a lot of the metabolics failures were in diabetes. That's a tough field - big market, but pretty well-served, making efficacy versus the standard of care a high bar to clear, and this while the FDA's safety requirements have gotten very stiff indeed.
But cardiovascular - that's interesting, since that area has traditionally had one of the better trial success rates. Perhaps that one is also suffering from the standard of care being pretty good (and often generic, or soon to be). So the high-success-rate mechanisms of the old days are well covered, leaving you to try your luck in the riskier ideas, while still trying to beat some pretty good (and pretty cheap) drugs. . .
Update: it's been suggested that some of these "strategic" failures are a sign of what happens during merger/acquisition activity. Could be, but you'd have to run these down company-by-company. I'll see if I can contact the authors of this paper about that idea. . .
+ TrackBacks (0) | Category: Cardiovascular Disease | Clinical Trials
May 9, 2011
Chemists who don't (or don't yet) work in drug discovery often wonder just what sort of chemistry we do over here. There are a lot of jokes about methyl-ethyl-butyl-futile, which have a bit of an edge to them for people just coming out of a big-deal total synthesis group in academia. They wonder if they're really setting themselves up for a yawn-inducing lab career of Suzuki couplings and amide formation, gradually becoming leery of anything that takes more than three steps to make.
Well, now there's some hard data on that topic. The authors took the combined publication output from their company, Pfizer, and GSK, as published in the Journal of Medicinal Chemistry, Bioorganic Med Chem Letters and Bioorganic and Medicinal Chemistry, starting in 2008. And they analyzed this set for what kinds of reactions were used, how long the synthetic routes were, and what kinds of compounds were produced. Their motivation?
. . .discussions with other chemists have revealed that many of our drug discovery colleagues outside the synthetic community perceive our syntheses to consist of typically six steps, predominantly composed of amine deprotections to facilitate amide formation reactions and Suzuki couplings to produce biaryl derivatives. These “typical” syntheses invariably result in large, ﬂat, achiral derivatives destined for screening cascades. We believed these statements to be misconceptions, or at the very least exaggerations, but noted there was little if any hard evidence in the literature to support our case.
Six steps? You must really want those compounds, eh? At any rate, their data set ended up with about 7300 reactions and about 3600 compounds. And some clear trends showed up. For example, nearly half the reactions involved forming carbon-heteroatom bonds, with half of those (22% of the total) being acylations, mostly amide formation. But only about one tenth of the reactions were C-C bond-forming steps (40% of those were Suzuki-style couplings and 18% were Sonogashira reactions). One-fifth were protecting group manipulations (almost entirely on COOH and amine groups), 8% were heterocycle formation, and everything else was well down into the single digits.
There are some interesting trends in those other reactions, though. Reduction reactions are much more common than oxidations - the frequency of nitro-to-amine reductions is one factor behind that, followed by reductions of other groups down to amines (few of these are typically run in the other direction). Among the oxidations, alcohol-to-aldehyde is the favorite. Outside of changes in oxidation state, alcohol-to-halide is the single most popular functional group transformation, followed by acid to acid chloride, both of which make sense from their reactivity in later steps.
Overall, the single biggest reaction is. . .N-acylation to an amide. So that part of the stereotype is true. At the bottom of the list, with only one reaction apiece, were N-alkylation of an aniline, benzylic/allylic oxidation, and alkene oxidation. Sulfonation, nitration, and the Heck reaction were just barely represented as well.
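Under the hood, the kind of breakdown the authors report is just a frequency count over reactions that have been tagged with a class. Here's a minimal sketch of that tally - the labels and data are invented for illustration, not the actual Pfizer/GSK set:

```python
# Frequency count over class-tagged reactions, the basic operation
# behind a med-chem reaction survey. Toy data, illustrative only.
from collections import Counter

reactions = [
    "N-acylation", "N-acylation", "Suzuki coupling", "Boc deprotection",
    "N-acylation", "nitro reduction", "Suzuki coupling",
    "heterocycle formation", "Sonogashira coupling", "Boc deprotection",
]

counts = Counter(reactions)
total = len(reactions)
for rxn, n in counts.most_common():
    print(f"{rxn:22s} {n:2d}  ({100 * n / total:.0f}%)")
```

With real data you'd be tagging thousands of literature reactions, and the hard part is the classification itself, not the counting.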
Analyzing the compounds instead of the reactions, they found that 99% of the compounds contained at least one aromatic ring (with almost 40% showing an aryl-aryl linkage) and over half have an amide, totals that aren't going to do much to dispel the stereotypes, either. The most popular heteroaromatic ring is pyridine, followed by pyrimidine and then the most popular of the five-membered ones, pyrazole. 43% have an aliphatic amine, which I can well believe (in fact, I'm surprised that it's not even higher). Most of those are tertiary amines, and the most-represented of those are pyrrolidines, followed closely by piperazines.
In other functionality, about a third of the compounds have at least one fluorine atom in them, and 30% have an aryl chloride. In contrast to the amides, there are only about 10% of the compounds with sulfonamides. 35% have an aryl ether (mostly methoxy), 10% have an aliphatic alcohol (versus only 5% with a phenol). The least-represented functional groups (of the ones that show up at all!) are carbonate, sulfoxide, alkyl chloride, and aryl nitro, followed by amidines and thiols. There's not a single alkyl bromide or aliphatic nitro in the bunch.
The last part of the paper looks at synthetic complexity. About 3000 of the compounds were part of traceable synthetic schemes, and most of these were 3 or 4 steps long. (The distribution has a pretty long tail, though, going out past 10 steps). Molecular weights tend to peak at between 350 and 550, and clogP peaks at around 3.5 to 5. These all sound pretty plausible to me.
Now that we've got a reasonable med-chem snapshot, though, what does it tell us? I'm going to use a whole different post to go into that, but I think that my take-away was that, for the most part, we have a pretty accurate mental picture of the sorts of compounds we make. But is that a good picture, or not?
+ TrackBacks (0) | Category: Chemical News | Drug Development | Life in the Drug Labs | The Scientific Literature
May 6, 2011
I'm never sure of how useful these rankings are, but here's Thomson Reuters' rankings of the top 100 chemists of the last ten years. This is based on publications and their impact/citation rate.
Looking over the list, I think that there are some artifacts in it, and boy, don't metal-organic frameworks and nanotech just rate like crazy? But it's an interesting starting point for discussion, especially when you note how (relatively) few organic and synthetic organic chemists make the upper reaches. Thoughts?
+ TrackBacks (0) | Category: The Scientific Literature
PNAS recently came out with a special concentration of chemistry papers, and they're worth a look. The theme is the synthesis of chemical probes, which makes me think that maybe Stuart Schreiber can guest-edit an issue of Vogue next. Today I'm going to highlight one from the Broad Institute on diversity-oriented synthesis (DOS), and next week I'll get to some more.
OK, that was something of a come-on for regular readers of this site, who now will be listening for the sound of grinding wheels coming up to speed, the better to sharpen the Sword of Justice. I've said unfriendly things in the past about DOS and some of the claims made for it. The point of much of this work has been lost on me, and I'm a pretty broad-minded guy. (That word, in this case, rhymes with "sawed", not with "load"). The first flush (no aspersions meant) of papers in the field might just as well have been titled "Check It Out: A Bunch of Huge Compounds No One's Ever Made Before", and were followed up, in my mind, by landmark publications such as "A Raving Heapload of Structures You Didn't Want in the First Place" and "Dang, There Are Even More Compounds With Molecular Weight 850 Than We Thought". But does it have to be this way?
Maybe not. As I mentioned earlier this year, people are starting to compare DOS and fragment-based approaches. (I think that Nature dialog could have been more useful than it was, but it was a start). And this latest paper continues that process. It's using DOS approaches to generate smaller molecular weight compounds - fragments, actually. They're not tiny ones, more medium-to-large size by fragment-based standards, but they're under 300 MW.
And, importantly, they're deliberately designed to be three-dimensional - lots of pyrrolidines and fused-ring compounds thereof, homopiperidines, spiro-lactams, and so on. Many of the early fragment libraries (and many of the commercial ones that you can still buy) are too invested in small, flat heterocycles. It's not that you can't get good leads from those things, but there's a lot more to life (and to molecular property space). This paper's collection is still a bit heavy on the alkenes to my taste (all those ring-closing metathesis reactions), but they've also reduced those for part of the library, which means that a screen of this collection will tell you if the olefin is a key structural feature or not. The alkenes themselves could serve as useful handles to build out from as well; a fragment hit with no ways to elaborate its structure isn't too useful.
As I said back in February, "I'd prefer that DOS collections not get quite so carried away, and explore new structural motifs more in the range of druglike space." That's exactly what this paper does, and I think its direction should be encouraged. This plays to the strengths of both approaches, rather than pushing either of them to the point where they break down.
+ TrackBacks (0) | Category: Chemical News | Drug Assays
May 5, 2011
I wrote a couple of years ago about corporate anthems, and my own horrifying experience with one. One of the comments mentioned Pfizer's "Excel and Exceed", which was said to have been pulled from YouTube, "possibly out of sheer embarrassment".
Well, it's back, courtesy of a disgruntled Pfizer employee. Some of you may well have already seen this one, but if you haven't, here's the work of someone with bad feelings about the company, time on their hands, and (most importantly) a copy of the uplifting theme song. I particularly like the Exubera inhaler jokes (bird feeder, etc.) But as for the music, well, you've been warned.
+ TrackBacks (0) | Category: Why Everyone Loves Us
So, when Lipitor goes generic later this year, it's Ranbaxy that's going to step in and make the big bucks for a few months, right? Well. . .there's room to wonder. Ranbaxy has had some severe regulatory problems, and other companies are trying to see if they can use those to their advantage. Fortune magazine has more:
You'd think that in this era of generic-drug dominance, making the transition to a nonbranded version of Pfizer's vaunted cholesterol-fighting statin would be smooth, or at least controlled. And indeed, that's precisely how it seemed -- until just a few months ago. Now the process appears to have unraveled, leaving serious questions about who will make the cheaper form of Lipitor, whether the price will really drop, and most disturbing of all, whether patients will be able to trust that the medication is safe. . .
+ TrackBacks (0) | Category: Regulatory Affairs
The "Opinionator" blog at the New York Times is trying here, but there's something not quite right. David Bornstein, in fact, gets off on the wrong foot entirely with this opening:
Consider two numbers: 800,000 and 21.
The first is the number of medical research papers that were published in 2008. The second is the number of new drugs that were approved by the Food and Drug Administration last year.
That’s an ocean of research producing treatments by the drop. Indeed, in recent decades, one of the most sobering realities in the field of biomedical research has been the fact that, despite significant increases in funding — as well as extraordinary advances in things like genomics, computerized molecular modeling, and drug screening and synthesization — the number of new treatments for illnesses that make it to market each year has flatlined at historically low levels.
Now, "synthesization" appears to be a new word, and it's not one that we've been waiting for, either. "Synthesis" is what we call it in the labs; I've never heard of synthesization in my life, and hope never to again. That's a minor point, perhaps, but it's an immediate giveaway that this piece is being written by someone who knows nothing about their chosen topic. How far would you keep reading an article that talked about mental health and psychosization? A sermon on the Book of Genesization? Right.
The point about drug approvals being flat is correct, of course, although not exactly news by now. But comparing it to the total number of medical papers published that same year is bizarre. Many of these papers have no bearing on the discovery of drugs, not even potentially. Even if you wanted to make such a comparison, you'd want to run the clock back at least twelve years to find the papers that might have influenced the current crop of drug approvals. All in all, it's a lurching start.
Things pick up a bit when Bornstein starts focusing on the Myelin Repair Foundation as an example of current ways to change drug discovery. (Perhaps it's just because he starts relaying information directly that he's been given?) The MRF is an interesting organization that's obviously working on a very tough problem - having tried to make neurons grow and repair themselves more than once in my career, I can testify that it's most definitely nontrivial. And the article tries to make a big distinction between the way that they're funding research and the "traditional NIH way".
The primary mechanism for getting funding for biomedical research is to write a grant proposal and submit it to the N.I.H. or a large foundation. Proposals are reviewed by scientists, who decide which ones are most likely to produce novel discoveries. Only a fraction get funded and there is little encouragement for investigators to coordinate research with other laboratories. Discoveries are kept quiet until they are published in peer-reviewed journals, so other scientists learn about them only after a delay of years. In theory, once findings are published, they will be picked up by pharmaceutical companies. In practice, that doesn’t happen nearly as often as it should.
Now we're back to what I'm starting to think of as the "translational research fallacy". I wrote about that here; it's the belief that there are all kinds of great ideas and leads in drug discovery that are sitting on the shelf, because no one in the industry has bothered to take a look. And while it's true that some things do slip past, I'm really not sure that I can buy into this whole worldview. My belief is that many of these things are not as immediately actionable as their academic discoverers believe them to be, for one thing. (And as for the ones that clearly are, those are worth starting a company around, right?) There's also the problem that not all of these discoveries can even be reproduced.
Bornstein's article does get it right about this topic, though:
What’s missing? For a discovery to reach the threshold where a pharmaceutical company will move it forward what’s needed is called “translational” research — research that validates targets and reduces the risk. This involves things like replicating and standardizing studies, testing chemicals (potentially millions) against targets, and if something produces a desired reaction, modifying compounds or varying concentration levels to balance efficacy and safety (usually in rats). It is repetitive, time consuming work — often described as “grunt work.” It’s vital for developing cures, but it’s not the kind of research that will advance the career of a young scientist in a university setting.
“Pure science is what you’re rewarded for,” notes Dr. Barres. “That’s what you get promoted for. That’s what they give the Nobel Prizes for. And yet developing a drug is a hundred times harder than getting a Nobel Prize. . .
That kind of research is what a lot of us spend all our days doing, and there's plenty of work to fill them. As for developing a drug being harder than getting a Nobel Prize, well, apples and oranges, but there's something to it, still. The drug will cost you a lot more money along the way, but with the potential of making a lot more at the end. Bornstein's article goes off the rails again, though, when he says that companies are reluctant to go into this kind of work when someone else owns the IP rights. That's technically true, but overall, the Bayh-Dole Act on commercialization of academic research (despite complications) has brought many more discoveries to light than it's hindered, I'd say. And he's also off base about how this is the reason that drug companies make "me too" compounds. No, it's not because we don't have enough ideas to work on, unfortunately. It's because most of them (and more over the years) don't go anywhere.
Bornstein's going to do a follow-up piece focusing more on the Myelin Repair people, so I'll revisit the topic then. What I'm seeing so far is an earnest, well-meaning attempt to figure out what's going on with drug discovery - but it's not a topic that admits of many easy answers. That's a problem for journalists, and a problem for those of us who do it, too.
Category: "Me Too" Drugs | Academia (vs. Industry) | Drug Development | Who Discovers and Why
May 4, 2011
Jim Edwards at Bnet has a report that GlaxoSmithKline doesn't seem to be doing quite as well selling Alli (orlistat) as they'd planned. This notwithstanding that their CEO, Andrew Witty, has said that they've had some interest from outside buyers for the franchise.
No, if you run the numbers, it's hard to see how GSK is making any money at all from the drug, especially if sales figures went down last year the way they'd gone down the year before. But then, it's not as if the company is telling us those numbers, which might tell you something right there. How could anyone have predicted such a thing?
Category: Diabetes and Obesity
There's a new study out looking at the prevalence of autism across different age groups across the United Kingdom. Since autism shows up in childhood, if the rate of its occurrence had changed over the years, that would be expected to be preserved in the population as you move up in years. But it doesn't.
It absolutely doesn't. Despite report after report of an "autism epidemic", what this study supports is the idea of an increase in diagnosis, not in the underlying condition. None of the adults surveyed who fit the autism criteria had any idea that they did so: they never knew that they were autistic, and had never been diagnosed. (I've no doubt, though, that they or their neighbors were aware of their seemingly eccentric personalities). These people also turned out to be generally socially and economically disadvantaged, which given what they've had to work through, I can well believe.
But there was no change related to age group. The demographics were what you find in current children: about 1% of the total population, males much more common than females. No change. No sign of an epidemic. But it won't change a thing for those people who are convinced that one exists; they'll already be out there today telling everyone about the flaws in this study, its biased nature, its gaps and omissions. Dark forces will be alluded to, huge conspiracies - and if you doubt that, just watch the comments to this post, because I'll probably attract some of these people, too.
Category: Autism
May 3, 2011
This piece is too short to excerpt, so just Read The Whole Thing, as they say. The headline is "Pfizer Breaks Psychological Need To Always Seek FDA's Approval". And yes, it's The Onion.
Category: Regulatory Affairs
So the results are in from that Lucentis-vs-Avastin comparison (known as CATT, Comparison of AMD Treatment Trials), and I'd say that they came out the way people were expecting: monthly injections of either antibody give the same end results, as measured by vision testing. There are some slight differences between the two when retinal thickness is measured, but that hasn't shown up in the end result (visual impairment). There's another year of follow-up ongoing, and perhaps that will show something (or perhaps not). For now, the outcome appears to be the same.
Another interesting feature of this study is that it compared regular monthly treatment with either drug to an "as-needed" dosing schedule. In this case Lucentis performed equally well by either schedule, with monthly Avastin equivalent, but (interestingly) as-needed Avastin dosing was, in fact, inferior. These protocols need fewer injections (and less Lucentis), but more imaging of the retina, along with more judgment calls on the part of physicians, so the net cost savings remain to be seen. Savings on injections into the eyes, though, would surely be welcome - it's too bad that Avastin didn't perform as well that way.
As the editorial in the NEJM summed it up:
Health care providers and payers worldwide will now have to justify the cost of using ranibizumab. Regulators in certain countries will be forced to reconsider their policies that make it illegal to use drugs off-label, particularly when so many of their citizens cannot afford ranibizumab. The CATT data support the continued global use of intravitreal bevacizumab as an effective, low-cost alternative to ranibizumab.
The only thing that could flip this around is if the second year of CATT produces some new data, or if the ongoing European trials turn up some safety data that this study wasn't powered to pick up.
More here at the In Vivo Blog. BioCentury also did a good write-up on this one for their subscribers - they interviewed a number of ophthalmology practitioners, and the voting looks solidly in favor of using the much less expensive Avastin. One South Carolina practice reported that, because of the state's sales tax on physician-administered drugs, they pay $140 in tax for every injection of Lucentis, while getting reimbursed $120 by Medicare for doing it, which doesn't sound like much of a way to make a living. Still, as the newsletter points out, off-label Avastin use (which would be legal) involves repackaging what was a single-dose container, and that part is technically in violation of the law. But the agency doesn't want to get in the way of freedom of medical practice, and seems to be letting that trump the repackaging/compounding concerns.
Category: Clinical Trials | Drug Prices
Now here's a comparison that you don't get to see very often: how much do two large pharma compound collections overlap? There's a paper going into just that question in the wake of the 2006-2007 merger between Bayer and Schering AG. (By two coincidences, this paper is in the same feed as the one that I highlighted yesterday, and that merger is the one that closed my former research site out from under me).
Pre-merger, Bayer had over two million structures in its corporate collection, and Schering AG had just under 900,000. Both companies had undertaken recent library clean-up programs, clearing out undesirable compounds and adding both purchased and in-house diversity structures. Interestingly, it turns out that just under 50,000 structures were duplicated across both collections, about 1.5% of the total. Almost all of these duplicates were purchased compounds; only 2,000 of them had been synthesized in-house. And even most of those turned out to be from combichem programs or were synthetic intermediates - there was almost no overlap at all in submitted med-chem compounds.
Various measures of structural complexity and similarity backed up those numbers. The two collections were surprisingly different, which might well have something to do with the different therapeutic areas the two companies had focused on over the years. The Bayer compounds tended to run higher in molecular weight, rotatable bonds, and clogP, but then, a higher percentage of the Schering AG compounds were purchased with such filters already in place. As for undesirable structures, only about 2% of the Bayer collection and 1% of the Schering AG compounds were considered to be real offenders. I hope none of those were mine; I contributed quite a few compounds to the collection over the years, but they were, for the most part, relatively sane.
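The overlap calculation itself is straightforward in principle: once each collection has been reduced to some canonical structure key (InChIKeys or canonical SMILES, say - the paper doesn't say which representation was used), finding the duplicates is just a set intersection. Here's a minimal sketch with toy placeholder keys standing in for the real corporate collections:

```python
# Hedged sketch: overlap between two compound collections, assuming each
# compound has already been reduced to a canonical structure key.
# The keys below are toy placeholders, not real corporate data.

def collection_overlap(lib_a, lib_b):
    """Return the shared keys and their fraction of the combined unique total."""
    a, b = set(lib_a), set(lib_b)
    shared = a & b                      # structures present in both collections
    combined_unique = len(a | b)        # total distinct structures overall
    return shared, len(shared) / combined_unique

bayer_like = {"KEY-001", "KEY-002", "KEY-003", "KEY-004"}
schering_like = {"KEY-003", "KEY-004", "KEY-005"}

shared, fraction = collection_overlap(bayer_like, schering_like)
print(sorted(shared))      # ['KEY-003', 'KEY-004']
print(round(fraction, 3))  # 0.4
```

At the scale of two or three million structures, the hard part isn't the set arithmetic - it's the structure standardization (tautomers, salts, stereochemistry) that decides whether two registry entries count as "the same compound" in the first place.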
The paper's conclusion can be read in more than one way:
Furthermore, an argument that might support mergers and acquisitions (M&A) in the pharmaceutical sector can be harvested from this analysis. Currently, M&As in this industry are driven by product portfolios rather than by drug discovery competencies. With the current need for innovative drugs, R&D skills of pharmaceutical companies might again become more important. The technological complementarity of two companies is often quoted as an important factor for successful M&As in the long term. If compound libraries are regarded as a kind of company knowledge-base, then a high degree of complementarity is clearly desirable and would improve drug discovery skills. Based on our data, the libraries of BHC and SAG are structurally complementary and fit together well in terms of their physico-chemical properties. However, it remains to be proven if this leads to additional innovative products.
Not so sure about that, myself. I don't know how good a proxy the compound collections are, since they represent a historical record as much as they do the current state of a company. And that paragraph glosses over the effect of mergers on R&D itself - it's not like just adding pieces together, that's for sure. The track record for mergers generating "additional innovative products" is not good. We'll see how the Bayer-Schering one holds up. . .
Category: Business and Markets | Drug Assays | Drug Industry History
May 2, 2011
I don't know if this DOI link resolves yet - or if the problem will be fixed by the time it does. But for now, in the "Articles in Press" queue over at Drug Discovery Today, they have one whose title reads like this:
Utility of protein structures in overcoming ADMET-related issues of drug-like compounds[1. AU: You use ADMET-relevant throughout manuscript. Would you like to change this to ADMET-relevant too?]
Well, would you? They're waiting for someone to answer them, apparently.
Category: The Scientific Literature
Matthew Herper has a good piece over in Forbes on the speculation that Pfizer might devolve. Here's his breakdown of how five (or so) separate Pfizer-derived companies could be worth substantially more than the current entity.
But, as he notes, we're talking about several different things here. Were I a long-suffering Pfizer shareholder (which, outside of index funds, I have tried not to be), my perspective would be similar to this one. It would all be about the stock price:
“The stock can only go up if they break up the company and cut research and development,” says Jami Rubin, a pharmaceuticals analyst at Goldman Sachs who has been pushing a Pfizer breakup for three years. “When Read was announced as the new chief executive Wall Street was skeptical, but he’s listening and he’s responding to what we have been saying. My sense is he’s already made up his mind.”
As an observer of (and participant in) the drug industry, though, I have other views, and they're more like these:
Not everyone agrees that a breakup is the right fix for Pfizer, which has struggled to invent new blockbusters even as it acquired Warner-Lambert for $114 billion in 2000, Pharmacia for $60 billion in 2003 and Wyeth for $68 billion in 2009. Those big mergers sidetracked its researchers and salespeople and created baroque management structures—at one point there were 17 layers between the chief executive and the lowest employee. Critics say undoing them risks similar distraction. As one fund manager said, a breakup would just mean the investment bankers and lawyers who got rich putting Pfizer together will now get richer taking it apart, without improving its ability to invent and market drugs, already a struggle. “I think it’s financial engineering. I think it makes the stock more valuable,” says Les Funtleyder, a fund manager at Miller Tabak. “From a strategic point of view, would it solve the problem? No.”
That's the problem, all right. I've made this point in various ways over the years, but let me be as blunt as possible: I think that Pfizer's consolidation, both of large companies and of small ones, has been a disaster for drug discovery in general. Just the sheer loss of intellectual diversity is enough to call it that. And the resulting huge, ugly omelet cannot be unscrambled. The disruptions in all those research organizations can never be undone, not without a fleet of fully powered time machines.
It will give many people (I'm one) some cold satisfaction to see the company reverse course, admit that the mega-merger strategy has been a mistake all along, and painfully retrace its steps. But that's not much compensation, is it? Not compared to what's been lost.
Category: Business and Markets | Drug Industry History
John Donne's observation does not, it seems, hold for every case. I don't feel particularly diminished at all.
Category: Current Events