About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
The Central Nervous System
April 8, 2014
Here's an article by Steve Perrin, at the ALS Therapy Development Institute, and you can tell that he's a pretty frustrated guy. With good reason.
That chart shows why. Those are attempted replicates of putative ALS drugs, and you can see that there's a bit of a discrepancy here and there. One problem is poorly run mouse studies, and the TDI has been trying to do something about that:
After nearly a decade of validation work, the ALS TDI introduced guidelines that should reduce the number of false positives in preclinical studies and so prevent unwarranted clinical trials. The recommendations, which pertain to other diseases too, include: rigorously assessing animals' physical and biochemical traits in terms of human disease; characterizing when disease symptoms and death occur and being alert to unexpected variation; and creating a mathematical model to aid experimental design, including how many mice must be included in a study. It is astonishing how often such straightforward steps are overlooked. It is hard to find a publication, for example, in which a preclinical animal study is backed by statistical models to minimize experimental noise.
All true, and we'd be a lot better off if such recommendations were followed more often. Crappy animal data is far worse than no animal data at all. But the other part of the problem is that the mouse models of ALS aren't very good:
. . .Mouse models expressing a mutant form of the RNA binding protein TDP43 show hallmark features of ALS: loss of motor neurons, protein aggregation and progressive muscle atrophy.
But further study of these mice revealed key differences. In patients (and in established mouse models), paralysis progresses over time. However, we did not observe this progression in TDP43-mutant mice. Measurements of gait and grip strength showed that their muscle deficits were in fact mild, and post-mortem examination found that the animals died not of progressive muscle atrophy, but of acute bowel obstruction caused by deterioration of smooth muscles in the gut. Although the existing TDP43-mutant mice may be useful for studying drugs' effects on certain disease mechanisms, a drug's ability to extend survival would most probably be irrelevant to people.
A big problem is that the recent emphasis on translational research in academia is going to land many labs right into these problems. As the rest of that Nature article shows, the ways for a mouse study to go wrong are many, various, and subtle. If you don't pay very close attention, and have people who know what to pay attention to, you could be wasting time, money, and animals to generate data that will go on to waste still more of all three. I'd strongly urge anyone doing rodent studies, and especially labs that haven't done or commissioned very many of them before, to read up on these issues in detail. It slows things down, true, and it costs money. But there are worse things.
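The sample-size step in those guidelines is worth making concrete. Here's a minimal power-calculation sketch for a two-group mouse survival study, using the standard normal-approximation formula; the effect size and noise figures are invented for illustration, not taken from the ALS TDI's work:

```python
import math

def mice_per_group(delta, sigma, z_alpha=1.96, z_beta=0.84):
    """Two-sample comparison, normal approximation: animals needed per
    group to detect a mean survival difference `delta` (days) given a
    between-animal standard deviation `sigma` (days), at two-sided
    alpha = 0.05 (z = 1.96) with 80% power (z = 0.84)."""
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return math.ceil(n)

# Hypothetical numbers: detecting a 10-day survival benefit against
# 15 days of animal-to-animal noise takes far more mice than the
# handful per group that underpowered studies often enroll.
print(mice_per_group(delta=10, sigma=15))  # 36 per group
```

Run the numbers before the study, not after, and the "unexpected variation" the quote mentions shows up as a larger sigma, and a larger n.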
+ TrackBacks (0) | Category: Animal Testing | The Central Nervous System
March 28, 2014
Huntington's is a terrible disease. It's the perfect example of how genomics can only take you so far. We've known since 1993 what the gene is that's mutated in the disease, and we know the protein that it codes for (Huntingtin). We even know what seems to be wrong with the protein - it has a repeating chain of glutamines on one end. If your tail of glutamines is less than about 35 repeats, then you're not going to get the disease. If you have 36 to 39 repeats, you are in trouble, and may very well come down with the less severe end of Huntington's. If there are 40 or more, doubt is tragically removed.
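Those repeat-length thresholds amount to a simple lookup. A sketch, using the cutoffs from the paragraph above (the function name is mine):

```python
def huntingtons_risk(cag_repeats):
    """Classify Huntington's risk from the polyglutamine (CAG) repeat
    count in the huntingtin gene, per the thresholds described above."""
    if cag_repeats <= 35:
        return "unaffected"
    elif cag_repeats <= 39:
        return "reduced penetrance"   # may develop the less severe form
    else:
        return "full penetrance"      # 40 or more: doubt is removed

print(huntingtons_risk(34))  # unaffected
print(huntingtons_risk(38))  # reduced penetrance
print(huntingtons_risk(42))  # full penetrance
```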
So we can tell, with great precision, if someone is going to come down with Huntington's, but we can't do a damn thing about it. That's because despite a great deal of work, we don't really understand the molecular mechanism at work. This mutated gene codes for this defective protein, but we don't know what it is about that protein that causes particular regions of the brain to deteriorate. No one knows what all of Huntingtin's functions are, and not for lack of trying, and multiple attempts to map out its interactions (and determine how they're altered by a too-long N-terminal glutamine tail) have not given a definite answer.
But maybe, as of this week, that's changed. Solomon Snyder's group at Johns Hopkins has a paper out in Nature that suggests an actual mechanism. They believe that mutant Huntingtin binds (inappropriately) a transcription factor called "specificity protein 1", which is known to be a major player in neurons. Among other things, it's responsible for initiating transcription of the gene for an enzyme called cystathionine γ-lyase. That, in turn, is responsible for the last step in cysteine biosynthesis, and put together, all this suggests a brain-specific depletion of cysteine. (Update: this could have numerous downstream consequences - this is the pathway that produces hydrogen sulfide, which the Snyder group has shown is an important neurotransmitter (one of several they've discovered), and it's also involved in synthesizing glutathione. Cysteine itself is, of course, often a crucial amino acid in many protein structures as well.)
Snyder's group is proposing this as the actual mechanism of Huntington's, and they have shown, in human tissue culture and in mouse models of the disease, that supplementation with extra cysteine can stop or reverse the cellular signs of the disease. This is a very plausible theory (it seems to me), and the paper makes a very strong case for it. It should lead to immediate consequences in the clinic, and in the labs researching possible therapies for the disease. And one hopes that it will lead to immediate consequences for Huntington's patients themselves. If I knew someone with the Huntingtin mutation, I believe that I would tell them to waste no time taking cysteine supplements, in the hopes that some of it will reach the brain.
Category: Biological News | The Central Nervous System
March 14, 2014
Here's a brave attempt to look for genetic markers of bipolar disorder. The authors studied 388 Old Order Amish sufferers, doing thorough SNP analysis on the lot and total sequencing on fifty of them. There were many parent-child relationships in the set, which gave a chance for further discrimination. And the result:
. . .despite the in-depth genomic characterization of this unique, large and multigenerational pedigree from a genetic isolate, there was no convergence of evidence implicating a particular set of risk loci or common pathways. The striking haplotype and locus heterogeneity we observed has profound implications for the design of studies of bipolar and other related disorders.
If you look around the literature, you'll find numerous smaller studies also trying to find genetic markers for bipolar disorder, and many of these propose possible candidate loci. But very few of them seem to agree, and this new study doesn't seem to confirm any of them. The authors hold out some hope for still larger cohorts and more comprehensive sequencing, and that's certainly the way to go. But if there were anything close to a simple genetic signature for bipolar disorder, it would have been found by now. Like many other diseases (and not just those of the central nervous system), it's probably a phenotype that can be realized by a whole range of mechanisms, an alternate state of physiology that the system can slip into through a combination of genetic and environmental effects. And while there may not be a thousand ways to get there, there sure aren't just a couple.
Dealing with these "network diseases" is going to keep us busy for quite a while to come. The best hope, as far as I can see, is for less complexity downstream. Maybe these various susceptibilities and tendencies all slide towards a similar disease process which can be modified. Looking back to the genetic causes for understanding sure hasn't worked out so far; maybe advances in studying brain function and patterns of neurotransmission will shed some light. Although if you're having to look to that area to bail you out. . .
Category: The Central Nervous System
December 5, 2013
I've been meaning to link to this piece by Lauren Wolf in C&E News on the connections between Parkinson's disease and environmental exposure to mitochondrial toxins. (PDF version available here). Links between environmental toxins and disease are drawn all the time, of course, sometimes with very good reason, but often when there seems to be little evidence. In this case, though, since we have the incontrovertible example of MPTP to work from, things have to be taken seriously. Wolf's article is long, detailed, and covers a lot of ground.
The conclusion seems to be that some people may well be genetically more susceptible to such exposures. A lot of people with Parkinson's have never really had much pesticide exposure, and a lot of people who've worked with pesticides never show any signs of Parkinson's. But there could well be a vulnerable population that bridges these two.
Category: The Central Nervous System | Toxicology
December 3, 2013
The New Yorker has an article about Merck's discovery and development of suvorexant, their orexin antagonist for insomnia. It also goes into the (not completely reassuring) history of zolpidem (known under the brand name of Ambien), which is the main (and generic) competitor for any new sleep drug.
The piece is pretty accurate about drug research, I have to say:
John Renger, the Merck neuroscientist, has a homemade, mocked-up advertisement for suvorexant pinned to the wall outside his ground-floor office, on a Merck campus in West Point, Pennsylvania. A woman in a darkened room looks unhappily at an alarm clock. It’s 4 a.m. The ad reads, “Restoring Balance.”
The shelves of Renger’s office are filled with small glass trophies. At Merck, these are handed out when chemicals in drug development hit various points on the path to market: they’re celebrations in the face of likely failure. Renger showed me one. Engraved “MK-4305 PCC 2006,” it commemorated the day, seven years ago, when a promising compound was honored with an MK code; it had been cleared for testing on humans. Two years later, MK-4305 became suvorexant. If suvorexant reaches pharmacies, it will have been renamed again—perhaps with three soothing syllables (Valium, Halcion, Ambien).
“We fail so often, even the milestones count for us,” Renger said, laughing. “Think of the number of people who work in the industry. How many get to develop a drug that goes all the way? Probably fewer than ten per cent.”
I well recall when my last company closed up shop - people in one wing were taking those things and lining them up out on a window shelf in the hallway, trying to see how far they could make them reach. Admittedly, they bulked out the lineup with Employee Recognition Awards and Extra Teamwork awards, but there were plenty of oddly shaped clear resin thingies out there, too.
The article also has a good short history of orexin drug development, and it happens just the way I remember it - first, a potential obesity therapy, then sleep disorders (after it was discovered that a strain of narcoleptic dogs lacked functional orexin receptors).
Mignot recently recalled a videoconference that he had with Merck scientists in 1999, a day or two before he published a paper on narcoleptic dogs. (He has never worked for Merck, but at that point he was contemplating a commercial partnership.) When he shared his results, it created an instant commotion, as if he’d “put a foot into an ants’ nest.” Not long afterward, Mignot and his team reported that narcoleptic humans lacked not orexin receptors, like dogs, but orexin itself. In narcoleptic humans, the cells that produce orexin have been destroyed, probably because of an autoimmune response.
Orexin seemed to be essential for fending off sleep, and this changed how one might think of sleep. We know why we eat, drink, and breathe—to keep the internal state of the body adjusted. But sleep is a scientific puzzle. It may enable next-day activity, but that doesn’t explain why rats deprived of sleep don’t just tire; they die, within a couple of weeks. Orexin seemed to turn notions of sleep and arousal upside down. If orexin turns on a light in the brain, then perhaps one could think of dark as the brain’s natural state. “What is sleep?” might be a less profitable question than “What is awake?”
There's also a lot of good coverage of the drug's passage through the FDA, particularly the hearing where the agency and Merck argued about the dose. (The FDA was inclined towards a lower 10-mg tablet, but Merck feared that this wouldn't be enough to be effective in enough patients, and had no desire to launch a drug that would get the reputation of not doing very much).
A few weeks later, the F.D.A. wrote to Merck. The letter encouraged the company to revise its application, making ten milligrams the drug’s starting dose. Merck could also include doses of fifteen and twenty milligrams, for people who tried the starting dose and found it unhelpful. This summer, Rick Derrickson designed a ten-milligram tablet: small, round, and green. Several hundred of these tablets now sit on shelves, in rooms set at various temperatures and humidity levels; the tablets are regularly inspected for signs of disintegration.
The F.D.A.’s decision left Merck facing an unusual challenge. In the Phase II trial, this dose of suvorexant had helped to turn off the orexin system in the brains of insomniacs, and it had extended sleep, but its impact didn’t register with users. It worked, but who would notice? Still, suvorexant had a good story—the brain was being targeted in a genuinely innovative way—and pharmaceutical companies are very skilled at selling stories.
Merck has told investors that it intends to seek approval for the new doses next year. I recently asked John Renger how everyday insomniacs would respond to ten milligrams of suvorexant. He responded, “This is a great question.”
There are, naturally, a few shots at the drug industry throughout the article. But it's not like our industry doesn't deserve a few now and then. Overall, it's a good writeup, I'd say, and gets across the later stages of drug development pretty well. The earlier stages are glossed over a bit, by comparison. If the New Yorker would like for me to tell them about those parts sometime, I'm game.
Category: Clinical Trials | Drug Development | Drug Industry History | The Central Nervous System
November 8, 2013
So Bristol-Myers Squibb did indeed re-org itself yesterday, with the loss of about 75 jobs (and the shifting around of 300 more, which will probably result in some job losses as well, since not everyone is going to be able to do that). And they announced that they're getting out of two therapeutic areas, diabetes and neuroscience.
Those would be for very different reasons. Neuro is famously difficult and specialized. There are huge opportunities there, but they're opportunities because no one's been able to do much with them, for a lot of good reasons. Some of the biggest tar pits of drug discovery are to be found there (Alzheimer's, chronic pain), and even the diseases for which we have some treatments are near-total black boxes, mechanistically (schizophrenia, epilepsy and seizures). The animal models are mysterious and often misleading, and the clinical trials for the biggest diseases in this area are well-known to be expensive and tricky to run. You've got your work cut out for you over here.
Meanwhile, the field of diabetes and metabolic disorders is better served. For type I diabetes, the main thing you can do, short of finding ever more precise ways of dosing insulin, is to figure out how to restore islet function and cure it, and that's where all the effort seems to be going. For type II diabetes, which is unfortunately a large market and getting larger all the time, there are a number of therapeutic options. And while there's probably room for still more, the field is getting undeniably a bit crowded. Add that to the very stringent cardiovascular safety requirements, and you're looking at a therapeutic that's not as attractive for new drug development as it was ten or fifteen years ago.
So I can see why a company would get out of these two areas, although it's also easy to think that it's a shame for this to happen. Neuroscience is in a particularly tough spot. The combination of uncertainty and big opportunities would tend to draw a lot of risk-taking startups to the area, but the massive clinical trials needed make it nearly impossible for a small company to get serious traction. So what we've been seeing are startups that, even more than other areas, are focused on getting to the point that a larger company will step in to pay the bills. That's not an abnormal business model, but it has its hazards, chief among them the temptation to run what trials you can with a primary goal of getting shiny numbers (and shiny funding) rather than finding out whether the drug has a more solid chance of working. Semi-delusional Phase II trials are a problem throughout the industry, but more so here.
Category: Business and Markets | Diabetes and Obesity | Drug Development | The Central Nervous System
October 29, 2013
Medicinal chemists talk a lot more about residence time and off rate than they used to. It's become clear that (at least in some cases) a key part of a drug's action is its kinetic behavior, specifically how quickly it leaves its binding site. You'd think that this would correlate well with its potency, but that's not necessarily so. Binding constants are a mix of on- and off-rates, and you can get to the same number by a variety of different means. Only if you're looking at very similar compounds with the same binding modes can you expect the correlation your intuition is telling you about, and even then you don't always get it.
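That point, that very different kinetics can produce the same equilibrium binding constant, is easy to show numerically. The rate constants below are invented for illustration; the relationships Kd = koff/kon and residence time = 1/koff are the standard ones:

```python
# Two hypothetical ligands with the same equilibrium affinity but
# very different kinetics. Kd = koff / kon; residence time = 1 / koff.
ligands = {
    #        kon (1/(M*s))  koff (1/s)
    "fast":  (1e7,          1e-2),
    "slow":  (1e5,          1e-4),
}

for name, (kon, koff) in ligands.items():
    kd_nM = koff / kon * 1e9          # dissociation constant in nM
    residence_min = 1 / koff / 60     # residence time in minutes
    print(f"{name}: Kd = {kd_nM:.1f} nM, residence = {residence_min:.1f} min")
```

Both come out at 1 nM, but one sits on the target for minutes and the other for hours, which is exactly why a binding constant alone doesn't tell you what your intuition wants it to.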
There's a new paper in J. Med. Chem. from a team at Boehringer Ingelheim that takes a detailed look at this effect. The authors are working out the binding of the muscarinic receptor ligand tiotropium, which has been around a long time. (Boehringer's efforts in the muscarinic field have been around a long time, too, come to think of it). Tiotropium binds to the m2 subtype with a Ki of 0.2 nM, and to the m3 subtype with a Ki of 0.1 nM. But the compound has a much slower off rate on the m3 subtype, enough to make it physiologically distinct as an m3 ligand. Tiotropium is better known by its brand name Spiriva, and if its functional selectivity at the m3 receptors in the lungs wasn't pretty tight, it wouldn't be a drug. By carefully modifying its structure and introducing mutations into the receptor, this group hoped to figure out just why it's able to work the way it does.
The static details of tiotropium binding are well worked out - in fact, there's a recent X-ray structure, adding to the list of GPCRs that have been investigated by X-ray crystallography. There are plenty of interactions, as those binding constants would suggest:
The orthosteric binding sites of hM3R and hM2R are virtually identical. The positively charged headgroup of the antimuscarinic agent binds to (in the class of amine receptors highly conserved) Asp^3.32 (D148^3.32) and is surrounded by an aromatic cage consisting of Y149^3.33, W504^6.48, Y507^6.51, Y530^7.39, and Y534^7.43. In addition to that, the aromatic substructures of the ligands dig into a hydrophobic region close to W200^4.57 and the hydroxy groups, together with the ester groups, are bidentally interacting with N508^6.52, forming close to optimal double hydrogen bonds. . .
The similarity of these binding sites was brought home to me personally when I was working on making selective antagonists of these myself. (If you want a real challenge, try differentiating m2 and m4). The authors point out, though, and crucially, that if you want to understand how different compounds bind to these receptors, the static pictures you get from X-ray structures are not enough. Homology modeling helps a good deal, but only if you take its results as indicators of dynamic processes, and not just swapping out residues in a framework.
Doing point-by-point single changes in both the tiotropium structure and the receptor residues lets you use the kinetic data to your advantage. Such similar compounds should have similar modes of dissociation from the binding site. You can then compare off-rates to the binding constants, looking for the ones that deviate from the expected linear relationship. What they find is that the first event when tiotropium leaves the binding site is the opening of the aromatic cage mentioned above. Mutating any of these residues led to a big effect on the off-rate compared to the effect on the binding constant. Mutations further up along the tunnel leading to the binding site behaved in the same way: pretty much identical Ki values, but enhanced off-rates.
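That deviation analysis can be sketched in a few lines. For closely related mutants, Ki and koff should move together (Ki roughly tracks koff/kon with a shared kon); a mutant whose off-rate jumps while its Ki barely moves is flagged as touching the exit pathway rather than the bound state. All numbers here are invented for illustration:

```python
# Wild-type and hypothetical mutant data: Ki (nM) and koff (1/s).
wild_type = {"ki_nM": 0.1, "koff": 1e-4}

mutants = {
    "binding-site": {"ki_nM": 1.0, "koff": 1e-3},  # Ki and koff move together
    "exit-tunnel":  {"ki_nM": 0.1, "koff": 1e-3},  # koff jumps, Ki unchanged
}

for name, m in mutants.items():
    ki_fold = m["ki_nM"] / wild_type["ki_nM"]
    koff_fold = m["koff"] / wild_type["koff"]
    gating = koff_fold / ki_fold   # > 1: kinetics changed more than affinity
    tag = "exit-pathway residue" if gating > 3 else "affinity residue"
    print(f"{name}: Ki x{ki_fold:.0f}, koff x{koff_fold:.0f} -> {tag}")
```

The threshold of 3 here is arbitrary; the paper's point is the qualitative split between mutations that shift affinity and kinetics together and those that shift only the kinetics.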
These observations, the paper says with commendable honesty, don't help the medicinal chemists all that much in designing compounds with better kinetics. You can imagine finding a compound that takes better advantage of this binding (maybe), but you can also imagine spending a lot of time trying to do that. The interaction with the asparagine at residue 508 is more useful from a drug design standpoint:
Our data provide evidence that the double hydrogen interaction of N508^6.52 with tiotropium has a crucial influence on the off-rates beyond its influence on Ki. Mutation of N508^6.52 to alanine accelerates the dissociation of tiotropium more than 1 order of magnitude than suggested by the Ki. Consequently, tiotropium derivatives devoid of the interacting hydroxy group show overproportionally short half-lives. Microsecond MD simulations show that this double hydrogen bonded interaction hinders tiotropium from moving into the exit channel by reducing the frequency of tyrosine-lid opening movements. Taken together, our data show that the interaction with N508^6.52 is indeed an essential prerequisite for the development of slowly dissociating muscarinic receptor inverse agonists. This hypothesis is corroborated by the a posteriori observation that the only highly conserved substructure of all long-acting antimuscarinic agents currently in clinical development or already on the market is the hydroxy group.
But the extracellular loops also get into the act. The m2 subtype's nearby loop seems to be more flexible than the one in m3, and there's a lysine in the m3 that probably contributes some electrostatic repulsion to the charged tiotropium as it tries to back out of the protein. That's another effect that's hard to take advantage of, since the charged region of the molecule is a key for binding down in the active site, and messing with it would probably not pay dividends.
But there are some good take-aways from this paper. The authors note that the X-ray structure, while valuable, seems to have largely confirmed the data generated by mutagenesis (as well it should). So if you're willing to do lots of point mutations, on both your ligand and your protein, you can (in theory) work some of these fine details out. Molecular dynamics simulations would seem to be of help here, too, also in theory. I'd be interested to hear if people can corroborate that with real-world experience.
Category: Drug Assays | In Silico | Pharmacokinetics | The Central Nervous System
October 23, 2013
G-protein coupled receptors are one of those areas that I used to think I understood, until I understood them better. These things are very far from being on/off light switches mounted in drywall - they have a lot of different signaling mechanisms, and none of them are simple, either.
One of those that's been known for a long time, but remains quite murky, is allosteric modulation. There are many compounds known that clearly are not binding at the actual ligand site in some types of GPCR, but (equally clearly) can affect their signaling by binding to them somewhere else. So receptors have allosteric sites - but what do they do? And what ligands naturally bind to them (if any)? And by what mechanism does that binding modulate the downstream signaling, and are there effects that we can take advantage of as medicinal chemists? Open questions, all of them.
There's a new paper in Nature that tries to make sense of this, and it tries by what might be the most difficult way possible: computational modeling. Not all that long ago, this might well have been a fool's errand. But we're learning a lot about the details of GPCR structure from the recent X-ray work, and we're also able to handle a lot more computational load than we used to. That's particularly true if we are David Shaw and the D. E. Shaw company, part of the not-all-that-roomy Venn diagram intersection of quantitative Wall Street traders and computational chemists. Shaw has the resources to put together some serious hardware and software, and a team of people to make sure that the processing units get frequent exercise.
They're looking at the muscarinic M2 receptor, an old friend of mine for which I produced I-know-not-how-many antagonist candidates about twenty years ago. The allosteric region is up near the surface of the receptor, about 15 Å from the acetylcholine binding site, and it looks like all the compounds that bind up there do so via cation/pi interactions with aromatic residues in the protein. (That holds true for compounds as diverse as gallamine, alcuronium, and strychnine, as well as the one shown in the figure.) This is very much in line with SAR and mutagenesis results over the years, but there are some key differences. Many people had thought that the aromatic groups of the ligands and the receptors must have been interacting, but this doesn't seem to be the case. There also don't seem to be any interactions between the positively charged parts of the ligands and anionic residues on nearby loops of the protein (which is a rationale I remember from my days in the muscarinic field).
The simulations suggest that the two sites are very much in communication with each other. The width and conformation of the extracellular vestibule space can change according to what allosteric ligand occupies it, and this affects whether the effect on regular ligand binding is positive or negative, and to what degree. There can also, in some cases, be direct electrostatic interactions between the two ligands, for the larger allosteric compounds. I was very glad to see that the Shaw group's simulations suggested some experiments: one set with modified ligands, which would be predicted to affect the receptor in defined ways, and another set with point mutations in the receptor, which would be predicted to change the activities of the known ligands. These experiments were carried out by co-authors at Monash University in Australia, and (gratifyingly) seem to confirm the model. Too many computational papers (and to be fair, too many non-computational papers) don't get quite to the "We made some predictions and put our ideas to the test" stage, and I'm glad this one does.
Category: Biological News | In Silico | The Central Nervous System
October 21, 2013
The orphan-drug model is a popular one in the biopharma business these days. But like every other style of business, it has something-for-nothing artists waiting around it. Take a look at this article by Adam Feuerstein on Catalyst Pharmaceuticals, and see what category you think they belong in.
They're developing a compound called Firdapse for Lambert-Eaton Myasthenic Syndrome (LEMS), a rare neuromuscular disorder. It's caused by an autoimmune response to one set of voltage-gated calcium channels in the peripheral nervous system. Right now, the treatments for the condition that seem to provide much benefit are intravenous immunoglobin and 3,4-diaminopyridine (DAP). That latter compound is a potassium channel blocker that allows calcium to accumulate intracellularly in neurons and thus counteracts some of the loss of function in the system.
DAP is not an FDA-approved treatment, but it's officially under study at a number of medical centers, and the FDA is allowing it to be given to patients under a compassionate-use protocol. It's supplied, free of charge, by a small company in New Jersey, Jacobus Pharmaceuticals, who got into the area through a request from the Muscular Dystrophy Association. So how well does Firdapse work compared to this existing drug? Pretty much the same, because it's the same damn compound.
Yep, this is another one of those unexpected-regulatory-effects stories, such as happened with colchicine and with hydroxyprogesterone. The FDA has wanted to get as many therapies as possible through the actual regulatory process, and has provided a market-exclusivity incentive for companies willing to do the trials needed. But if you're going to offer incentives, you need to think carefully about what you're giving people an incentive to do. In this case, the door is open for a company to step in, pick up an existing drug that is being given away to patients for free, a compound that it has spent no money discovering and no money developing, run the fastest trial possible with it, and then jack the price up to whatever the insurance companies might be able to pay. Now, pricing drugs at what the market will pay for them is fine by me. But that's supposed to be a reward for taking on the risk of discovering them and getting them through the approval process. This Catalyst case is another short-circuit in the system, a perverse incentive that some people seem to have no shame about taking advantage of. A similar situation has taken place in the EU with DAP and Biomarin Pharmaceuticals.
The LEMS patient community is not a large one, and they seem to be getting the word out for people to not sign up for Catalyst's clinical trials. Jacobus themselves have realized what's going on, and are running a trial of their own, hoping to file before Catalyst does and pick up the market exclusivity for themselves, so they can continue to supply the compound at the current price: nothing.
It's worth taking a minute to contrast this situation with Biogen's Tecfidera. That's another very small molecule (dimethyl fumarate) being given to patients with a neurological disease. It's also expensive. But in this case, MS patients had not been taking dimethyl fumarate for years (to the best of my knowledge). It was not already in the medical literature as an effective treatment (the way DAP is already there for LEMS). Biogen bought the company with the rights (Fumapharm) and took on the expense of the clinical trials, taking the risk that things might not work out at all. A lot of stuff doesn't. And they're pricing their drug according to what the market will pay, because they also have to fund the many other projects they're working on, most of which can be expected to wipe out at some point.
So how does a situation like Catalyst and DAP affect the drug companies who actually do research? Not too much, you might think, and they apparently think so, too, because I don't recall any statements about any of these cases so far from that end of the industry. They may not want to take any stands that call into question the ability of a company to set the price of its drugs according to what it thinks the market will bear. But since we are not, last I saw, living in some sort of radical libertarian free-for-all, it would be worth remembering that the ability to set such prices is not some sort of inalienable right. It can be restricted or even abrogated entirely by governments all around the world. And one way to get that to happen is for these governments (and, in the democratic states, their constituents) to feel as if they're being taken advantage of by a bunch of cynical manipulators.
Category: Drug Prices | Regulatory Affairs | The Central Nervous System
October 11, 2013
The British press (and to a lesser extent, the US one) was full of reports the other day about some startling breakthrough in Alzheimer's research. We could certainly use one, but is this it? What would an Alzheimer's breakthrough look like, anyway?
Given the complexity of the disease, and the difficulty of extrapolating from its putative animal models, I think that the only way you can be sure that there's been a breakthrough in Alzheimer's is when you see things happening in human clinical trials. Until then, things are interesting, or suggestive, or opening up new possibilities, what have you. But in this disease, breakthroughs happen in humans.
This latest news is nowhere close. That's not to say it's not very interesting - it certainly is, and it doesn't deserve the backlash it'll get from the eye-rolling headlines the press wrote for it. The paper that started all this hype looked at mice infected with a prion disease, which led inexorably to neurodegeneration and death. They seem to have significantly slowed that degenerative cascade (details below), and that really is a notable result. The mechanism behind this, the "unfolded protein response" (UPR), could well be general enough to benefit a number of misfolded-protein diseases, which include Alzheimer's, Parkinson's, and Huntington's, among others. (If you don't have access to the paper, this is a good summary).
The UPR, which is a highly conserved pathway, senses an accumulation of misfolded proteins inside the endoplasmic reticulum. If you want to set it off, just expose the cells you're studying to Brefeldin A; that's its mechanism. The UPR has two main components: a shutdown of translation (and thus further protein synthesis), and an increase in chaperones to try to get the folding pathways back on track. (If neither of these does the trick, things will eventually shunt over to apoptosis, so the UPR can be seen as an attempt to avoid having the apoptotic detonator switch set off too often.)
Shutting down translation causes cell cycle arrest, as well it might, and there's a lot of evidence that it's mediated by PERK, the Protein kinase RNA-like Endoplasmic Reticulum Kinase. The team that reported this latest result had previously shown that two different genetic manipulations of this pathway could modify the course of prion disease in what I think is the exact same animal model. If you missed the wildly excited headlines when that one came out, well, you're not alone - I don't remember there being any. Is it that when something comes along that involves treatment with a small molecule, it looks more real? We medicinal chemists should take our compliments where we can get them.
That is the difference between that earlier paper and this new one. It uses a small-molecule PERK inhibitor (GSK2606414), whose discovery and SAR is detailed here. And this pharmacological PERK inhibition recapitulated the siRNA and gain-of-function experiments very well, behavioral protection included: this really does look quite solid, and establishes the whole PERK end of the UPR as a very interesting field to work in.
The problem is, getting a PERK inhibitor to perform in humans will not be easy. That GSK inhibitor, unfortunately, has side effects that killed it as a development compound. PERK also seems to be a key component of insulin secretion, and in this latest study, the team did indeed see elevated blood glucose and pronounced weight loss, to the point that treated mice eventually had to be sacrificed. Frustratingly, PERK inhibition might actually be a target to treat insulin resistance in peripheral tissue, so if you could just keep an inhibitor out of the pancreas, you might be in business. Good luck with that. I can't imagine how you'd do it.
But there may well be other targets in the PERK-driven pathways that are better arranged for us, and that, I'd think, is where the research is going to swing next. This is a very interesting field, with a lot of promise. But those headlines! First of all, prion disease is not exactly a solid model for Alzheimer's or Parkinson's. Since this pathway works all the way back at the stage of protein misfolding, it might be just the thing to uncover the similarities in the clinic, but that remains to be proven in human trials. There are a lot of things that could go wrong, many of which we probably don't even realize yet. And as just detailed above, the specific inhibitor being used here is strictly a tool compound all the way - there's no way it can go into humans, as some of the news stories got around to mentioning in later paragraphs. Figuring out something that can is going to take a significant amount of effort, and many years of work. Headlines may be in short supply along the way.
Category: Press Coverage | The Central Nervous System
August 29, 2013
As someone who will not be seeing the age of 50 again, I find a good deal of hope in a study out this week from Eric Kandel and co-workers at Columbia. In Science Translational Medicine, they report results from a gene expression study in human brain samples. Looking at the dentate gyrus region of the hippocampus, long known to be crucial in memory formation and retrieval, they found several proteins to have differential expression in younger tissue samples versus older ones. Both sets were from otherwise healthy individuals - no Alzheimer's, for example.
RbAp48 (also known as RBBP4 and NURF55), a protein involved in histone deacetylation and chromatin remodeling, stood out in particular. It was markedly decreased in the samples from older patients, and the same pattern was seen for the homologous mouse protein. Going into mice as a model system, the paper shows that knocking down the protein in younger mice causes them to show memory problems similar to elderly ones (object recognition tests and the good old Morris water maze), while overexpressing it in the older animals brings their performance back to the younger levels. Overall, it's a pretty convincing piece of work.
It should set off a lot of study of the pathways the protein's involved in. My hope is that there's a small-molecule opportunity in there, but it's too early to say. Since it's involved with histone coding, it could well be that this protein has downstream effects on the expression of others that turn out to be crucial players (but whose absolute expression levels weren't changed enough to be picked up in the primary study). Trying to find out what RbAp48 is doing will keep everyone busy, as will the question of how (and/or why) it declines with age. Right now, I think the whole area is wide open.
It is good to hear, though, that age-related memory problems may not be inevitable, and may well be reversible. My own memory seems to be doing well - everyone who knows me well seems convinced that my brain is stuffed full of junk, which detritus gets dragged out into the sunlight with alarming frequency and speed. But, like anyone else, I do get stuck on odd bits of knowledge that I think I should be able to call up quickly, but can't. I wonder if I'm as quick as I was when I was on Jeopardy almost twenty years ago, for example?
(If you don't have access to the journal, here's the news writeup from Science, and here's Sharon Begley at Bloomberg).
Category: Aging and Lifespan | The Central Nervous System
August 12, 2013
The New York Times had a rather confusing story the other day about the PTEN gene, autism, and cancer. Unfortunately, it turned into a good example of how not to explain a subject like this, and it failed to explain (or waited far too long to explain) a number of key concepts. Things like "one gene can be responsible for a lot of different things in a human phenotype", and "genes can have a lot of different mutations, which can also do different things", and "autism's genetic signature is complex and not well worked out, not least because it's such a wide-ranging diagnosis", and (perhaps most importantly) "people with autism are not doomed to get cancer".
Let me refer you to Emily Willingham at Forbes, who does a fine job of straightening things out here. I fear that what can happen at the Times (and other media outlets as well) is that when a reporter scrambles a science piece, there's no one else on the staff who's capable of noticing it. So it just runs as is.
Category: Cancer | The Central Nervous System
July 25, 2013
Ben Cravatt is talking about this work on activity-based protein profiling of serine hydrolase enzymes. That's quite a class to work on - as he says, up to 2% of all the proteins in the body fall into this group, but only half of them have had even the most cursory bit of characterization. Even among the "known" ones, most of their activities are still dark, and only 10% of them have useful pharmacological tools.
He's detailed a compound (PF-3845) that Pfizer found as a screening hit for FAAH, which, although it looked benign, turned out to be a covalent inhibitor due to a reactive arylurea. Pfizer, he says, backed off when this mechanism was uncovered - they weren't ready at the time for covalency, but he says that they've loosened up since then. Studying the compound in various tissues, including the brain, showed that it was extremely selective for FAAH.
Another reactive compound, JZL184, is an inhibitor of monoacylglycerol hydrolase (MAGL). Turns out that its carbamate group also reacts with FAAH, but there's a 300-fold window in the potency. The problem is, that's not enough. In mouse models, hitting both enzymes at the same time leads to behavioral problems. Changing the leaving group to a slightly less reactive (and nonaromatic) hexafluoroisopropanol, though, made the compound selective again. I found this quite interesting - most of the time, you'd think that 300x is plenty of room, but apparently not. That doesn't make things any easier, does it?
In response to a question (from me), he says that covalency is what makes this tricky. The half-life of the brain enzymes is some 12 to 14 hours, so by the time the next once-a-day dose comes in, there's still 20 or 30% of the enzyme shut down, and things get out of hand pretty soon. For a covalent mechanism, he recommends 2000-fold or 5000-fold. On the other hand, he says that when they've had a serine hydrolase-targeted compound, they've never seen it react outside that class (targeting cysteine residues, though, is a very different story). And the covalent mechanism gives you some unique opportunities - for example, deliberately engineering a short half-life, because that might be all you need.
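That 20-30% figure falls straight out of first-order enzyme resynthesis, by the way. A back-of-the-envelope sketch (the clean first-order model, and the function names, are my assumptions, not anything from the talk):

```python
def inhibited_fraction(t_hours: float, enzyme_half_life_hours: float) -> float:
    """Fraction of a covalently inhibited enzyme pool still shut down after
    t_hours, assuming full inhibition at t=0 and first-order resynthesis
    of fresh enzyme with the given half-life."""
    return 0.5 ** (t_hours / enzyme_half_life_hours)

# With a 12-14 h half-life for the brain enzymes, the next once-a-day dose
# arrives while a quarter to a third of the pool is still covalently blocked:
for t_half in (12.0, 14.0):
    frac = inhibited_fraction(24.0, t_half)
    print(f"t1/2 = {t_half:.0f} h -> {frac:.0%} still inhibited at 24 h")
```

Real enzyme turnover and dosing kinetics are messier than this, of course, but even the toy version shows why residual occupancy stacks up dose after dose.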
Category: Chemical Biology | The Central Nervous System
May 9, 2013
Want to be weirded out? Study the central nervous system. I started off my med-chem career in CNS drug discovery, and it's still my standard for impenetrability. There's a new paper in Science, though, that just makes you roll your eyes and look up at the ceiling.
The variety of neurotransmitters is well appreciated - you have all these different and overlapping signaling systems using acetylcholine, dopamine, serotonin, and a host of lesser-known molecules, including such oddities as hydrogen sulfide and even carbon monoxide. And on the receiving end, the various subtypes of receptors are well studied, and those give a tremendous boost to the variety of signaling from a single neurotransmitter type. Any given neuron can have several of these going on at the same time - when you consider how many different axons can be sprawled out from a single cell, there's a lot of room for variety.
That, you might think, is a pretty fair amount of complexity. But note also that the density and population of these receptors can change according to environmental stimuli. That's why you get headaches if you don't have your accustomed coffee in the morning (you've made more adenosine A2 receptors, and you haven't put any fresh caffeine ligand into them). Then there are receptor dimers (homo- and hetero-) that act differently than the single varieties, constitutively active receptors that are always on, until a ligand turns them off (the opposite of the classic signaling mechanism), and so on. Now, surely, we're up to a suitable level of complex function.
Har har, says biology. This latest paper shows, by a series of experiments in rats, that a given population of neurons can completely switch the receptor system it uses in response to environmental cues:
Our results demonstrate transmitter switching between dopamine and somatostatin in neurons in the adult rat brain, induced by exposure to short- and long-day photoperiods that mimic seasonal changes at high latitudes. The shifts in SST/dopamine expression are regulated at the transcriptional level, are matched by parallel changes in postsynaptic D2R/SST2/4R expression, and have pronounced effects on behavior. SST-IR/TH-IR local interneurons synapse on CRF-releasing cells, providing a mechanism by which the brain of nocturnal rats generates a stress response to a long-day photoperiod, contributing to depression and serving as functional integrators at the interface of sensory and neuroendocrine responses.
This remains to be demonstrated in human tissue, but I see absolutely no reason why the same sort of thing shouldn't be happening in our heads as well. There may well be a whole constellation of these neurotransmitter switchovers that can take place in response to various cues, but which neurons can do this, involving which signaling regimes, and in response to what stimuli - those are all open questions. And what the couplings are between the environmental response and all the changes in transcription that need to take place for this to happen, those are going to have to be worked out, too.
There may well be drug targets in there. Actually, there are drug targets everywhere. We just don't know what most of them are yet.
Category: The Central Nervous System
April 2, 2013
Let us take up the case of Tecfidera, the new Biogen/Idec drug for multiple sclerosis, known to us chemists as dimethyl fumarate. It joins the (not very long) list of industrial chemicals (the kind that can be purchased in railroad-car sizes) that are also approved pharmaceuticals for human use. The MS area has seen this before, interestingly.
A year's supply of Tecfidera will set you (or your insurance company) back $54,900. That's a bit higher than many analysts were anticipating - but "a bit higher" here just means a bit over the $50,000 they expected. The ceiling is about $60,000, which is what Novartis's Gilenya (fingolimod) goes for, and Biogen wanted to undercut them a bit. So, 55 long ones for a year's worth of dimethyl fumarate pills - what should one think about that?
Several thoughts come to mind, the first one being (probably) "Fifty thousand dollars for a bunch of dimethyl fumarate? Who's going to stand for that?" But we have an estimate for the second part of that question - Biogen thinks that quite a few people are going to stand for it, rather than stand for fingolimod. I'm sure they've put quite a bit of time and effort into thinking about that price, and that it's their best estimate of maximum profit. How, exactly, do they get away with that? Simple. They get away with it because they were willing to take the compound through clinical trials in MS patients, find out if it's tolerated and if it's efficacious, figure out the dosing regimen, and get it approved for this use by the FDA. If you or I had been willing to do that, and had been able to round up the money and resources, then we would also have the ability to charge fifty grand a year for it (or whatever we thought fit, actually).
What, exactly, gave them the idea that dimethyl fumarate might be good for multiple sclerosis? As it turns out, a German physician described its topical use for psoriasis back in 1959, and a formulation of the compound as a cream (along with some monoesters) was eventually studied clinically by a small company in Switzerland called Fumapharm. This went on the market in Germany in the early 1990s, but the company had neither the means nor the desire to extend their idea outside that region. But since dimethyl fumarate appears to work on psoriasis by modulating the immune system somehow, it did occur to someone that it might also be worth looking at in multiple sclerosis. Biogen began developing dimethyl fumarate for that purpose with Fumapharm, and eventually bought them outright in 2006 as things began to look more promising.
In other words, the connection of dimethyl fumarate as a possible therapy for MS had been out there, waiting to be made, since before many of us were born. Generations of drug developers had their chances to see it. Every company in the business had a chance to get interested in Fumapharm back in the late 80s and early 90s. But only Biogen moved, and in 2013 that move has paid off.
Now we come to two more questions, the first of which is "Should that move be paying off quite so lucratively?" But who gets to decide? Watching people pay fifty grand for a year's supply of dimethyl fumarate is not, on the face of it, a very appealing sight. At least, I don't find it so. But on the other hand, cost-of-goods is (for small molecules) generally not a very large part of the expense of a given pill - a rule of thumb is that such expenses should certainly be below 5% of a drug's selling price, and preferably less than 2%. It's just that it's even less in this case, and Biogen also has fewer worries about their supply chain, presumably. The fact that this drug is dimethyl fumarate is a curiosity (and perhaps an irritating one), but it only lowers Biogen's costs by a couple of thousand a year per patient compared to some other small molecule. The rest of the cost of Tecfidera has nothing to do with what the ingredients are - it's all about what Biogen had to pay to get it on the market, and (most importantly) what the market will bear. If insurance companies believe that paying fifty thousand a year for the drug is a worthwhile expense, then Biogen will be happy to agree with them.
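To put numbers on that rule of thumb (the price is the one quoted in this post; the percentages are the general guideline, not Biogen's actual costs):

```python
list_price = 54_900  # Tecfidera's annual price per patient, from the post

# Cost-of-goods ceilings under the usual small-molecule rule of thumb:
for label, frac in (("hard ceiling, 5% of price", 0.05),
                    ("preferred, 2% of price", 0.02)):
    print(f"{label}: ${list_price * frac:,.0f} per patient per year")
```

So even a generously costed small molecule would run a couple of thousand dollars a year at most; the other $52,000-plus has nothing to do with ingredients.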
The second question is divorced from words like "should", and moves to the practical question of "can". The topical fumarate drug in Europe apparently had fairly wide "homebrew" use among psoriasis patients in other countries, and one has to wonder just a bit about that happening with Tecfidera. Biogen Idec certainly has method-of-use patents, but not composition-of-matter, so it's going to be up to them to try to police this. I found the Makena situation more irritating than this one (and the colchicine one, too), because in those cases, the exact drugs for the exact indications had already been on the market. (Dimethyl fumarate was not a drug for MS until Biogen proved it so, by contrast). But KV Pharmaceuticals had to go after people who were compounding the drug, anyway, and I have to wonder if a secondary market in dimethyl fumarate might develop. I don't know the details of its formulation (and I'm sure that Biogen will make much of it being something that can't be replicated in a basement), but there will surely be people who try it.
Category: Drug Development | Drug Prices | The Central Nervous System
March 21, 2013
If you look at the timeline of a clinical trial, you'll notice that there's often a surprisingly long gap between when the trial actually ends and when its results are ready to announce. If you've ever been involved in working up all that data, you'll know why, but it's usually not obvious to people outside of medical research why it should take so long. (I know how they'd handle the scene in a movie, were any film ever to take on such a subject - it would look like the Oscars, with someone saying "And the winner is. . ." within the first few seconds after the last patient was worked up).
The Danish company NeuroSearch unfortunately provided everyone with a lesson in why you want to go over your trial data carefully. In February of 2010, they announced positive results in a Phase III trial of a drug (pridopidine, Huntexil) for Huntington's (a rare event, that), but two months later they had to take it back. This move cratered their stock price, and investor confidence in general, as you'd imagine. Further analysis, which I would guess involved someone sitting in front of a computer screen, tapping keys and slowly turning pale and sweaty, showed that the drug actually hadn't reached statistical significance after all.
It came down to the varying genetic background in the patients being studied, specifically, the number of CAG repeats. That's the mutation behind Huntington's - once you get up to too many of those trinucleotide repeats in the middle of the gene sequence, the resulting protein starts to behave abnormally. Fewer than 36 CAGs, and you should be fine, but a good part of the severity of the disease has to do with how many repeats past that a person might have. NeuroSearch's trial design was not predicated on such genetic differences, at least not for modeling the primary endpoints. If you took those into account, they reached statistical significance, but if you didn't, you missed.
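The repeat-count thresholds are concrete enough to sketch in code. A toy classifier - note that the 27 and 40 boundaries are the commonly quoted clinical ranges, which I'm supplying here; the post itself only mentions the 36 cutoff:

```python
def huntingtin_cag_status(repeats: int) -> str:
    """Rough clinical interpretation of the CAG repeat count in the
    huntingtin gene. Boundaries are the commonly quoted ones; real
    genetic counseling is considerably subtler."""
    if repeats < 27:
        return "normal"
    if repeats < 36:
        return "intermediate - not expected to develop HD"
    if repeats < 40:
        return "reduced penetrance - may or may not develop HD"
    return "full penetrance - more repeats, generally earlier onset"

print(huntingtin_cag_status(22))   # normal
print(huntingtin_cag_status(43))   # full penetrance
```

A trial analysis that pools a 41-repeat patient with a 50-repeat patient is, in effect, mixing quite different disease trajectories - which is exactly the covariate NeuroSearch's original analysis left out.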
That's unfortunate, but could (in theory) be worse - after all, their efficacy did seem to track with a clinically relevant measure of disease severity. But you'll have noticed that I'm wording all these sentences in the past tense. The company has announced that they're closing. It's all been downhill since that first grim announcement. In early 2011, the FDA rejected their New Drug Application, saying that the company needed to provide more data. By September of that year, they were laying off most of their employees to try to get the resources together for another Phase III trial. In 2012, the company began shopping Huntexil around, as it became clear that they were not going to be able to develop it themselves, and last September, Teva purchased the program.
This is a rough one, because for a few weeks there in 2010, NeuroSearch looked like they had made it. If you want to see the fulcrum, the place about which whole companies pivot, go to clinical trial design. It's hard to overstate just how important it is.
Category: Clinical Trials | The Central Nervous System
January 18, 2013
Here's another one to file under "What we don't know about brain chemistry". That's a roomy category for sure, which (to be optimistic about it) leaves a lot of room for discovery. In that category is the observation that ketamine seems to dramatically help some people with major depression. It's an old drug, of course, still used in some situations as an anesthetic, and also used (or abused) by people who wish to deliberately derange themselves in dance clubs. Chemists will note the chemical resemblance to phencyclidine (PCP), a compound whose reputation for causing derangement is thoroughly deserved. (Ketamine was, in fact, a "second-generation" version of PCP, many years on).
Both of these compounds are, among other things, NMDA receptor antagonists. That had not been considered a high-priority target for treating depression, but you certainly can't argue with results (not, at least, when you know as little about the mechanisms of depression as we do). There are better compounds around, fortunately:
AZD6765, an inhibitor of the N-methyl-D-aspartate (NMDA) receptor, a glutamate signaling protein involved in cellular mechanisms for learning and memory, was originally developed as a treatment for stroke. It was shelved in 2000 by the drug's manufacturer, AstraZeneca, after phase 2 trials failed to show signs of efficacy. In the decade that followed, however, small clinical reports started to emerge showing that ketamine, an analgesic that also blocks the NMDA receptor, produced rapid responses in people who didn't benefit from any other antidepressants. And unlike most therapies for major depression, which usually take weeks to kick in, ketamine's mood-lifting effects could be seen within two hours, with a therapeutic boost that often lasted for weeks following a single infusion. Ketamine treatment also came with a number of debilitating side effects, though, including psychosis and detachment from reality. Fortunately for AstraZeneca, the company had a cleaner drug on its shelves that could harness ketamine's benefits with fewer problems.
Note that AZD6765 (lanicemine) has a rather simple structure, further confirmation (if anyone needed any) that things this size can be very effective drugs. Here's the clinical study that the Nature Medicine news item refers to, and it makes clear that this was a pretty tough patient cohort:
This double-blind, placebo-controlled, proof-of-concept study found that a single intravenous infusion of a low-trapping nonselective NMDA channel blocker in patients with treatment-resistant MDD rapidly (within minutes) improved depressive symptoms without inducing psychotomimetic effects. However, this improvement was transitory. To our knowledge, this is the first report showing rapid antidepressant effects associated with a single infusion of a low-trapping nonselective NMDA channel blocker that did not induce psychotomimetic side effects in patients with treatment-resistant MDD.
More specifically, patient depression scores improved significantly more in patients receiving AZD6765 than in those receiving placebo, and this improvement occurred as early as 80 min. This difference was statistically significant for the MADRS, HDRS, BDI, and HAM-A. These findings are particularly noteworthy, because a large proportion of study participants had a substantial history of past treatment that was not efficacious. The mean number of past antidepressant trials was seven, and 45% of participants had failed to respond to electroconvulsive therapy.
The problem is the short duration. By one evaluation scale, the effects only lasted about two hours (by another, less stringent test, some small effect could still be seen out to one or two days). Ketamine lasts longer, albeit at the cost of some severe side effects. This doesn't seem to be a matter of high clearance of AZD6765 (its PK had been well worked out when it was a candidate for stroke). Other factors might be operating:
These differences could be due to subunit selectivity and trapping blockade. It is also possible that the metabolites of ketamine might be involved in its relatively sustained antidepressant effects, perhaps acting on off-site targets; a recent report described active ketamine metabolites that last for up to 3 days. It is also important to note that, although trapping blockade or broadness of antagonist effects on the NMDA subunit receptors might be key to the robustness of antidepressant effects, these same properties might be involved in the dissociative and perceptual side effects of ketamine. Notably, these side effects were not apparent at the dose of AZD6765 tested.
If that last part is accurate, this is going to be a tricky target to work with. I doubt if AZD6765 itself has a future as an antidepressant, but if it can help to understand that mode of action, what the downstream effects might be, and which ones are important, it could lead to something very valuable indeed. The time and effort that will be needed for that is food for thought, particularly when you consider the patients in this study. What must it be like to feel the poison cloud of major depression lift briefly, only to descend again? The Nature Medicine piece has this testimony:
(David) Prietz, 48, a scheduling supervisor at a sheet-metal manufacturer in Rochester, New York, who has been on disability leave for several years, started to feel his head clear from the fog of depression within days of receiving AZD6765. After his second infusion, he vividly began noticing the fall foliage of the trees outside his doctor's office—something he hadn't previously appreciated in his depressed state. “The greens seemed a lot greener and the blue sky seemed a lot bluer,” he says. Although the lift lasted only a couple months after the three-week trial finished and the drug was taken away, the experience gave Prietz hope that he might one day get better. “I can't recall feeling as well I did at the time,” he says.
Fall foliage for Algernon? I hope we can do something for these people, because as it is, a short-duration effect is scientifically fascinating but emotionally cruel.
Category: Clinical Trials | The Central Nervous System
January 10, 2013
There's a paper out in Nature with the provocative title of "Automated Design of Ligands to Polypharmacological Profiles". Admittedly, to someone outside my own field of medicinal chemistry, that probably sounds about as dry as the Atacama desert, but it got my attention.
It's a large multi-center contribution, but the principal authors are Andrew Hopkins at Dundee and Bryan Roth at UNC-Chapel Hill. Using James Black's principle that the best place to find a new drug is to start with an old drug, what they're doing here is taking known ligands and running them through a machine-learning process to see if they can introduce new activities into them. Now, those of us who spend time trying to take out other activities might wonder what good this is, but there are some good reasons: for one thing, many CNS agents are polypharmacological to start with. And there certainly are situations where you want dual-acting compounds, CNS or not, which can be a major challenge. And read on - you can run things to get selectivity, too.
So how well does their technique work? The example they give starts with the cholinesterase inhibitor donepezil (sold as Aricept), which has a perfectly reasonable med-chem look to its structure. The group's prediction, using their current models, was that it had a reasonable chance of having D4 dopaminergic activity, but probably not D2 (which predictions were borne out by experiment, and might have something to do with whatever activity Aricept has for Alzheimer's). I'll let them describe the process:
We tested our method by evolving the structure of donepezil with the dual objectives of improving D2 activity and achieving blood–brain barrier penetration. In our approach the desired multi-objective profile is defined a priori and then expressed as a point in multi-dimensional space termed ‘the ideal achievement point’. In this first example the objectives were simply defined as two target properties and therefore the space has two dimensions. Each dimension is defined by a Bayesian score for the predicted activity and a combined score that describes the absorption, distribution, metabolism and excretion (ADME) properties suitable for blood–brain barrier penetration (D2 score = 100, ADME score = 50). We then generated alternative chemical structures by a set of structural transformations using donepezil as the starting structure. The population was subsequently enumerated by applying a set of transformations to the parent compound(s) of each generation. In contrast to rules-based or synthetic-reaction-based approaches for generating chemical structures, we used a knowledge-based approach by mining the medicinal chemistry literature. By deriving structural transformations from medicinal chemistry, we attempted to mimic the creative design process.
Hmm. They rank these compounds in multi-dimensional space, according to distance from the ideal end point, filter them for chemical novelty, Lipinski criteria, etc., and then use the best structures as starting points for another round. This continues until you get close enough to the desired point, or until improvement dead-ends. In this case, they ended up with fairly active D2 compounds, by going to a lactam in the five-membered ring, lengthening the chain a bit, and going to an arylpiperazine on the end. They also predicted, though, that these compounds would hit a number of other targets, which they indeed did on testing.
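That generate-score-select loop is a standard evolutionary scheme, and can be sketched in a few lines. Everything below is a toy stand-in: in the real method the scores come from Bayesian activity models plus an ADME model, the transformations are mined from the medicinal chemistry literature, and all the names and numbers here are mine:

```python
import math
import random

random.seed(0)

IDEAL = (100.0, 50.0)  # the 'ideal achievement point': (activity score, ADME score)

def predicted_scores(compound: float) -> tuple:
    # Toy models: pretend a "compound" is a single number, with activity
    # equal to that number and ADME penalized as it drifts away from 80.
    return (compound, 50.0 - abs(compound - 80.0) / 4.0)

def distance_to_ideal(compound: float) -> float:
    activity, adme = predicted_scores(compound)
    return math.hypot(IDEAL[0] - activity, IDEAL[1] - adme)

def transformations(parent: float) -> list:
    # Stand-in for literature-derived structural transformations of a parent.
    return [parent + random.uniform(-10.0, 10.0) for _ in range(20)]

def evolve(start: float, generations: int = 10, keep: int = 5) -> float:
    parents = [start]
    for _ in range(generations):
        children = [c for p in parents for c in transformations(p)]
        # (The real method also filters for novelty, Lipinski criteria, etc.)
        children.sort(key=distance_to_ideal)
        parents = children[:keep]
    return parents[0]

best = evolve(start=30.0)
print(f"start: {distance_to_ideal(30.0):.1f} from ideal; "
      f"evolved: {distance_to_ideal(best):.1f} from ideal")
```

Each generation enumerates transformed structures, ranks them by distance to the ideal point, and keeps the best as parents; all the medicinal-chemistry knowledge lives in the scoring and transformation functions.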
How about something a bit more. . .targeted? They tried taking these new compounds through another design loop, this time trying to get rid of all the alpha-adrenergic activity they'd picked up, while maintaining the 5-HT1A and dopamine receptor activity they now had. They tried it both ways - running the algorithms with filtration of the alpha-active compounds at each stage, and without. Interestingly, both optimizations came up with very similar compounds, differing only out on the arylpiperazine end. The alpha-active series wanted ortho-methoxyphenyl on the piperazine, while the alpha-inactive series wanted 2-pyridyl. These preferences were confirmed by experiment as well. Some of you who've worked on adrenergics might be saying "Well, yeah, that's what the receptors are already known to prefer, so what's the news here?" But keep in mind, what the receptors are known to prefer is what's been programmed into this process, so of course, that's what it's going to recapitulate. The idea is for the program to keep track of all the known activities - the huge potential SAR spreadsheet - so you don't have to try to do it yourself, with your own grey matter.
The last example asks whether, starting from donepezil, potent and selective D4 compounds could be evolved. I'm going to reproduce the figure from the paper here, to give an idea of the synthetic transformations involved:
So, donepezil (compound 1) is 614 nM against D4, and after a few rounds of optimization, you get structure 13, which is 9 nM. Not bad! Then if you take 13 as a starting point, and select for structural novelty along the way, you get 18 (five micromolar against D4), 20, 21, and (S)-27 (which is 90 nM at D4). All of these compounds picked up a great deal more selectivity for D4 compared to the earlier donepezil-derived scaffolds as well.
Well, then, are we all out of what jobs we have left? Not just yet. You'll note that the group picked GPCRs as a field to work in, partly because there's a tremendous amount known about their SAR preferences and cross-functional selectivities. And even so, of the 800 predictions made in the course of this work, the authors claim about a 75% success rate - pretty impressive, but not the All-Seeing Eye, quite yet. I'd be quite interested in seeing these algorithms tried out on kinase inhibitors, another area with a wealth of such data. But if you're dwelling among the untrodden ways, like Wordsworth's Lucy, then you're pretty much on your own, I'd say, unless you're looking to add in some activity in one of the more well-worked-out classes.
But knowledge piles up, doesn't it? This approach is the sort of thing that will not be going away, and should be getting more powerful and useful as time goes on. I have no trouble picturing an eventual future where such algorithms do a lot of the grunt work of drug discovery, but I don't foresee that happening for a while yet. Unless, of course, you do GPCR ligand drug discovery. In that case, I'd be contacting the authors of this paper as soon as possible, because this looks like something you need to be aware of.
Category: Drug Assays | In Silico | The Central Nervous System
December 20, 2012
Tiny Allon Therapeutics had an ambitious plan to go after progressive supranuclear palsy, a kind of progressive brain deterioration, and thence (they hoped) to other neurodegenerative disorders. The lead compound was davunetide, an oligopeptide derived from activity-dependent neuroprotective protein, ADNP.
It was a reasonable idea, but neurodegeneration is not a reasonable area. The drug has now completely wiped out in the clinic, failing both primary endpoints in its pivotal trial. This is one example of the sort of research that most people don't ever hear about, from a small company that most people will never have heard of at all. But this is the background activity of drug research (with an all-too-common outcome), and if more people were aware of it, perhaps that would be a good thing (see today's other post).
Category: Clinical Trials | The Central Nervous System
November 23, 2012
I wanted to mention that the crowdfunded CNS research that I mentioned here is now in its final 48 hours for donations. Money seems to be picking up, but it'll be close to see if they can make their target. If you're interested, donations can be made here.
Category: The Central Nervous System
October 18, 2012
One of the questions I get asked most often, by people outside of the drug industry, is whether generic medications really are the same as the original branded ones. My answer has always been the same: that yes, they are. And that's still my answer, but I'll have to modify it a bit, because we're seeing an exception right now. Update: more exceptions are showing up in the comments section.
Unfortunately, "right now" turns out, in this case, to mean "over the last five years". The problem here is bupropion (brand name Wellbutrin), the well-known antidepressant. A generic version of it came on the market in 2006, and it went through the usual FDA review. For generic drugs, the big question is bioequivalence: do they deliver the same ingredient in the same way as the originally approved drug and formulation? The agency requires generic drug applications to show proof of this for their own version.
For bupropion/Wellbutrin, the case is complicated by the two approved doses, 150mg and 300mg. The higher dose is associated with a risk of seizures, which made the FDA grant a waiver for its testing - they extrapolated from the 150mg data instead. And right about here is where the red flags began to go up. The agency began to receive reports, almost immediately, of trouble with the 300mg generic dose. In many cases, these problems (lack of efficacy and/or increased side effects) resolved when patients switched back to the original branded formulation. That link also shows the pharmacokinetic data comparing the two 150mg dosages (branded and generic), which turned out to have some differences, mostly in the time it took to reach the maximum concentration (the generic came on a bit faster).
At the time, though, as that link shows, the FDA decided that because of the complicated clinical course of depression (and antidepressant therapy) they couldn't blame the reported problems on a difference between the two 300mg products. A large number of patients were taking each one, and the number of problems reported could have been explained by the usual variations:
The FDA considers the generic form of bupropion XL 300 mg (Teva Pharmaceuticals) bioequivalent and therapeutically equivalent to (interchangeable with) Wellbutrin XL 300 mg. Although there are small differences in the pharmacokinetic profiles of these two formulations, they are not outside the established boundaries for equivalence nor are they different from other bupropion products known to be effective. The recurrent nature of (major depression) offers a scientifically reasonable explanation for the reports of lack of efficacy following a switch to a generic product. The adverse effects (e.g., headache, GI disorder, fatigue and anxiety) reported following a switch were relatively few in number and typical of adverse drug events reported in drug and placebo groups in most clinical trials. . .
But they seem to have changed their minds about this. It appears that reports continued to come in, and were associated most frequently with the generic version marketed by Teva (and produced by Impax Pharmaceuticals). That FDA page I've quoted above is not dated, but appears to come from late 2007 or so. As it turns out, the agency was at that time asking Teva to conduct that missing bioequivalence study with their 300mg product. See Q12 on this page:
FDA continued to review postmarketing reports throughout 2007. In November 2007, taking into consideration reports of lack of efficacy, FDA requested that Impax/Teva conduct a bioequivalence study directly comparing Budeprion XL 300 mg to Wellbutrin XL 300 mg. The study protocol stipulated the enrollment of patients who reported problems after switching from Wellbutrin XL 300 mg to Budeprion XL 300 mg. Impax/Teva began the study, but terminated it in late 2011, reporting that despite efforts to enroll patients, Impax/Teva was unable to recruit a significant number of affected patients.
The agency apparently was continuing to receive reports of problems, because they ended up deciding to run their own study, which is an uncommon move. This got underway before Teva officially gave up on their study, which gives one the impression that the FDA did not expect anything useful from them by that point:
In 2010, because of the public health interest in obtaining bioequivalence data, FDA decided to sponsor a bioequivalence study comparing Budeprion XL 300 mg to Wellbutrin XL 300 mg. The FDA-sponsored study enrolled 24 healthy adult volunteers and examined the rate and extent of absorption of the two drug products under fasting conditions. In that study, the results of which became available in August 2012, Budeprion XL 300 mg failed to demonstrate bioequivalence to Wellbutrin XL 300 mg.
That FDA-sponsored study is what led to the recent decision to pull the Impax/Teva 300mg product from the market. Their 150mg dosage is still approved, and doesn't seem to have been associated with any increased reports of trouble (despite the small-but-real PK differences noted above). And it's also worth noting that there are four other generic 300mg bupropion/Wellbutrin products out there, which do not seem to have caused problems.
How big a difference are we talking about here? There are several measurements that are used for measuring blood levels of a drug. You have Cmax, the maximum concentration that is seen at a given dosage, and there's also Tmax, the time at which that maximum concentration occurs. And if you plot blood levels versus time, you also get AUC (area under the curve), which is a measure of the total exposure that a given dose provides. There are a lot of ways these measurements can play out: a very quickly absorbed drug will have an early Tmax and a large Cmax, for example, but that concentration might come back down quickly, too, which could lead to a lower AUC than a formulation of the same drug (at the same nominal dose) that came on more slowly and spread out over a longer time period. To add to the fun, some drugs have efficacy that's more driven by how high their Cmax values can get, while others are more driven by how large the AUCs are. And in the case of bupropion/Wellbutrin, there's an additional complication: some of the drug's efficacy is due to a metabolite, a further compound produced in the liver after dosing, and such metabolites have their own PK profiles, too.
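For concreteness, here's a minimal sketch of how the three measures just described fall out of a concentration-time profile. The numbers are invented for illustration, not real bupropion data:

```python
# Cmax, Tmax, and AUC from a sampled (time, concentration) profile.
# AUC is computed with the linear trapezoidal rule, the standard
# first-pass approach for this kind of data.
def pk_summary(times, concs):
    cmax = max(concs)                      # highest observed concentration
    tmax = times[concs.index(cmax)]        # time at which it occurred
    auc = sum((t2 - t1) * (c1 + c2) / 2    # trapezoid per sampling interval
              for t1, t2, c1, c2 in zip(times, times[1:], concs, concs[1:]))
    return cmax, tmax, auc

times = [0, 1, 2, 4, 8, 12, 24]       # hours post-dose
concs = [0, 40, 90, 70, 45, 25, 5]    # ng/mL, invented values
cmax, tmax, auc = pk_summary(times, concs)
print(cmax, tmax, auc)  # 90 90 ng/mL at 2 h; AUC 795 ng·h/mL
```

A faster-absorbing formulation would shift Tmax earlier and push Cmax up, and (as the text notes) could still end up with a lower AUC if the levels fall off just as quickly.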
So in this case, it turns out that the AUC just missed on the low side. The FDA wants the statistical 90% confidence interval to fall between 80 and 125% compared to the original drug, and in this case the 90% CI was 77-96%. The Cmax was definitely lower, too - 90% CI was 65-87% of the branded product. And while the agency doesn't provide numbers for the metabolite, they also state that it missed meeting the standards as well. There are drugs, it should be said, that would still be effective at these levels, but Wellbutrin clearly isn't one of them.
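The 80-125% criterion itself is simple to state in code. This is a toy check, taking the confidence intervals as given (they come out of the crossover-study statistics) rather than deriving them from raw data:

```python
# The bioequivalence acceptance window: the entire 90% confidence
# interval of the test/reference ratio must fall within 80-125%.
def bioequivalent(ci_low_pct, ci_high_pct):
    """True only if the whole 90% CI sits inside the 80-125% window."""
    return ci_low_pct >= 80.0 and ci_high_pct <= 125.0

# The AUC and Cmax intervals quoted above for the Impax/Teva product:
print(bioequivalent(77, 96))   # AUC: fails (lower bound under 80%)
print(bioequivalent(65, 87))   # Cmax: fails by a wider margin
print(bioequivalent(85, 110))  # a passing interval, for contrast
```

Note that it's the interval, not the point estimate, that has to stay inside the window - which is why the AUC result here fails even though a 77-96% range overlaps the acceptable region.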
My own take is that the FDA was willing to consider the adverse reports as just the usual noisy clinical situation with an antidepressant until the other generics were approved, at which point it became clear that the problems were clustering around the Impax/Teva product. Here's how the FDA addresses the "Why didn't we find out about this earlier?" question:
Q17. In retrospect, were FDA’s decisions regarding the approval and ongoing monitoring of Budeprion XL 300 mg appropriate?
A17. A less cautious approach in studying the bioequivalence of Budeprion XL 300 mg could have brought the data to light earlier. The FDA-sponsored study was completed only weeks ago, which is a very short time for data from a clinical experiment to be announced to the public.
Bupropion is associated with a risk for seizures, which was the basis of the Agency's cautious approach with regard to the early Budeprion XL bioequivalence studies, in which data were extrapolated from Budeprion XL 150 mg in patients to the projected consequences of exposure to Budeprion 300 mg. In retrospect, it is clear that this extrapolation did not provide the right conclusion regarding bioequivalence of Budeprion XL 300 mg. FDA also has much more knowledge today of the seizure-associated risk of bupropion-containing drugs. The trial design of the sponsor-initiated study of 2007 could have been successful, had it been replaced by the trial design employed in the recent FDA-sponsored study.
Of course, the trial design in the sponsor-initiated study of 2007 was that requested by the FDA. But Teva, for their part, does not appear to have been a ball of fire in getting that study recruited and completed, either. It's quite possible, though, that they couldn't round up enough patients who'd had trouble with the generic switch and were also willing to go back and experience that again in the cause of science. Overall, I think that the FDA is more on the hook here for letting things go on as long as they did, but there's plenty of blame to go around.
Still, I find this post at Forbes to be full of unnecessary hyperventilation. You wouldn't know, from reading it, that the FDA initially waived the requirement for 300mg testing in this case because of the risk of seizures. There's a line in there about how the agency is making patients their guinea pigs by not testing at the higher dose, but you could have scored the same debating points after a 300mg study that harmed its patients, which at the time looked like the likely outcome. You also wouldn't know that the other generic 300mg formulations don't seem to have been associated with increased adverse-event reports, either.
And that post makes much of the way that these bioequivalence tests are left up to the manufacturers. That they are: but if you want to change that, you're going to have to (1) fund the FDA at a much higher level, and (2) wait longer for generic switches to occur. The generic manufacturers will run these tests at the absolute first possible moment, since they want to get onto the market. The FDA will run them when they get around to it; they don't have the same incentives at all. Their incentives, in fact, oscillate between "Don't approve - there might be trouble" and "Definitely approve - we might be missing out on benefit". The winds of fortune blow the line between those two around all the time.
In this case, I think the FDA should have exercised its court-of-last-resort function earlier and more forcefully. But that's easy for me to say, sitting where I am. I don't have to see the mass of noisy adverse event reports coming in over the transom day after day. If the agency acted immediately and forcefully on every one, we'd have no drugs on the market at all. There's a middle ground, but boy, is it hard to find.
Category: Clinical Trials | Regulatory Affairs | The Central Nervous System
October 8, 2012
It hasn't been good over at Targacept. They had a big antidepressant failure a while back, and last month ended development of an ADHD drug, the nicotinic acetylcholine receptor ligand TC-5619.
They cut staff back in the spring, and the CEO departed. Now the expected has happened: the company has apparently laid off everyone in research, and is conserving what cash it has to try to get something to the deal-making point. A sad, but familiar story in this business. . .sometimes companies come back after this point, and sometimes the event horizon turns out to have been passed.
Category: Business and Markets | The Central Nervous System
October 5, 2012
Ethan Perlstein at Princeton is the main author of this research on sertraline that I blogged about earlier this year. Now he's looking to crowdfund his next research project, on the neuronal effects of amphetamines. He's trying to raise $25,000 to do radiolabeling and electron microscopy studies, which would make this the largest crowdfunding experiment in the sciences so far (but still, I might add, small change compared to the sorts of grants that much of academia spends its time trying to line up).
What he's looking at is 2 to 3 months of work for one MS-level scientist. In this post he describes some of the reactions he's had to the idea so far, and lists the benefits that donors will receive, according to the amounts they contribute. That list is a real eye-opener, let me tell you - it's a different world we're entering, or trying to enter, at any rate. For example: "$100 or higher – You’ll get a hearty thanks in person, and the opportunity to talk science over a round of beer or glass of wine at a NYC watering hole one night after work, or when you visit NYC within the next 6 months." Or how about this one: "$1,000 or higher – Attend up to 2 lab meetings during the project and 1 publication brainstorming session at the end of the project. You will also receive access to a Google Doc during the manuscript writing stage. Supporters who contribute substantially to the final manuscript may receive co-authorship."
Needless to say, I'm going to watch this with great interest. The projects that can be funded at this level (with some expectation of producing something useful) are, perhaps, special cases, but it's the principle of the thing that intrigues me the most. That's why I'm also putting this one in the "Business and Markets" category, because asking for donations like this is a pure market activity. As a person with a pronounced free-market bias, I'm very much wondering how this will all play out. Thoughts?
Update: Wavefunction has a post on this here.
Category: Business and Markets | The Central Nervous System
September 20, 2012
Swamped with all sorts of stuff today - when science marches on, you have to make sure that it's not leaving its bootprints on your back. But I do have some interesting links:
The bluest of blue-sky brain research, funded by Paul Allen. Fascinating stuff, on several levels - here's a big publication that came out this week. I find the phenomenon of tech-billionaire funding for things like this, asteroid mining, low-cost orbital access and the like very encouraging. (And of course, the Gates Foundation is doing a lot in more earthbound pursuits).
The Wall Street Journal reveals what is apparently a rather ill-kept secret: most firms funded by venture capital fail. "Most", as in about 3 out of 4. That's a loose definition, though - as the article says, if you're talking total wipeout of capital, then that's about one third of them. If you're talking about failing to see the projected return in the projected time, well, that's over 90%. But it's all about the ones that succeed, just like the drug business.
The Royal Society of Chemistry, in a rather self-congratulatory press release, pledges money to help authors publish their work open-access in RSC journals. The UK government is putting money into this, but no one's sure if it'll be enough.
Do you want to make this compound? No? Neither do I. Especially not when they turn around and stick three more nitro groups onto it.
Category: Business and Markets | The Central Nervous System | The Scientific Literature
August 31, 2012
Eli Lilly has been getting shelled with bad news recently. There was the not-that-encouraging-at-all failure of its Alzheimer's antibody solanezumab to meet any of its clinical endpoints. But that's the good news, since (at least according to the company) it showed some signs of something in some patients.
We can't say that about pomaglumetad methionil (LY2140023), their metabotropic glutamate receptor ligand for schizophrenia, which is being halted. The first large trial of the compound failed to meet its endpoint, and an interim analysis showed that the drug was unlikely to have a chance of making its endpoints in the second trial. It will now disappear, as will the money spent on it so far. (The first drug project I ever worked on was a backup for an antipsychotic with a novel mechanism, which also failed to do a damned thing in the clinic, and which experience perhaps gave me some of the ideas I have now about drug research).
This compound is an oral prodrug of LY404039, which has a rather unusual structure. The New York Times did a story about the drug's development a few years ago, which honestly makes rather sad reading in light of the current news. It was once thought to have great promise. Note the cynical statement in that last link about how it really doesn't matter if the compound works or not - but you know what? It did matter in the end. This was the first compound of its type, an attempt at a real innovation through a new mechanism to treat mental illness, just the sort of thing that some people will tell you that the drug industry never gets around to doing.
And just to round things off, Lilly announced the results of a head-to-head trial of its antiplatelet drug Effient versus (now generic) Plavix in acute coronary syndrome. This is the sort of trial that critics of the drug industry keep saying never gets run, by the way. But this one was, because Plavix is the thing to beat in that field - and Effient didn't beat it, although there might have been an edge in long-term followup.
Antithrombotic drugs are a tough field - there are a lot of patients, a lot of money to be made, and a lot of room (in theory) for improvement over the existing agents. But just beating heparin is hard enough, without the additional challenge of beating cheap Plavix. It's a large enough patient population, though, that more than one drug is needed because of different responses.
There have been a lot of critics of Lilly's research strategy over the years, and a lot of shareholders have been (and are) yelling for the CEO's head. But from where I sit, it looks like the company has been taking a lot of good shots. They've had a big push in Alzheimer's, for example. Their gamma-secretase inhibitor, which failed in terrible fashion, was a first of its kind. Someone had to be the first to try this mechanism out; it's been a goal of Alzheimer's research for over twenty years now. Solanezumab was a tougher call, given the difficulties that Elan (and Wyeth/Pfizer, J&J, and so on) have had with that approach over the years. But immunology is a black box, different antibodies do different things in different people, and Lilly's not the only company trying the same thing. And they've been doggedly pursuing beta-secretase as well. These, like them or not, are still some of the best ideas that anyone has for Alzheimer's therapy. And any kind of win in that area would be a huge event - I think that Lilly deserves credit for having the nerve to go after such a tough area, because I can tell you that I've been avoiding it ever since I worked on it in the 1990s.
But what would I have spent the money on instead? It's not like there are any low-risk ideas crowding each other for attention. Lilly's portfolio is not a crazy or stupid one - it's not all wild ideas, but it's not all full of attempts to play it safe, either. It looks like the sort of thing any big (and highly competent) drug research organization could have ended up with. The odds are still very much against any drug making it through the clinic, which means that having three (or four, or five) in a row go bad on you is not an unusual event at all. Just a horribly unprofitable one.
Category: Cardiovascular Disease | Clinical Trials | Drug Development | Drug Industry History | The Central Nervous System
August 14, 2012
I wrote here about Ampyra, the multiple sclerosis drug from Acorda Therapeutics, one that came close to the record for "simplest chemical matter in a marketed drug". (As it happens, Biogen Idec is making sure that it doesn't even have the title of "simplest drug for multiple sclerosis", and the shadow of valproic acid looms over this entire competition).
That post mentioned some doubts that had been expressed about how effective Ampyra is for its target: improving gait in MS patients. And now those doubts are increasing, because the company has been asked to conduct a trial of a lower 5 mg dose of the drug along with the approved 10 mg one (which was associated with seizures in some patients). And neither one of them met the primary endpoint. As that link shows, the company has several explanations - different endpoint than used before, higher placebo response than usual, wider variety of patients - but those are all ex post facto. Acorda wouldn't have set up the trial like this in the first place if they didn't think that the approved dose would work, and it didn't.
For a drug with a rather narrow symptomatic indication, that's not good news. And it comes as Acorda is still trying to get the compound approved in Europe. The cost/benefit ratio usually can't stand a big hit to the "benefit" term.
Category: Clinical Trials | Regulatory Affairs | The Central Nervous System
August 9, 2012
The British Medical Journal says that the "widely touted innovation crisis in pharmaceuticals is a myth". The British Medical Journal is wrong.
There, that's about as direct as I can make it. But allow me to go into more detail, because that's not the only thing they're wrong about. This is a new article entitled "Pharmaceutical research and development: what do we get for all that money?", and it's by Joel Lexchin (York University) and Donald Light of UMDNJ. And that last name should be enough to tell you where this is all coming from, because Prof. Light is the man who's publicly attached his name to an estimate that developing a new drug costs about $43 million.
I'm generally careful, when I bring up that figure around people who actually develop drugs, not to do so when they're in the middle of drinking coffee or working with anything fragile, because it always provokes startled expressions and sudden laughter. These posts go into some detail about how ludicrous that number is, but for now, I'll just note that it's hard to see how anyone who seriously advances that estimate can be taken seriously. But here we are again.
Light and Lexchin's article makes much of Bernard Munos' work (which we talked about here), which shows a relatively constant rate of new drug discovery. They should go back and look at his graph, because they might notice that the slope of the line in recent years has not kept up with the historical rate. And they completely leave out one of the other key points that Munos makes: that even if the rate of discovery were to have remained linear, the costs associated with it sure as hell haven't. No, it's all a conspiracy:
"Meanwhile, telling "innovation crisis" stories to politicians and the press serves as a ploy, a strategy to attract a range of government protections from free market, generic competition."
Ah, that must be why the industry has laid off thousands and thousands of people over the last few years: it's all a ploy to gain sympathy. We tell everyone else how hard it is to discover drugs, but when we're sure that there are no reporters or politicians around, we high-five each other at how successful our deception has been. Because that's our secret, according to Light and Lexchin. It's apparently not any harder to find something new and worthwhile, but we'd rather just sit on our rears and crank out "me-too" medications for the big bucks:
"This is the real innovation crisis: pharmaceutical research and development turns out mostly minor variations on existing drugs, and most new drugs are not superior on clinical measures. Although a steady stream of significantly superior drugs enlarges the medicine chest from which millions benefit, medicines have also produced an epidemic of serious adverse reactions that have added to national healthcare costs".
So let me get this straight: according to these folks, we mostly just make "minor variations", but the few really new drugs that come out aren't so great either, because of their "epidemic" of serious side effects. Let me advance an alternate set of explanations, one that I call, for lack of a better word, "reality". For one thing, "me-too" drugs are not identical, and their benefits are often overlooked by people who do not understand medicine. There are overcrowded therapeutic areas, but they're not common. The reason that some new drugs make only small advances on existing therapies is not because we like it that way, and it's especially not because we planned it that way. This happens because we try to make big advances, and we fail. Then we take what we can get.
No therapeutic area illustrates this better than oncology. Every new target in that field has come in with high hopes that this time we'll have something that really does the job. Angiogenesis inhibitors. Kinase inhibitors. Cell cycle disruptors. Microtubules, proteosomes, apoptosis, DNA repair, metabolic disruption of the Warburg effect. It goes on and on and on, and you know what? None of them work as well as we want them to. We take them into the clinic, give them to terrified people who have little hope left, and we watch as we provide them with, what? A few months of extra life? Was that what we were shooting for all along, do we grin and shake each others' hands when the results come in? "Another incremental advance! Rock and roll!"
Of course not. We're disappointed, and we're pissed off. But we don't know enough about cancer (yet) to do better, and cancer turns out to be a very hard condition to treat. It should also be noted that the financial incentives are there to discover something that really does pull people back from the edge of the grave, so you'd think that we money-grubbing, public-deceiving, expense-padding mercenaries might be attracted by that prospect. Apparently not.
The same goes for Alzheimer's disease. Just how much money has the industry spent over the last quarter of a century on Alzheimer's? I worked on it twenty years ago, and God knows that never came to anything. Look at the steady march, march, march of failure in the clinic - and keep in mind that these failures tend to come late in the game, during Phase III, and if you suggest to anyone in the business that you can run an Alzheimer's Phase III program and bring the whole thing in for $43 million, you'll be invited to stop wasting everyone's time. Bapineuzumab's trials have surely cost several times that, and Pfizer/J&J are still pressing on. And before that you had Elan working on active immunization, which is still going on, and you have Lilly's other antibody, which is still going on, and Genentech's (which is still going on). No one has high hopes for any of these, but we're still burning piles of money to try to find something. And what about the secretase inhibitors? How much time and effort has gone into beta- and gamma-secretase? What did the folks at Lilly think when they took their inhibitor way into Phase III only to find out that it made Alzheimer's slightly worse instead of helping anyone? Didn't they realize that Professors Light and Lexchin were on to them? That they'd seen through the veil and figured out the real strategy of making tiny improvements on the existing drugs that attack the causes of Alzheimer's? What existing drugs that target the causes of Alzheimer's are they talking about?
Honestly, I have trouble writing about this sort of thing, because I get too furious to be coherent. I've been doing this sort of work since 1989, and I have spent the great majority of my time working on diseases for which no good therapies existed. The rest of the time has been spent on new mechanisms, new classes of drugs that should (or should have) worked differently than the existing therapies. I cannot recall a time when I have worked on a real "me-too" drug of the sort that Light and Lexchin seem to think the industry spends all its time on.
That's because of yet a