About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
February 26, 2010
This isn't exactly med-chem, but its focus probably overlaps with the interests of a number of readers around here. I recently came across a copy of A Field Guide to Bacteria and enjoyed it very much. I don't think there's another book quite like it available: it describes where you're likely to find different varieties of bacteria (from hot springs to your fridge), how they behave in a natural environment (as opposed to a culture dish) and how to identify them by field marks, if possible. It's not written for microbiologists, but it can provide a different perspective even if you work in the field (since many people that do focus on pathogens - really a very small subset of bacteria, when you get down to it).
I'm already inspired to set up some Winogradsky columns with my kids, perhaps with some unusual chemical additives to see what happens. If we discover anything, I'll report back. . .
Category: Book Recommendations | General Scientific News | Infectious Diseases
For years now, drug companies and journalists have touted the new era of personalized medicine. This is one of those things that always seems to be arriving, but is taking its time getting here. The industry has sunk a huge pile of money into biomarker research, and it's safe to say that it hasn't paid off yet - although, at the same time, one still has to think that it should, eventually.
Nature Biotechnology has a good article that shows how tricky the whole business can be. HER2 is one of the more validated cancer biomarkers, and there's a drug (Herceptin) that's targeted specifically for breast cancer patients that express it. So how's that going? Not so well:
A recent study from the University of California, San Francisco, reveals that one in five HER2 tests gives the wrong answer1. Furthermore, the article, which reviews the medical literature, reports that as many as two-thirds of breast cancer patients who should be tested for HER2 are not, and consequently a significant fraction of women treated with Genentech's Herceptin (trastuzumab) have never been tested for HER2 overexpression.
The health benefit provider Wellpoint, of Indianapolis, might dispute that finding. According to Genentech staff scientist Mark Sliwkowski, the insurer has data showing that 98% of its breast cancer patients are tested. However, doctors differ in their views on testing before prescribing Herceptin. “Some doctors don't know how to interpret test results, they prefer just to prescribe it and assess the patient's progress,” says Michael Liebman of the patient stratification company Strategic Medicine of Kennett Square, Pennsylvania.
More than a decade after the drug received US Food and Drug Administration (FDA) approval, the personalized medicine paradigm clearly has holes. . .
That it does. As the article goes on to explain, there are doubts about how good many of the existing HER2 tests are, worries about how they don't always agree, questions about whether some HER2-negative patients might be benefiting from Herceptin anyway, and more questions about those results due to uncertainties about the tests. That's the state of the art right there, folks, and it's clear that we have a long way to go. I don't see any reason why biomarkers (of various kinds, not just genetic) won't help us figure out which patients should be getting which drugs, but don't let anyone tell you that we're there yet.
Category: Cancer | Clinical Trials | Regulatory Affairs
More job loss news to report, unfortunately. Pharmalot comes in with an item that fits with what I'm hearing, that Sanofi-Aventis appears to be making small cuts, over and over, at its various sites. There hasn't been a single big announcement that I've heard, but the company seems to be shrinking headcount nonetheless.
And I've also heard recently that AstraZeneca is ready to announce more layoffs, although I don't have a handle on the size. This appears to be some of the follow-through from their earlier nonspecific announcements - it looks as if they're finally going to start getting down to some details. Anyone with more information on either of these situations is welcome to add it in the comments. . .
Category: Business and Markets
February 25, 2010
Not as much time to blog this morning (and it's been hard getting into the site, since there are a lot of people who apparently want to know how to order some dioxygen difluoride). For one thing, I'm clearing a bunch of reactions out, and I've been devoting thought to how to do that in the laziest possible manner.
Maybe I should clarify that. What I mean is, how do I work up all these reactions quickly, in such a way as to make clean compounds that are worth testing, but spend the least amount of effort doing so? There are, of course, all sorts of brute-force ways to bang these things through, some of which would involve me not leaving my lab for the next three days or so, but I have other demands on my time. It's worth thinking about the most efficient way to do it.
Since these things I'm making all have acidic groups hanging off them, the most appealing idea I have right now is to use a basic resin to clean them up - as most med-chem types know, you can generally stick acidic compounds onto such resin, wash a lot of the crud off and throw that away, then bump your desired compounds off with some sort of acidic wash. This sort of solid-phase cleanup became popular in the combichem era, and has persisted for situations like this.
That's probably how I'll go, as opposed to, say, individually loading every single one of the compounds onto the HPLC machine. That would make me rather unpopular with the other people who might want to use that instrument before March is upon us, for one thing, and it would be complete overkill as well. These compounds are all pretty clean looking - a wash-and-rinse protocol should turn them out in good shape, and there's no need to use Super Ultimate Purification on them. (And besides, I'm making them all in reasonable quantity, which would bog down the HPLC even more).
An even more brainless way to do this workup would be to run every single compound through an automated column (like a Biotage). At least the HPLC has a liquid handler on it - I could set the thing up with a few rows of samples to inject, and walk away with some degree of confidence that it would run them. But the Biotage-type machines are usually one-at-a-time things, for larger samples. One batch of five grams of stuff would be perfect - two or three dozen at 100 mgs each, not so.
And all this makes me think of someone who used to work down the hall from me (no more clues than that!) I noticed that he was always cranking away in the lab, every time I went past. I mean, this guy looked like one of those multi-armed Hindu god statues, with each hand holding a round-bottom flask or a TLC plate. Impressive! Until I realized, after dealing with him a while, that the reason he was zipping around in there like a hamster was because he was doing everything in the most brutal and time-wasting way possible. He seemed to pick his reactions and protocols according to how much hand labor they involved: the more, the better.
I took a vow never to be him, and today I plan to live up to that. Measure twice, cut once and all that.
Category: Life in the Drug Labs
February 24, 2010
A correspondent writes in with an interesting and useful question: he's with a small company that believes that it's discovered a useful lead compound in an area where they're hard to find. But no one there has any experience with knocking on the doors of Big Pharma to talk about a deal, and they're wondering about the best way to go about it.
I (and people like me) can provide some general advice. But I know that there are consultants out there who've brokered things like this before and have both contacts and expertise that can help out. But that just kicks the problem along: how does a fledgling company find one of those? Some startups naturally begin with connections of this sort, but others don't. Anyone have some leads for them?
Category: Business and Markets
Well, this is interesting. Back when Steve Nissen was about to publish his meta-analysis on the safety of Avandia (rosiglitazone), he met with several GlaxoSmithKline executives before the paper came out. At the time, GSK was waiting on data from the RECORD study, which was trying to address the same problem (unconvincingly, for most observers, in the end). Nissen had not, of course, shown his manuscript to anyone at GSK, and for their part, the execs had not seen the RECORD data, since it hadn't been worked up yet.
Well, not quite, perhaps on both counts. As it happens, a reviewer had (most inappropriately) faxed a copy of Nissen's paper-in-progress to the company. And GSK's chief medical officer managed to refer to the RECORD study in such a way that it sounds as if he knew how it was coming out. How do we know this? Because Nissen secretly taped the meeting - legal in Ohio, as long as one party knows the taping is going on. At no point does anyone from GSK give any hint that they knew exactly what was in Nissen's paper. Here's some of it:
Dr. Krall asked Dr. Nissen if his opinion of Avandia would change if the Record trial — a large study then under way to assess Avandia’s risks to the heart — showed little risk. Dr. Krall said he did not know the results of Record.
“Let’s suppose Record was done tomorrow and the hazard ratio was 1.12. What does...?” Dr. Krall said.
“I’d pull the drug,” Dr. Nissen answered quickly.
The interim results of Record were hastily published in The New England Journal of Medicine two months later and showed that patients given Avandia experienced 11 percent more heart problems than those given other treatments, for a hazard ratio of 1.11. But the trial was so poorly designed and conducted that investigators could not rule out the possibility that the differences between the groups were a result of chance.
Somehow, I don't think that many pharma executives are going to agree to meetings with Nissen in his office in Cleveland after this. But I certainly don't blame him for making the tape, either.
Category: Cardiovascular Disease | Clinical Trials | Diabetes and Obesity | The Dark Side | Toxicology
Yesterday's "Things I Won't Work With" post has brought on calls to turn these (and some other parts of the blog) into a book. And you know, I'm game, actually - but I have no real contacts in the publishing world. If anyone out there in the readership knows a good agent, or knows someone who does, I'd be glad to have some contact information. Thanks!
Category: Blog Housekeeping
February 23, 2010
The latest addition to the long list of chemicals that I never hope to encounter takes us back to the wonderful world of fluorine chemistry. I'm always struck by how much work has taken place in that field, how long ago some of it was first done, and how many violently hideous compounds have been carefully studied. Here's how the experimental prep of today's fragrant breath of spring starts:
The heater was warmed to approximately 700 °C. The heater block glowed a dull red color, observable with room lights turned off. The ballast tank was filled to 300 torr with oxygen, and fluorine was added until the total pressure was 901 torr. . .
And yes, what happens next is just what you think happens: you run a mixture of oxygen and fluorine through a 700-degree heating block. "Oh, no you don't," is the common reaction of most chemists to that proposal, ". . .not unless I'm at least a mile away, two miles if I'm downwind." This, folks, is the bracingly direct route to preparing dioxygen difluoride, often referred to in the literature by its evocative formula of FOOF.
Well, "often" is sort of a relative term. Most of the references to this stuff are clearly from groups who've just been thinking about it, not making it. Rarely does an abstract that mentions density functional theory ever lead to a paper featuring machine-shop diagrams, and so it is here. Once you strip away all the "calculated geometry of. . ." underbrush from the reference list, you're left with a much smaller core of experimental papers.
And a hard core it is! This stuff was first prepared in Germany in 1932 by Ruff and Menzel, who must have been likely lads indeed, because it's not like people didn't respect fluorine back then. No, elemental fluorine has commanded respect since well before anyone managed to isolate it, a process that took a good fifty years to work out in the 1800s. (The list of people who were blown up or poisoned while trying to do so is impressive). And that's at room temperature. At seven hundred freaking degrees, fluorine starts to dissociate into monoatomic radicals, thereby losing its gentle and forgiving nature. But that's how you get it to react with oxygen to make a product that's worse in pretty much every way.
FOOF is only stable at low temperatures; you'll never get close to RT with the stuff without it tearing itself to pieces. I've seen one reference to storing it as a solid at 90 Kelvin for later use, but that paper, a 1962 effort from A. G. Streng of Temple University, is deeply alarming in several ways. Not only did Streng prepare multiple batches of dioxygen difluoride and keep it around, he was apparently charged with finding out what it did to things. All sorts of things. One damn thing after another, actually:
"Being a high energy oxidizer, dioxygen difluoride reacted vigorously with organic compounds, even at temperatures close to its melting point. It reacted instantaneously with solid ethyl alcohol, producing a blue flame and an explosion. When a drop of liquid O2F2 was added to liquid methane, cooled at 90°K., a white flame was produced instantaneously, which turned green upon further burning. When 0.2 mL of liquid O2F2 was added to 0.5 mL of liquid CH4 at 90°K., a violent explosion occurred."
And he's just getting warmed up, if that's the right phrase to use for something that detonates things at -180 °C (that's -300 Fahrenheit, if you only have a kitchen thermometer). The great majority of Streng's reactions have surely never been run again. The paper goes on to react FOOF with everything else you wouldn't react it with: ammonia ("vigorous", this at 100K), water ice (explosion, natch), chlorine ("violent explosion", so he added it more slowly the second time), red phosphorus (not good), bromine fluoride, chlorine trifluoride (say what?), perchloryl fluoride (!), tetrafluorohydrazine (how on Earth. . .), and on, and on. If the paper weren't laid out in complete grammatical sentences and published in JACS, you'd swear it was the work of a violent lunatic. I ran out of vulgar expletives after the second page. A. G. Streng, folks, absolutely takes the corrosive exploding cake, and I have to tip my asbestos-lined titanium hat to him.
Even Streng had to give up on some of the planned experiments, though (bonus dormitat Strengus?). Sulfur compounds defeated him, because the thermodynamics were just too titanic. Hydrogen sulfide, for example, reacts with four molecules of FOOF to give sulfur hexafluoride, 2 molecules of HF and four oxygens. . .and 433 kcal, which is the kind of every-man-for-himself exotherm that you want to avoid at all cost. The sulfur chemistry of FOOF remains unexplored, so if you feel like whipping up a batch of Satan's kimchi, go right ahead.
Update: note that this is 433 kcal per mole, not per molecule (per molecule, that would be an energy beyond even nuclear fission and fusion reactions; see here for the figures). Chemists almost always think of energetics in terms of moles, thus the confusion. It's still a ridiculous amount of energy to shed, and you don't want to be around when it happens.
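For the skeptical, the per-mole versus per-molecule distinction is a quick unit conversion. This sketch takes the 433 kcal/mol figure from the post; everything else is just standard physical constants:

```python
# Convert 433 kcal per mole of reaction (H2S + 4 FOOF, as described above)
# into the energy released by a single reaction event.
KCAL_TO_J = 4184.0        # joules per kcal
AVOGADRO = 6.022e23       # events per mole
J_PER_EV = 1.602e-19      # joules per electron-volt

dH_per_mole_J = 433 * KCAL_TO_J            # ~1.8e6 J/mol
per_event_J = dH_per_mole_J / AVOGADRO     # energy of one reaction event
per_event_eV = per_event_J / J_PER_EV

print(f"{per_event_eV:.1f} eV per reaction event")
```

That comes out around 19 eV per event: enormous for chemistry, but a single fission event releases on the order of 200 MeV, about ten million times more. Dividing by Avogadro's number is what keeps the figure on the chemical side of the ledger.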
So does anyone use dioxygen difluoride for anything? Not as far as I can see. Most of the recent work with the stuff has come from groups at Los Alamos, where it's been used to prepare national-security substances such as plutonium and neptunium hexafluoride. But I do note that if you run the structure through SciFinder, it comes out with a most unexpected icon that indicates a commercial supplier. That would be the Hangzhou Sage Chemical Company. They offer it in 100g, 500g, and 1 kilo amounts, which is interesting, because I don't think a kilo of dioxygen difluoride has ever existed. Someone should call them on this - ask for the free shipping, and if they object, tell them Amazon offers it on this item. Serves 'em right. Morons.
Category: Things I Won't Work With
February 22, 2010
The Senate report that leaked on Avandia (rosiglitazone) over the weekend has made plenty of headlines. It quotes an internal FDA report that recommends flatly that the drug be removed from the market, since its beneficial effects can be achieved by use of the competing PPAR drug Actos (pioglitazone), which doesn't seem to have the same cardiovascular risks. The two drugs have been compared (retrospectively) head to head, and Avandia definitely seems to have come out as inferior due to safety concerns.
There had been worries for several years about side effects, but the red flag went up for good in 2007, and the arguing has not ceased since then. According to another FDA document in the Senate report, there are "multiple conflicting opinions" inside the agency about what to do. The agency ordered GSK to set up a prospective head-to-head trial of Avandia and Actos, but other staffers insist that the whole idea is unethical. If the cardiovascular risks are real, they argue, then you can't expose people to Avandia just to find out how much worse it is. The trial is enrolling patients, but will take years to generate data, and Avandia will be generic by the time it reports, anyway. (Presumably, the only reason GSK is running it is because the drug would be taken off the market for sure if they didn't).
The FDA's internal debate is one issue here (as is the follow-up question about whether the agency should be restructured to handle these questions differently). But another one is GlaxoSmithKline's response to all the safety problems. Says that New York Times article:
In 1999, for instance, Dr. John Buse, a professor of medicine at the University of North Carolina, gave presentations at scientific meetings suggesting that Avandia had heart risks. GlaxoSmithKline executives complained to his supervisor and hinted of legal action against him, according to the Senate inquiry. Dr. Buse eventually signed a document provided by GlaxoSmithKline agreeing not to discuss his worries about Avandia publicly. The report cites a separate episode of intimidation of investigators at the University of Pennsylvania.
GlaxoSmithKline said that it “does not condone any effort to silence” scientific debate, and that it disagrees with allegations that it tried to silence Dr. Buse. Still, it said the situation “could have been handled differently.”
Well, yeah, I should think so. I don't know what the state of the evidence was as early as 1999, but subsequent events appear to have vindicated Buse and his concerns. And while you can't just sit back and let everyone take shots at your new drug, you also have to be alert to the possibility that some of the nay-sayers might be right. We honestly don't know enough about human toxicology to predict what's going to happen in a large patient population very well, and companies need to be honest with the public (and themselves) about that.
Category: Diabetes and Obesity | Regulatory Affairs | Toxicology
The New York Times is starting a series of articles on the clinical trials of a recent B-Raf inhibitor (from Plexxikon and Roche, PLX4032). The first installment is an excellent look at what early-stage clinical research is like in this field. For example:
Typically, Phase 1 trials are limited to a few dozen patients and end when the dose reaches the point where side effects like rashes and diarrhea make patients too uncomfortable.
Dr. Flaherty and Dr. Chapman started the first three patients on 200 milligrams per day. After two months with no side effects — and no response — they doubled it.
Two more months passed, still nothing. They gave three more patients 800 milligrams, the equivalent of the dose that made tumors stop growing in mice. Even shrinking tumors, the doctors knew, would not mean the cancer had been cured but might at least offer a reprieve.
Dr. Flaherty pounced on the scans when they arrived. In some patients, tumors had remained the same size. “Maybe we’re starting to see something,” he could not help thinking. But at the next set of scans, the disease had progressed. On conference calls, Dr. Nolop sometimes referred to those patients as “responders.”
“They’re not responders,” Dr. Flaherty gently corrected him: under the accepted definition, tumors had to shrink to qualify patients as responders.
By the time they had doubled the dose four times, Dr. Flaherty could not help wondering if the targeted therapy skeptics were right. Dr. Chapman, crisp and businesslike on the weekly calls, supplied no comfort. He pointed out new research that B-RAF was mutated even in benign moles, and therefore could not be the key driver in melanoma. . .
What everyone involved in this work has to deal with is living between two very different mental states: you have to see people who are dying, and who you will probably not be able to help, even with your best efforts. But it's also possible that the next new thing you try might be the thing that keeps some of them alive. It's a hard place to work.
Back here in early research we don't see the patients, of course (which is good, since I'm pretty sure I couldn't take it). But we also have the same narrow path to walk: most of the compounds we make aren't drug candidates. Most of the drug candidates we send on for development fail. But the answer to that is not to stop making drug candidates, because every so often, something works.
Category: Cancer | Clinical Trials
February 19, 2010
A double complaint this morning, and both from the same literature item - if I were charging anything for the blog, I'd say that it's delivering value for the money. At any rate, the first kvetch is something that I know that many chemists have noticed when reading more biology/medical-oriented journals. You'll see some paper that talks about a new compound that does X, Y, and Z. It'll be named with some sort of code, and they'll tell you all about its interesting effects. . .but they don't get around to actually telling you what the damned stuff is.
As I say, this is a chemist's complaint. Many biologists are fine stipulating that there's a compound that will do these interesting things, because they're mostly interested in hearing about the interesting things themselves. It could just be Compound X as far as they're concerned. But chemists want to see what kind of structure it is that causes all these publication-worthy results, and sometimes we go away disappointed.
Or we have to dig. Take this PNAS paper on a broad-spectrum antiviral compound, LJ001. It looks quite interesting, with effects on a number of different viral types, and through a unique mechanism that targets viral membranes. But what is it? You'll look in vain through the whole paper to find out - that compound is LJ001 to you, Jack. You have to go to the supplemental material to find out, and to page 10 at that.
And that brings up the second complaint. LJ001 turns out to be a rhodanine, and regular readers will note that earlier this month some time was spent here talking about just how ugly and undesirable those are. It's very, very hard to get anyone in the drug business to take a compound in that class seriously, because they have such a poor track record. Looking over the small SAR table provided, I note that if you switch that thioamide group (the part that the chemists hate the most) to a regular amide, turning the thing into a thiazolidinedione, you lose all the activity.
TZDs aren't everyone's favorite group, but at least they've made it into marketed drugs. Rhodanines are no one's favorite group, and it would be a good thing if the authors of these papers would realize that, or at least acknowledge it if they do. It's not an irrational prejudice.
Category: Drug Assays | The Scientific Literature
February 18, 2010
I've been meaning to write about this paper in PNAS for a while. The authors (from Caltech and the Weizmann Institute) are calling for a more quantitative take on biological questions. They argue that modern techniques are starting to give us meaningful numbers to work with, and that we're getting to the point where this perspective can be useful. A web site, Bionumbers, has been set up to provide ready access to data of this sort, and it's well worth some time just for sheer curiosity's sake.
But there's more than that at work here. To pick an example from the paper, let's say that you take a single E. coli bacterium and put it into a tube of culture medium, with only glucose as a carbon source. Now, think about what happens when this cell starts to grow and divide, but think like a chemist. What's the limiting reagent here? What's the rate-limiting step? Using the estimates for the size of a bacterium, its dry mass, a standard growth rate, and so on, you can arrive at a rough figure of about two billion sugar molecules needed per cell division.
Of course, bacteria aren't made up of glucose molecules. How much energy gets used up converting that carbon into amino acids and thence to proteins (the biggest item on the ledger by far, it turns out), into lipids, nucleic acids, and so on? What, in other words, is the energetic cost of building a bacterium? The estimate is about four billion ATPs. Compare that to those two billion sugar molecules, consider that you can get up to 30 ATPs per sugar under aerobic conditions, and you can see that there's a ten to twentyfold mismatch here.
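That mismatch is easy to reproduce on the back of an envelope. The numbers below are the post's rough estimates, not measured values, and the "30 ATP per glucose" figure is an upper bound that ignores the carbon actually incorporated into biomass:

```python
# Back-of-the-envelope version of the ATP/glucose comparison.
glucose_for_carbon = 2e9   # glucose molecules per division, from dry-mass estimate
atp_for_synthesis = 4e9    # ATP estimated to polymerize the cell's components
atp_per_glucose = 30       # upper bound under aerobic respiration

# If every imported glucose were burned purely for energy:
atp_available = glucose_for_carbon * atp_per_glucose   # 6e10 ATP

mismatch = atp_available / atp_for_synthesis
print(f"~{mismatch:.0f}-fold more ATP available than biosynthesis needs")
```

The ratio lands around fifteen, squarely in the "ten to twentyfold" range quoted, which is what points to a large unaccounted-for energy sink like membrane maintenance.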
Where's all the extra energy going? The best guess is that a lot of it is used up in keeping the cell membrane going (and keeping its various concentration potentials as unbalanced as they need to be). What's interesting is that a back-of-the-envelope calculation can quickly tell you that there's likely to be some other large energy requirement out there that you may not have considered. And here's another question that follows: if the cell is growing with only glucose as a carbon source, how many glucose transporters does it need? How much of the cell membrane has to be taken up by them?
Well, at the standard generation time in such media of about forty minutes, roughly 10 to the tenth carbon atoms need to be brought in. Glucose transporters work at a top speed of about 100 molecules per second. Compare the actual surface area of the bacterial cell with the estimated size of the transporter complex. (That's about 14 square nanometers, if you're wondering, and thinking of it in those terms gives you the real flavor of this whole approach). At six carbons per glucose, then, it turns out that roughly 4% of the cell surface must be taken up with glucose transporters.
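The same arithmetic can be sketched in a few lines. The cell surface area below is my own assumed round number (the post doesn't give one), so the exact percentage shifts with the cell size and transporter speed you pick, but the few-percent conclusion is robust:

```python
# Transporter estimate; inputs are the rough figures from the post, plus an
# assumed E. coli membrane area (a few square microns) that the post omits.
carbons_needed = 1e10          # carbon atoms per division
generation_time_s = 40 * 60    # 40-minute doubling time, in seconds
carbons_per_glucose = 6
transporter_rate = 100         # glucose molecules per second per transporter
transporter_area_nm2 = 14      # footprint of one transporter complex
cell_surface_nm2 = 4e6         # assumed: ~4 square microns of membrane

glucose_flux = carbons_needed / carbons_per_glucose / generation_time_s  # ~7e5 per s
n_transporters = glucose_flux / transporter_rate                         # ~7000
fraction = n_transporters * transporter_area_nm2 / cell_surface_nm2

print(f"~{fraction:.1%} of the membrane given over to glucose transporters")
```

A few thousand transporters covering a couple of percent of the membrane: change the assumed surface area and you can move that toward the post's 4%, but either way it's a surprisingly large slice of real estate for a single nutrient.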
That's quite a bit, actually. But is it the maximum? Could a bacterium run with a 10% load, or would another rate-limiting step (at the ribosome, perhaps?) make itself felt? I have to say, I find this manner of thinking oddly refreshing. The growing popularity of synthetic biology and systems biology would seem to be a natural fit for this kind of thing.
It's all quite reminiscent of the famous 2002 paper (PDF) "Can a Biologist Fix a Radio?", which called (in a deliberately provocative manner) for just such thinking. (The description of a group of post-docs figuring out how a radio works in that paper is not to be missed - it's funny and painful/embarrassing in almost equal measure). As the author puts it, responding to some objections:
One of these arguments postulates that the cell is too complex to use engineering approaches. I disagree with this argument for two reasons. First, the radio analogy suggests that an approach that is inefficient in analyzing a simple system is unlikely to be more useful if the system is more complex. Second, the complexity is a term that is inversely related to the degree of understanding. Indeed, the insides of even my simple radio would overwhelm an average biologist (this notion has been proven experimentally), but would be an open book to an engineer. The engineers seem to be undeterred by the complexity of the problems they face and solve them by systematically applying formal approaches that take advantage of the ever-expanding computer power. As a result, such complex systems as an aircraft can be designed and tested completely in silico, and computer-simulated characters in movies and video games can be made so eerily life-like. Perhaps, if the effort spent on formalizing description of biological processes would be close to that spent on designing video games, the cells would appear less complex and more accessible to therapeutic intervention.
But I'll let the PNAS authors have the last word here:
"It is fair to wonder whether this emphasis on quantification really brings anything new and compelling to the analysis of biological phenomena. We are persuaded that the answer to this question is yes and that this numerical spin on biological analysis carries with it a number of interesting consequences. First, a quantitative emphasis makes it possible to decipher the dominant forces in play in a given biological process (e.g., demand for energy or demand for carbon skeletons). Second, order of magnitude BioEstimates merged with BioNumbers help reveal limits on biological processes (minimal generation time or human-appropriated global net primary productivity) or lack thereof (available solar energy impinging on Earth versus humanity's demands). Finally, numbers can be enlightening by sharpening the questions we ask about a given biological problem. Many biological experiments report their data in quantitative form, and in some cases, as long as the models are verbal rather than quantitative, the theory will lag behind the experiments. For example, if considering the input-output relation in a gene-regulatory network or a signal-transduction network, it is one thing to say that the output goes up or down, it is quite another to say by how much."
Category: Biological News | Who Discovers and Why
February 17, 2010
Merck reported earnings this week, and dropped the other shoe: they're going to make the Schering-Plough merger work by trimming the head count at the company by 15%. Where are the cuts coming from?
As of Dec. 31, 2009, Merck had approximately 100,000 employees. As part of the first phase of its Merger Restructuring Program, by the end of 2012, Merck expects to reduce its total workforce by approximately 15 percent across all areas of the combined company worldwide. The company also plans to eliminate approximately 2,500 vacant positions as part of the first phase of the program. The reductions will primarily come from the elimination of duplicative positions in sales, administrative and headquarters organizations, as well as from the consolidation of certain manufacturing facilities and research and development operations.
Merck said that certain actions, such as the ongoing reevaluation of manufacturing and research and development facilities worldwide, have not yet been completed, but will be included later this year in other phases of the Merger Restructuring Program. Merck also said it will continue to hire new employees in strategic growth areas of the business throughout this period.
Well, OK, the cuts are coming from. . .everywhere. Looks like sales and administration will be first, since those are the easiest to figure out, with R&D coming along later. It doesn't appear that there's any hard information to be had, which doesn't give anyone much to work with. Basically, everyone in research at Merck can, I suppose, just hang out for the rest of the year waiting to hear something. That'll crank up the productivity, as the scientists at Pfizer, GSK, AstraZeneca et al. can tell you.
+ TrackBacks (0) | Category: Business and Markets
The Wall Street Journal has a good article on drug patents in India. Many readers will remember the days when those three words didn't have much chance of appearing together in a sentence, but that all changed in 2005, when the country changed its laws to recognize patents on chemical substances as well as on processes:
When India finally adopted its expanded patent law, it was widely hailed and multinational firms began expanding with gusto. They now sell their latest branded medicines here, expecting the burgeoning middle class and slowly growing health insurance system will pay for them. Pharmaceutical manufacturing has boomed, as has the clinical trial industry.
But little noticed at the time was that the new law sets a higher bar than Europe and the U.S. for approving patents, says D.G. Shah, head of the Indian Pharmaceutical Alliance, a Mumbai-based industry group.
Among the tougher provisions is one that says patents will be granted only when products are more efficacious—a provision the Indian patent office has used to deny several patents, he says.
Improved efficacy, as you might imagine, can turn out to be in the eye of the beholder. Glivec, Tarceva, Viread, and (most recently) Nexavar are drugs that have fallen into this particular pothole. And I have to say, hearing someone from the Indian drug industry lecture about patent quality is a bit hard to take. (Update: why is that, you ask? Look here for one explanation.)
"The U.S. would grant a patent to a piece of toilet paper," says Amar Lulla, chief executive of Cipla, the Indian generics drugmaker. "Just because the U.S. granted a patent, doesn't mean it should be valid."
In its Tuesday decision to dismiss Bayer's appeal, the Delhi High Court made a blistering attack on the company's efforts to block copies of its cancer medicine Nexavar. Calling the appeal "a speculative foray," the court added that "the petitioner, no doubt, is possessed of vast resources and can engage in such pursuits."
Now, I'm not saying that we don't have some poor quality patents. Every country's patent office has allowed junk to issue; the key thing is to try to cut the junk down to a minimum. But I still can't quite figure out what the Indian courts are up to (other than protecting their own generic industry and forcing down the price of drugs, of course). Take a look at this story from the Times of India on the recent Nexavar ruling. I realize that it's hard to tell if it's a news item or an editorial, but have at it anyway.
That article decries "patent linkage", which seems to be the idea that marketing approval for a drug might have something to do with its patent status. Well. . .I sort of thought it was supposed to, when it comes to generic versions. What else is a patent good for, if not for a period of exclusivity? If I'm interpreting this correctly, the Indian courts regard drug patents as granting an exclusive period for a company to market its drug under its own particular brand name - but anyone else can hop right in with the same substance, of course; the patent makes no difference there.
Am I grasping this correctly? If not, I'll be glad to be set straight. And if I am, then what, exactly, is Indian drug patent law supposed to accomplish? And why would any drug-inventing company be so foolish as to rely on it?
+ TrackBacks (0) | Category: Business and Markets | Patents and IP
February 16, 2010
Update: fixed formatting problems with the post. Not sure what happened!
The wrangling over this issue has been fierce, and now it's even more so. Here's another paper from the UK, just coming out today, that has found no association between Chronic Fatigue Syndrome and xenotropic murine leukaemia virus-related virus (XMRV) in diagnosed patients. The bottom line:
In summary, we have studied 299 DNA samples and 565 serum samples for evidence of XMRV infection. We have not identified XMRV DNA in any samples by PCR, however, some serum samples were able to neutralise XMRV reactivity in our assay. Only one of these positive sera came from a CFS patient, implying that there is no association between XMRV infection and CFS.
I have, as they say, no dog in this fight, so I'm going to sit back while things get sorted out. But something clearly needs to get sorted, because there are claims in this area that seem to be completely irreconcilable.
+ TrackBacks (0) | Category: Infectious Diseases
I hesitate to open up the whole topic of health care reform again. (For the record, I think that the bills, in their current forms, are dead). But one thing that struck me was how early the pharma industry trade group PhRMA got involved in the negotiations, and the sort of deal that they struck.
I note that Billy Tauzin, head of the organization at the time, seems to have been instrumental in those negotiations, and that he has now left his position. You can read this account of the whole affair, courtesy of the Sunlight Foundation, and decide to what extent those two statements are related. I also note that, if that article is accurate, at least $100 million dollars was spent by the trade group in the process.
I get emails from people at PhRMA once in a while. I'm expecting the next one by this afternoon. . .
+ TrackBacks (0) | Category: Current Events | Regulatory Affairs
A reader sends along this item about St. Louis University starting a research institute to try to pick up some of the ex-Pfizer people from the area. It's not going to make a big impact, at least at first (if you read the article, you notice that they're starting out by hiring 12 people). But it's better than nothing, and it's a path that a number of ex-industry folks have been able to follow.
I have to applaud the academic institutions that see drug industry employees as a valuable resource. What I wonder about, though, is if Washington University (which overshadows SLU in that region) has picked anyone up - anyone from the St. Louis area have any details? As I say, this isn't going to solve the employment crisis we're going through, but it can't hurt - and by keeping a research culture alive in some places where it's been getting thinned out, it might lead to some start-up companies when the financial situation clears up.
+ TrackBacks (0) | Category: Business and Markets
So I have a number of people trying to set me straight about Twitter. . .well, I'll see what I can do. For some time I've had it set up just to take 140 characters off the top of each post I do here, to serve as a sort of "I've posted something" alert, and that'll continue. I hardly follow anyone there, true. . .and many of them are non-chemical sources (Iranian politics and the like). What I probably need to do is set up more than one Twitter account, with one reserved for blogging and science. But even then, I don't see how I'll have time to look at it during the day, so people who fire messages back to me via Twitter are still going to come away disappointed.
+ TrackBacks (0) | Category: Blog Housekeeping | The Scientific Literature
February 12, 2010
My schedule is all over the place today - events at my kids' school, new projects at work, etc. But I do want to put a quick question out to people: I keep seeing various scientific journals, etc. proudly advertising that they're on Twitter, Facebook, etc. So, does anyone get any use out of that? I can't say that I do, but perhaps I'm just set in my ways, if reading journals by RSS feeds can be called "set in my ways".
I'm willing to be set straight on this, but whenever I see these logos and notices, I can't help but see some editorial meeting that I imagine went on. "Look, everybody's on Twitter," says one person around the table. "Well, I'm not," says an editor, "and I can't for the life of me figure out why I should be. Won't we look idiotic 'tweeting' at people, or whatever it is?" "Did I mention that it won't cost us anything to get all Web 2.0-ed up?" says the first guy, and the motion is carried. . .
+ TrackBacks (0) | Category: The Scientific Literature
February 11, 2010
The company says that it cut 7% out of the R&D budget last year, and looks to keep on this same path this year. And that goes for merger and acquisition activity, although they seem to be staying away from the Great Big Deals. Here's a statement from their CEO in a recent interview:
“The best predictor of what we’ll do in 2010 is what we’ve done in 2009,” Viehbacher said in an interview in Paris. “When you do smaller to mid-sized deals, it is easier to search, complete a deal, and then hand it over to your line management and move on to the next deal. It’s when you do a big deal that the whole company gets bogged down in deciding whose e-mail system you’re going to use or where headquarters are going to be.”
All of this brings up a question. I've been hearing talk (which I haven't been able to verify) of re-orgs and layoffs in their US research sites recently. Anyone have any details?
+ TrackBacks (0) | Category: Business and Markets | Current Events
I should also note that the Royal Society of Chemistry is starting its own med-chem communications journal, MedChemComm. Along with the new ACS journal, this now means that medicinal chemists have more places to publish their work than ever before.
Which is a bit of a sour thought, considering that the number of industry-employed medicinal chemists has been dropping for several years now, and the end does not appear to be in sight. We'll see how this affects the publishing world (admittedly, a minor worry). In the short term, people are probably trying to make their patent and publication records look as impressive as possible, so I would think that fewer and fewer publishable results are sitting around in desk drawers. In the long term, though, are we going to see fewer papers in general? (Or failing that, more from academic labs?)
+ TrackBacks (0) | Category: The Scientific Literature
February 10, 2010
We've been talking a lot around here about small companies versus large ones, the merits of different therapeutic areas, and so on. So here's a question: if you were starting a small drug company today, where would you concentrate its efforts?
Oncology? Ten years ago, you could make that case, I think. But now everyone's piled into the area, so you'd have to have a real edge to make a go of it. For one thing, finding patients for clinical trials is a major problem. Your best shot here would be really obscure varieties of cancer, I'd think, unless you've got something really major. And how do you ever know if you've got something really major or not in this area until you get to the clinic, anyway?
Anti-infectives? There's certainly room for some new niche products here, but that's what they're going to be. And this is a surprisingly difficult area to make headway in, if you haven't worked in it before. Nothing's going to be an almighty blockbuster here (because nothing new is going to be a frontline therapy), but there is money to be made.
Cardiovascular and metabolics? I don't see how, and I barely see why, unless you've got the miracle HDL-raising pill up your sleeve. Diabetes, for its part, has been a fine area over the last ten or twenty years, but the safety criteria for a new therapy are now very stiff (and the market is pretty well covered, from several different angles). Not recommended, I'd say.
Alzheimer's? Good luck! Man, is there ever an unserved market here, but it's unserved for a lot of damned good reasons. The same goes for a number of other CNS indications. This whole area is a tightrope of risk and reward. Both are huge.
Or would you go the Genzyme route, making huge amounts by helping out people (a few people) that no one else can help at all? Again, this presupposes that you have some really good idea about how to approach these orphan diseases, and it's going to be tough to make a whole company out of them (since they're spread over such disparate therapeutic specialties). But this would seem to be feasible, with some luck.
Suggestions are welcome in the comments. I'm definitely not planning on starting a company myself, but I think that we need as many as possible, and perhaps some ideas will trigger something for someone in a position to act. . .
+ TrackBacks (0) | Category: Business and Markets | Drug Industry History
Here's a quick question for those of you that order a lot of odd little compounds. A correspondent tells me that he's been ordering resupplies from some of the usual suspects in this area (ChemBridge, ChemDiv - you know the sorts of companies, if you're in the med-chem business). And a higher than usual percentage of compounds are coming back as "Unavailable". . .only to show up available, at a significantly higher price, from Aurora.
Now, I certainly don't know the business arrangements between all these companies. I know that some of the compounds themselves are clearly coming from the same original sources, often somewhere in Russia, and make their way into a number of catalogs at once. But is this Aurora business a coincidence. . .or a business model? Anyone seen this happen personally?
+ TrackBacks (0) | Category: Life in the Drug Labs
February 9, 2010
Friday's post has brought in a lot of comments, and they're still piling up. I wanted to address a few of the more frequent ones, though, out here on the front page.
First off, the idea that a bunch of stock analysts could have a useful opinion on a pharma company's return on investment doesn't seem to strike many people as plausible. Variations on "What do they know about this business?" and "Aren't these the same geniuses that wiped out the mortgage bond market?" have come up numerous times. My answer to the latter is no, they aren't. The stock and industry analysts are a different bunch entirely. That's not to say that they can't be stupid, or make mistakes (they do!) But these aren't the people who thought that they had all the risks figured for interest-rate swaps and collateralized debt obligations. If you want to take issue with the industry analysts, you'll have to fight them on their own territory.
There's more substance to the "What do they know" objection, but still (in my view) not enough. What they know is what's been made public, of course, and as we in the industry know, that's not everything. But that doesn't make Wall Street's case any weaker this time, as far as I can tell. Morgan Stanley and their ilk are not missing any of the successful projects from inside big pharma - those all get aired out thoroughly. If they're short on data, it's on how many projects fail, and how much they cost, and those numbers aren't going to make the ROI look any better. Meanwhile, most all the inlicensed compounds actually get announced, since they're material transactions for someone, so far fewer of those escape notice. I don't like the Morgan Stanley point of view, not at all, but dislike is not a refutation.
Another thing to remember is that the people with the best figures on ROI are the upper management of the companies involved, and these are the people who are slashing head count and outsourcing wherever they can. And we have to make a distinction here, between diagnosis and treatment. We can disagree on whether this is the proper response (although I'm kind of stuck for alternatives), but is it still possible to argue that these CEOs and the like are reacting to something that isn't there? Something is precipitating a lot of large, painful, and nasty decisions, and I think that it's probably the very concerns about cost that we've been talking about. We need to separate the argument about whether those figures are real from the argument about what's been done in response.
+ TrackBacks (0) | Category: Business and Markets | Drug Industry History
February 8, 2010
Well, I have no particular need to make azo-linked compounds (see this morning's post for one reason!). And I have to say, although it's mechanistically interesting, I definitely feel no desire to make them by combining a hydroperoxide and a diazonium salt in one pot. This is not a moment destined to take its place alongside the legendary invention of the chocolate/peanut butter cup.
+ TrackBacks (0) | Category: Chemical News
There's an article out from a group in Australia on the long-standing problem of "frequent hitter" compounds. Everyone who's had to work with high-throughput screening data has had to think about this issue, because it's clear that some compounds are nothing but trouble. They show up again and again as hits in all sorts of assays, and eventually someone gets frustrated enough to flag them or physically remove them from the screening deck (although that last option is often a lot harder than you'd think, and compound flags can proliferate to the point that they get ignored).
The larger problem is whether there are whole classes of compounds that should be avoided. It's not an easy one to deal with, because the question turns on how you're running your assays. Some things are going to interfere with fluorescent readouts, by absorbing or emitting light of their own, but that can depend on the wavelengths you're using. Others will mung up a particular coupled assay readout, but leave a different technology untouched.
And then there's the aggregation problem, which we've only really become aware of in the past few years. Some compounds just like to stick together into huge clumps, often taking the assay's protein target (or some other key component) with them. At first, everyone thought "Ah-hah! Now we can really scrub the screening plates of all the nasties!", but it turns out that aggregation itself is an assay-dependent phenomenon. Change the concentrations or added proteins, and whoomph: compounds that were horrible before suddenly behave reasonably, while a new set of well-behaved structures has suddenly gone over to the dark side.
This new paper is another attempt to find "Pan-Assay Interference" compounds or PAINs, as they name them. (This follows a weird-acronym tradition in screening that goes back at least to Vertex's program to get undesirable structures out of screening collections, REOS, for "Rapid Elimination of, uh, Swill"). It will definitely be of interest to people using the AlphaScreen technology, since it's the result of some 40 HTS campaigns using it, but the lessons are worth reading about in general.
What they found was (as you'd figure) that while it's really hard to blackball compounds permanently with any degree of confidence, the effort needs to be made. Still, even using their best set of filters, 5% of marketed drugs get flagged as problematic screening hits - in fact, hardly any database gives you a warning rate below that, with the exception of a collection of CNS drugs, whose properties are naturally a bit more constrained. Interestingly, they also report the problematic-structure rate for the collections of nine commercial compound vendors, although (frustratingly) without giving their names. Several of them sit around that 5% figure, but a couple of them stand out with 11 or 12% of their compounds setting off alarms. This, the authors surmise, is linked to some of the facile combinatorial-type reactions used to prepare them, particularly ones that leave enones or exo-alkenes in the final structures.
So what kinds of compounds are the most worrisome? If you're going to winnow out anything, you should probably start with these: Rhodanines are bad, which doesn't surprise me. (Abbott and Bristol-Myers Squibb have also reported them as troublesome). Phenol Mannich compounds and phenolic hydrazones are poor bets. And all sorts of keto-heterocycles with conjugated exo alkenes make the list. There are several other classes, but those are the worst of the bunch, and I have to say, I'd gladly cross any of them off a list of screening hits.
But not everyone does. As the authors show, there are nearly 800 literature references to rhodanine compounds showing biological effects. A conspicuous example is here, from the good folks at Harvard, which was shown to be rather nonspecifically ugly here. What does all this do for you? Not much:
"Rather than being privileged structures, we suggest that rhodanines are polluting the scientific literature. . .these results reflect the extent of wasted resources that these nuisance compounds are generally causing. We suggest that a significant proportion of screening-based publications and patents may contain assay interference hits and that extensive docking computations and graphics that are frequently produced may often be meaningless. In the case of rhodanines, the answer set represents some 60 patents and we have found patents to be conspicuously prevalent for other classes of PAINS. This collectively represents an enormous cost in protecting intellectual property, much of which may be of little value. . ."
+ TrackBacks (0) | Category: Drug Assays | Drug Industry History | The Scientific Literature
February 5, 2010
I hate to do another post on this subject, after a good part of the week has been devoted to layoff news and the like, but this one is too much to ignore. A reader sent along this link, which quotes a Morgan Stanley appraisal of the pharma industry as an investment. Here's what they're telling their clients:
". . .Still significant value in Pharma - we see material upside to ROIC [return on invested capital], earnings and multiples as Pharma withdraws from most internal small-molecule research and reallocates capital to in-licensing and other non-pharma assets. Worsening generic pressure and R&D management changes lead us to expect material cuts to internal small research spend (~40% total R&D) in 2010/11, after a decade of dismal internal R&D returns. We expect AstraZeneca and Sanofi-Aventis to be among the leaders in externalizing research, and this is a key driver of our upgrade of AstraZeneca today to Overweight.
Reinvestment of internal research savings into in-licensing will yield three times the likely return, we calculate. Under in-licensing deals, downside risk for pharma companies is currently materially lower than for internally developed drugs. Although upside is also capped by pay-aways and milestone obligations, the net present value of these payments is more than offset by the lower risk-adjusted invested capital. Over one-third of pharma R&D spend is in pre-phase II, where the probability of reaching the market is <10%. Our proprietary analysis indicates that, unless the probability of an in-house molecule reaching the market is 30% or more, the risk-adjusted economic value added, or EVA, is three times higher under the external research model, with greater predictability."
It could be said in fewer words, but it's all there. If you're looking for the reason the big companies are doing what they're doing, look no further. Agree with it or not, there's a case to be made - and there's Morgan Stanley, making it - that the cost of running new drug projects in big pharma is just too high relative to the risks of failure. Those returns, in fact, are calculated to be off by a factor of three.
You may not believe that factor, and I have to say, I found it hard to believe myself. But let's say the Morgan Stanley folks have their numbers off. Perhaps it's only twice as profitable to bring in outside drugs as it is to develop them internally. Don't believe that one, either? Maybe it's only 25% more profitable - can you imagine making a move that would increase your company's return on investment by 25%? Industries get remade by such changes at the margin, and this one is remaking ours. Why do we have any internal R&D left at all, if those figures are anywhere near right?
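To make that sort of arithmetic concrete, here's a toy sketch of the kind of risk-adjusted comparison the Morgan Stanley note seems to be making. Every number below (the payoff, the program cost, the pay-away, and the success probabilities) is a hypothetical illustration of my own, not anything from their actual model - the point is only to show how a low early-stage success rate inflates the risk-adjusted cost of an internal program.

```python
# Toy model: risk-adjusted return for internal R&D vs. in-licensing.
# All figures below are hypothetical illustrations, not Morgan Stanley's numbers.

def risk_adjusted_return(payoff, invested, p_success):
    """Expected payoff per unit of invested capital, scaled by success probability."""
    return (payoff * p_success) / invested

payoff = 2000.0   # value of a marketed drug, in arbitrary units
cost = 100.0      # capital sunk into one program

# Internal program: pre-phase II odds of reaching market, per the quote, are <10%.
internal = risk_adjusted_return(payoff, cost, p_success=0.10)

# In-licensed candidate: the upside is capped by milestones and royalties
# (here, 800 units paid away), but the molecule has already survived the
# worst of early attrition, so its success odds are assumed much higher.
external = risk_adjusted_return(payoff - 800.0, cost, p_success=0.50)

print(f"internal ROI multiple: {internal:.1f}")   # 2.0
print(f"external ROI multiple: {external:.1f}")   # 6.0
print(f"external/internal ratio: {external / internal:.1f}")   # 3.0
```

With these made-up numbers, the external route comes out three times ahead - and the internal program only catches up once its own success probability reaches 30%, which is exactly the threshold the quote mentions.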
Well, no one's tried to run a large company entirely by in-licensing, and I think that there are a lot of reasons why that wouldn't work. (For one thing, I don't think that there are enough things to in-license, and if one or more large companies announced that they were doing that exclusively, the price of each deal would go right up). And there needs to be some internal expertise left, if only to evaluate those external drug candidates to make sure you're not being taken. But still. All this means is that internal R&D will stay around, but it has to get cheaper and will very likely get smaller.
We can argue about the assumptions behind all this, but there's no doubt that a compelling business case can be made for this world view. Anyone who wants to argue differently - and a lot of us do - will have to come up with solid numbers and reasoning for why it just ain't so. I'm not sure such numbers exist.
There are many corollaries to this line of thought. One of them - and I hate to bring this up, considering all the horrible layoff news recently - is that one of the most psychologically comforting theories that we in R&D have for our present fix is likely wrong. I refer to the "Evil Clueless MBA CEO" theory, which has its satisfactions, but is a hazardous way to think. It is always dangerous to assume that people who do things you disagree with are doing them because they're just idiots or because they're innately malicious. In general, I'd say that the first explanation to jettison is malice, followed by stupidity (Hanlon's Razor). What that leaves you with is that these actions, stupid and malicious though they may appear, are probably being done for reasons that appear valid to the people doing them. I know, I know - some of these reasons are things like "So I can keep my high-paying CEO job", and we can't ignore that one. But a good way to lose a high-paying CEO job is to try to tell your board of directors (and your shareholders) why you're going to pass up an opportunity to get three times your ROIC.
Another thing to think about is, if these cost estimates are right, how did we get here? The best reason I can think of for such a disparity is that small companies (the source of these in-licensed drugs and projects) are often betting their entire existence on these ideas. They are very strongly motivated to do whatever they can do to get them to work (sometimes a bit too motivated, but that risk is already factored in), and if things don't pan out, they usually disappear. Basically, the in-licensing world unloads the risk from the large pharma company (and its shareholders) onto the investors in the smaller ones. The cost disparity will exist for as long as people are willing to back smaller companies. Now, this isn't to say that the big companies are always going to do a great job picking what to bring in. We've been talking a lot, for example, about the GSK-Sirtris deal, and that one may or may not work out. But the idea of doing big in-licensing deals in general - that's a different story, no matter how any individual company manages to execute it.
What that also means is that more of us are going to end up working for those smaller companies (which is something that I, and several commenters around here, have been saying for a while). If the large pharma outfits are going to devote more money to in-licensing, there will then be more opportunities for people developing things for them to in-license. The rough part is that all these structural changes in the drug industry are taking place (largely by coincidence, I think) during economic conditions which make funding such companies difficult.
And then there's the internal cost-cutting, for the R&D that's actually staying at the big companies. That, of course, generally means sending a lot of it to China, or wherever else it can be done more cheaply. And that's going to continue as long as it can indeed be done more cheaply, which means "not forever". Costs are already rising in China and India, although they have a good ways to go before they catch up to the US and Europe. I know that we can argue about how well that whole idea is going to work - there are clearly inefficiencies to doing a lot of your work through outsourcing, but as long as those don't eat up all the cost savings, it's still going to keep happening.
This, as a side note, is why I think that one of the suggestions that gets floated here in the comments from time to time, the idea of forming a "medicinal chemist's union", is completely useless. Unions form when workers have the leverage to preserve a higher-cost business model. In the end, the big industrial concerns of the early 20th century had to have workers, and they had to have them in certain locations, so the unions always had the threat of going on strike. An attempt to lower the boom under these conditions would result in everything going to China, and damned quickly.
So. . .what's happening to us, and to our industry, is not really mysterious. Our cost structure does not look to be supportable, and since there are cheaper alternatives that appear to be feasible, those will get tried. The disruption and destruction that all this is causing is real, of course. But the best I can offer is to try to understand what's driving all this upheaval, because that might help people to figure out how to protect their own jobs or where to jump next. Everyone has to give this some serious thought, because I don't see any reason why all this won't keep going on for some time to come.
+ TrackBacks (0) | Category: Business and Markets | Drug Industry History
February 4, 2010
On another front, we now have an ex-BMS associate scientist who's apparently been arrested for stealing company materials in preparation for starting his own company back in India. I presume he was planning to get into the advanced pharmaceutical intermediates business (or perhaps the biotech end of it), using as much proprietary information as he could download in order to get a quick leg up. The company's security folks seem to have flagged him over the Christmas break, and he's since been spending time with the FBI. . .
+ TrackBacks (0) | Category: Current Events | The Dark Side
The hits just keep on coming. Bristol-Myers Squibb told its employees yesterday that there will be no pay increases this year, and from what I'm hearing, this took a lot of people completely by surprise. As late as last week, there was apparently still discussion of what the salary increases would be for various performance ratings and that sort of thing, but no more.
This sort of thing may or may not be a sign of imminent bad news, but it's never a sign of good news. We'll see what happens as the company continues to deal with the oncoming Plavix patent expiration and other issues. . .
+ TrackBacks (0) | Category: Business and Markets
I'll start a post here so those with details on today's GlaxoSmithKline news can leave comments. I assume we'll be hearing from the UK folks shortly, and the US more in the middle of the day. I also wonder if these announcements will be like the AstraZeneca one earlier - that is, cuts to be staged over a longer period. Those are a mixed bag. They keep people employed longer (and give them some hope that there may be a place to go by the time their position gets cut), but it also spreads Morale-B-Gone dust over a place for an extended time.
Good luck to all concerned.
+ TrackBacks (0) | Category: Business and Markets | Current Events
February 3, 2010
Looking through the latest papers to show up in the Journal of Medicinal Chemistry, this one on BACE-1 inhibitor compounds caught my eye. Perhaps I'm about to be unfair to it. At any rate, I'm going to ask of it something it doesn't provide: data in something that's alive. Doesn't have to be a person, a dog, or even a rat. A cell would do: something with a membrane to cross, with metabolic processes, and with the ability to accept or reject someone's new compound. Enzymes just have to sit there and take whatever you throw at them; living systems fight back.
I sometimes think that we'd be better served if each of the medicinal chemistry journals were split. In J. Med. Chem.'s case, we would then have the Journal of In Vitro Medicinal Chemistry and the Journal of In Vivo Medicinal Chemistry. The criteria for publishing in the two journals would be exactly the same, except to get into the latter one, you would have at least had to have tried your compounds out on something besides an in vitro assay. Doesn't mean that they have to have worked - you just have to have looked.
Although the case of compounds with molecular weights of 900 that have four amides and a sulfonamide in them, and are directed against a target in the central nervous system, might still be a bit of a stretch. I suppose what irritates me about this paper is that it starts off talking about Alzheimer's disease. And that's natural enough in a study dedicated to finding inhibitors of BACE-1, but the problem is, Alzheimer's disease occurs in human beings. And these compounds do not look to have much chance of doing anything inside any human's body. The best I can say for them is that they might give someone else an insight into something that they might be able to do to make something that might have a better chance of working.
Cranky folks like me would probably refer to the latter of my two new journals as just "J. Med. Chem.", and would refer to the former one by a variety of other easy-to-remember names. I offer this suggestion for free to the scientific publishing community, who will, I'm sure, reciprocate with things of equal value.
+ TrackBacks (0) | Category: Alzheimer's Disease | The Central Nervous System | The Scientific Literature
Dimebon (dimebolin) is a perfect example of the black-box nature of drug research for the central nervous system. Any medicinal chemist who looks at its structure would immediately say "CNS", but shrug when asked what specific receptors it might hit. I'd have guessed histamine (correctly), since loratadine used to pay my salary, and I also would have guessed a clutch of 5-HT stuff as well. But it also has activity at AMPA and NMDA glutamate receptors, L-type calcium channels, and more. If you can tell me what it's really doing up there, you shouldn't bother: hang up on me and start calling people with money, because you're ready to take over the CNS therapeutic area for sure.
This blunderbuss is getting a lot of attention these days, since the data for a Phase III trial against Alzheimer's should be available sometime in the spring. The road to that was a strange one. Dimebolin was used for years as an antihistamine in Russia, although I'm not aware if it had any particular reputation for cognitive enhancement in its time as a Soviet allergy pill. It was picked up in screening done during the 1990s at a research institute in the (once secret) military/industrial research city of
Chernogolovka, about two hours from Moscow. It showed effects on learning in rodent models, and gradually advanced to human trials for Alzheimer's. Impressive data came out in 2008, and Medivation, who own the rights to it here, partnered with Pfizer for development.
Update: the city mentioned above is surely Chernogolovka, but it's interesting that it's appeared many times as Chemogolovka in the English press and literature. I chalk that up to the "rn" looking very much like an "m", and to the mistaken name being semi-plausible in a Stalinist-industrial way, as witness Magnitogorsk. Chernogolovka's much older, though.
That Bloomberg report I linked to above has a lot of people excited, since there hasn't been a new therapy for Alzheimer's in quite a while (or, arguably, a decent one ever). I don't know what to think, myself. It's absolutely possible that the drug could turn out to have beneficial effects, but it's just as possible that it could miss meeting the high expectations that many investors seem to have for it. (Medivation's stock is up 80% over the last year, for example). A lot of eye-catching numbers from small Phase II trials tend to flatten out in the wider world of Phase III, and if forced, that's the way I'd bet here. (I am most definitely not giving investment advice, though - Alzheimer's drug development is a total crap shoot, and should only be approached with money you can afford to see incinerated).
I hope that Dimebon actually works, though - the world could use something that does. Just don't let anyone convince you that they know how it works, if it makes it through. Unraveling that will take quite a while. . .
+ TrackBacks (0) | Category: Alzheimer's Disease | Clinical Trials | The Central Nervous System
February 2, 2010
So, now that we're in 2010, the journal that introduced the whole idea of a graphical abstract to organic chemistry (Tetrahedron Letters) has finally started including them in its RSS feed. That's how I read journals these days, and I think I have a lot of company, so I'm grateful that they got around to it.
And on another literature note, I wanted to mention that I've accepted an invite to the editorial advisory board of the new ACS journal ACS Medicinal Chemistry Letters. You can tell, I guess, when you've been doing this stuff for a while - when I look at the rest of the advisory board, I see people I went to grad school with, people I used to work down the hall from, and so on. The journal has started publishing its first papers; we'll see how it works out, and how it competes with the other short-form outlet for this sort of work, Bioorganic and Medicinal Chemistry Letters. I promise not to let any anti-Bredt cyclobutenes get past me!
+ TrackBacks (0) | Category: The Scientific Literature
I kept meaning to write last week about GlaxoSmithKline's decision to open up a database of possible lead compounds against malaria. These were hits from a larger screen that the company ran, and they've been made unusually public. (Here's the press release as a PDF). There are about 13,500 structures, apparently. The company is to be commended for doing this, naturally, but I wish that the press coverage would emphasize a few things that it hasn't so far.
For one, these are not antimalarial compounds, at least not to a medicinal chemist. Some of them might be, but for now, they're all potential antimalarials, with a long, long way to go. This is all in what most drug discovery organizations call the "hit to lead" stage. Some of these compounds may well be screening artifacts. Others will turn out to work through mechanisms that won't be useful - they'll kill malaria parasites, but they'll kill lots of other things, too. Some of them will hit other targets with effects that aren't quite as severe, but will still be enough to make them undesirable. And many others will be too weak to be useful as they are, and turn out, after investigation, to have no clear path forward to making them more potent. And so on.
The most interesting compounds still have a long road ahead. What are their blood levels after various sorts of dosing? Which of those dosage forms are the best - the most reliable, the easiest to make, the most stable on storage? What metabolites do the compounds form in vivo, and what do those do? What long-term toxic effects might they have? How susceptible are they to resistance on the part of the parasites? On top of all these questions are the big ones, about how well these potential drugs knock down malaria under real-world conditions.
This, in short, is what drug development is all about, and it would be good to see some of this brought out in the press coverage. This is what I (and many of the readers of this site) do for a living, and it's enough to occupy all our time with plenty left over. If you can do this sort of thing, you're a drug company, and I'm always looking for opportunities to tell people just what it is that drug companies do and to move people past the evil-pharma versus saintly-university mindset. Nature has it right in their editorial:
Meanwhile, universities and other academic institutions should do more to support and reward the sort of translational research required to develop drug leads such as those offered by GSK — even though that work usually does not result in high-profile, breakthrough research papers. In addition, such translational activities provide a means for universities to contribute to public–private partnerships such as the MMV, the Drugs for Neglected Diseases Initiative and the Institute for OneWorld Health.
Universities also have another part to play. Their often aggressive intellectual-property policies can stymie research and development in neglected diseases — they should ensure that their licensing deals with companies make exceptions for royalty-free use of technologies for good causes. That change, too, is beginning to happen — although, when it comes to hogging intellectual property, academics and their institutions are often among the worst offenders. . .
+ TrackBacks (0) | Category: Academia (vs. Industry) | Infectious Diseases
February 1, 2010
He's back with three more directors nominated for the company's board. There are two Icahnians there already, and this round of proxy voting will say a lot about whether shareholders think that taking Biogen down the road that Imclone went is a good thing or not. Stay tuned. . .
+ TrackBacks (0) | Category: Business and Markets
Over the weekend I received word from several people about impending trouble at GlaxoSmithKline. A big worldwide management hoedown, the "First Line Leader" meeting which was to take place in Atlanta this week, has been abruptly canceled. The company's upper management has informed the would-be attendees that since financial results will be announced this Thursday, and since this announcement might have an impact on staffing (might?), they thought it was better that all the reporting teams be together for it.
Several European newspapers have since come out with word that several thousand cuts are going to be announced, but I have no more details on what's going to happen. I suppose we're all going to find out on Thursday, though. Stand by for what's going to seem like a long week if you're at GSK, and best of luck to you if you are.
+ TrackBacks (0) | Category: Business and Markets