About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
November 30, 2010
Andrew Witty of GSK has a one-page essay in The Economist on the problems of the drug industry. None of the background he gives will be news to anyone who reads this site, as you'd imagine - lower rates of success in discovery, higher costs, patent expirations, etc.
Here's his take on research and development:
. . .it is clear that the size of the industry will continue to contract in the drive for efficiency. For some players, more mergers and acquisitions are likely, but others will plan to shrink, and all parts of the value chain from R&D through to production and sales and marketing will be affected. . .
. . .In the past the problem of R&D in big pharmaceutical companies has been “fixed” by spending more and by using scale to “industrialise” the research process. These are no longer solutions: shareholders are not prepared to see more money invested in R&D without tangible success. If anything, based on a rational allocation of capital, R&D should now be consuming less resource.
Yikes. I'm not sure where that last sentence comes from, to be honest with you. Does Witty think that we now know so much about what we're doing that it shouldn't cost so much for us to do it? Or that it shouldn't cost so much to comply with the regulatory authorities, for some reason? I'm a bit baffled, and if someone can explain that "rational allocation" that he speaks of, I'd be grateful.
And I'd like to say that the rest of the piece advances some useful ideas, but I can't do that with a straight face. (To be fair, if Andrew Witty has some great ideas for making GSK more productive, he's most certainly not going to lay them out for everyone in The Economist). So it's all innovative business models, dynamic partnerships, recapturing creative talent in the drug labs, and so on. That last line will no doubt inspire a lot of bitter comment, considering what things have been like at GSK in the last few years.
His main pitch seems to be that drug companies need a "fair reward for innovation", and that's one of those things that's hard to disagree with on the surface. But unpacking it, that's the tough part, because everyone involved will start disagreeing on what's innovative, what might constitute a reward, and (especially) what's fair. Witty has been giving speeches on this for a while now, and I'd say that this latest article is just the condensed version.
Category: Business and Markets | Drug Development | Drug Industry History
November 29, 2010
Here's an interesting question from a reader in academia. At his institution, they're thinking about rewriting the introductory organic lab syllabus. "Rather than put what the faculty would like to see in it", he writes, "what would your readers like to see in it?"
The questions he raises include these: What organic chemistry lab basics should non-majors be sure to get? And which ones should the chemistry majors have for their advanced courses to build on? What kinds of experiments should be included (and what classics are ready to be dropped?) And which sort of lab curriculum trains people better - the "discovery"-oriented type, or the "cookbook" type?
Add your thoughts in the comments below. I don't know what specific experiments are common in undergraduate labs these days, so I'll let those who do comment on the details. My take on the last question is that the course should probably start in more of a cookbook fashion, to get everyone's fingers wet, but finish up with some sort of parallel-synthesis or method-finding exercise, where everyone gets a chance to do something different and make a small exploration along the way.
Category: Academia (vs. Industry)
November 25, 2010
No blogging until Monday around here - US readers will mostly be taking today off for Thanksgiving, and people around the rest of the world have learned not to expect too much out of America on this day. My chocolate pecan pie preparation worked flawlessly (well, it looks flawless - we'll put it to the test this afternoon), and a home-made pumpkin pie is next to it.
My next job is getting a turkey underway. For some years now, we've cooked a kosher one, because the salt treatment they get really seems to help. (You can brine one at home, if you have space to store the bird, which is a marginal proposition around here most years). Many of the kosher turkeys do need a bit of minor re-plucking before cooking, though, so you have to be ready for that.
We have several traditional side dishes, but add some Iranian rice, like morasa polo, which fits in perfectly. And we have my Iranian mother-in-law's stuffing recipe, an excellent but decidedly non-Iranian mixture involving bread, onions, celery, cranberries and pepperoni. We'll make some mashed potatoes, the kids have requested stuffed mushrooms, I generally make some green beans seasoned with country ham, and we'll pan-roast some Brussels sprouts. My wife makes the cranberry sauce, and we'll have some gravy, too.
So no, I'm not going to be good for much useful work today. And tomorrow I'll eat it all again, so I hope no one's expecting much from me then, either. A happy Thanksgiving to everyone who's celebrating it!
Category: Blog Housekeeping
November 24, 2010
Here's a recipe that I've put up here for the Thanksgiving and holiday season - I'm at home today, and I'm going to be following this exact prep a bit later in the day. I've made it for years this way, as have some friends, so you can consider this the Org Syn procedure for chocolate pecan pie:
Melt 2 squares (2 oz.) baking chocolate (see below if you can't find this) with 3 tablespoons (about 43g) butter in a microwave or double boiler. Combine 1 cup (240 mL) corn syrup (see below if you don't have this) and 3/4 cup sugar (150g) in a saucepan and bring to boil for 2 minutes, then mix the melted chocolate and butter into it. Meanwhile, in a large bowl, beat three eggs, then add the chocolate mixture to them, slowly and with vigorous stirring (you don't want to end up making scrambled eggs with chocolate sauce).
Add one teaspoon (5 mL) of vanilla, and mix in about 1 1/2 cups of broken-up pecans, which I think should be about 150g. You can push that to nearly two cups and still get the whole mixture into a deep-dish pie shell, and I recommend going heavy on the nuts, since the pecan/goop ratio is one thing that distinguishes a home-made pie from some of the abominations that people will sell you. Bake for about 40 to 45 minutes at 375 F (190C), and let cool completely before you attack it. Note that this product has an extremely high energy density - it's not shock-sensitive or anything, but I wouldn't want to see what it would do to a calorimeter.
Note for non-US readers: the baking chocolate can be replaced by 40 grams of cocoa powder (not the Dutch-processed kind) with 28 grams of some sort of shortening (unsalted butter, vegetable shortening, oil, etc.) If you don't have corn syrup, then just use a total of 350g white sugar instead, and add 60 mL water to the recipe.
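For anyone double-checking that substitution, the arithmetic behind it can be sketched out. The corn-syrup sugar content and cup weight below are my own rough kitchen-conversion assumptions, not figures from the recipe:

```python
# Rough sketch of the corn-syrup substitution arithmetic.
# All quantities are approximate kitchen conversions, not lab-grade figures.

CUP_ML = 240                 # 1 US cup is roughly 240 mL

# Original recipe: 1 cup corn syrup + 3/4 cup (150 g) sugar.
# Corn syrup is mostly sugars by weight; a cup of it contributes
# something on the order of 200 g of sugar plus some water (assumed values).
corn_syrup_sugar_g = 200     # assumed sugar contribution of 1 cup corn syrup
granulated_sugar_g = 150     # the 3/4 cup called for directly

total_sugar_g = corn_syrup_sugar_g + granulated_sugar_g
print(total_sugar_g)         # 350 g, matching the all-sugar substitution
```

The 60 mL of added water then stands in for the moisture the corn syrup would have brought along.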
Category: Blog Housekeeping
November 23, 2010
We talked a little while back here about "Lean Six Sigma" as applied to drug discovery organizations, and I notice that the AstraZeneca team is back with another paper on the subject. This one, also from Drug Discovery Today, at least doesn't have eleventeen co-authors. It also addresses the possibility that not everyone in the research labs might welcome the prospect of a business-theory-led revolution in the way that they work, and discusses potential pitfalls.
But I'm not going to discuss them here, at least not today. Because this reminds me of the post last week about the Novartis "Lab of the Future" project, and of plenty of other initiatives, proposals, alliances, projects, and ideas that are floating around this industry. Here's what they have in common: they're all distractions.
Look, no one can deny that this industry has some real problems. We're still making money, to be sure, but the future of our business model is very much in doubt. And those doubts come from both ends of the business - we're not sure that we're going to be able to get the prices that we've been counting on once we have something to sell, and we're not sure that we're going to have enough things to sell in the first place. (There, that summarized about two hundred op-ed pieces, some of them mine, in one sentence. Good thing that I'm not paid by the word for this blog.) These problems are quite real - we're not hallucinating here - and we're going to have to deal with them one way or another. Or they're going to deal with us, but good.
I just don't think that tweaking the way that we do things will be enough. We're not going to do it by laying out the labs differently, or putting different slogans up on the walls, or trying schemes that promise to make the chemists 7.03% more productive or reduce downtime in the screening group by 0.65 assays/month. This is usually where people trot out that line about rearranging deck chairs on the Titanic, but the difference is, we don't have to sink. The longer things go on, though, the more I worry that incremental improvements aren't going to bail us out.
This is a bit of a reversal for me. I've said for several years that the low success rates in the industry mean that we don't necessarily have to make some huge advance. After all, if we made it up to just 80% failure in the clinic, that would double the number of drugs reaching the market. That's still true - but the problem is, I don't see any signs of that happening. If success rates are improving anywhere, up and down the whole process from target selection to Phase III, it's sure not obvious from the data we have.
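The doubling claim is just arithmetic, assuming the roughly 90% clinical failure rate that's the usual ballpark figure (my assumption here, not a number from any one dataset):

```python
# Back-of-the-envelope: how drug output scales with the clinical failure rate.
# The ~90% current failure rate is the commonly quoted ballpark, assumed here.

def drugs_reaching_market(candidates, failure_pct):
    """Candidates surviving the clinic at a given percent failure rate."""
    return candidates * (100 - failure_pct) // 100

current = drugs_reaching_market(100, 90)   # 10 of 100 make it through
improved = drugs_reaching_market(100, 80)  # 20 of 100 make it through

print(improved / current)  # 2.0 -- easing to "just" 80% failure doubles output
```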
What worries me is that the time spent on less disruptive (but more bearable) solutions may be taking away from the time that needs to be spent on the bigger changes. I mean, honestly, raise your hands: who out there thinks that "Lean Six Sigma" is the answer to the drug industry's woes? Right. Not even all the consultants selling this stuff could get that one out with a straight face. "But it'll help!" comes the cry, "and it's better than doing nothing!". Well, in the short term, that may be true, although I'm not sure if there is a "short term" with some of these things. If it gives managers and investors the illusion that things are really being fixed, though, and if it takes mental and physical resources away from fixing them, then it's actually harmful.
What would it take to really fix things? Everyone knows - really, everyone does. Some combination of progress on the following questions would do just fine:
1. A clear-eyed look at target-based drug design, by which I mean, whether we should be doing it at all. More and more, I worry that it's been a terrible detour for the whole project of pharmaceutical research. There have been successes, of course, but man, look at the failures. And the number of tractable targets (never high) is lower than ever, as far as I can tell. If we're going to do it, though, we need. . .
2. The ability to work on harder target classes. The good ol' GPCRs and the easy-to-inhibit enzyme classes are still out there, and still have life in them, but the good ideas are getting thinner. But there are plenty of tougher mechanisms (chief among them protein-protein interactions) that have a lot of ideas running around looking for believable chemical matter. Making some across-the-board progress in those areas would be a huge help, but it would avail us not without. . .
3. Better selection of targets. Too many compounds fail in the clinic because of efficacy, which means that we didn't know enough about the biology going in. Most of our models of disease have severe limitations, and in many cases, we don't even know what some of those limitations are until we step into them. Maybe we can't know enough in many cases, so we need. . .
4. More meaningful clinical trials. And by that I mean, "for a given cost", because these multi-thousand-people multi-year things, which you need for areas like cardiovascular, Alzheimer's, osteoporosis, and so on, are killing us. We've got a terrible combination of huge potential markets in areas where we hardly know what we're doing. And that leads to gigantic, expensive failures. Could they somehow be less expensive? One way would be. . .
5. A better - and that means earlier - handle on human tox. I don't know how to do this one, either, but there are billions of dollars waiting for you if you can. Efficacy is the big killer in the late clinic these days, but that and toxicity put together account for a solid majority of the failures all the way through. (The rest are things like "Oops, maybe we should sell this program off" kinds of decisions).
There are plenty of others, but I think that improvements in those would fix things up just fine. Don't you? And maybe I'm just slow-witted, but I can't see how changing the way the desks face, or swapping out all the business cards for new titles, or realigning the therapeutic area teams - again - are going to accomplish any of it. At best, these things will make the current process run a bit better, which might buy us some more time before we have to confront the big stuff anyway. At worst, they'll accomplish nothing at all, but just give the illusion that something's being done.
To be fair, there are some initiatives around the industry that address these (and the other) huge problems. As I said, it's not like no one knows what they are. And to be fair, these really are difficult things to fix. Saying that you want to get a better early read on human tox in the clinic, the way I just did so blithely, is easy - actually doing something about it, or even finding a good place to start doing something about it, is brutally hard. But it's not going to be as brutal as what's been happening to us the last few years, or what we're headed for if we don't get cracking.
Category: Business and Markets | Clinical Trials | Drug Development | Drug Industry History
November 22, 2010
Since we were talking about worldwide scientific productivity here the other day, this article in The Economist is timely. They're talking about the share of worldwide R&D (and papers published) by country, and pointing out that the longtime order seems to be changing.
For sheer scientific publications, that order is, of course, the US and Western Europe, followed distantly by everyone else. I've reproduced two graphs from the article, atrocious color schemes and all, and you can see how large the gap has been in the published-paper count. But there are several interesting features. Note how back in the early 1980s, Russia and Japan were quite similar, but the old Soviet Union (and its successor Russian state) was on the decline even then. Meanwhile, China has come up from nowhere to overtake even Japan. India, South Korea, and Brazil are down in the single digits.
But that brings up some other questions. Take a look at the second graph, on R&D spending as a % of GDP. (This is over a shorter time scale than the paper graph, so adjust your perspective accordingly). Note that Japan has been leading the way here, with South Korea catching up. Neither of them (especially South Korea) publishes as much, though, as you'd think, given this investment - is the rest of it going into patents? Or staying inside the labs? Looked at another way, though, the EU is publishing even more than you'd think, given their R&D spending.
You'll see that China is coming up in the spending world, although they're not rising as steeply as South Korea (no one is). India's pretty flat, though, and is being outspent, on this basis, by Brazil. (I hope I'm reading the various shades of aquamarine, teal, and blue-green correctly - you know, The Economist used to be good at presenting information graphically, but whoever let this one through should be whacked on the head).
Neither of these measures is an end in itself. I'd say that robust R&D spending is necessary (but not sufficient) for a country to produce good results. And there are probably a lot of different ways to count things as R&D or not, which we aren't seeing here. As for publications, they're an even rougher measure, since different countries have different cultures (and incentives) for this sort of thing. (Don't forget language barriers, either). And as everyone knows, there are papers and there are papers. Long lists of junk that no one ever reads would be one way to inflate things, but to what good?
Category: The Scientific Literature
November 19, 2010
Four years after the torcetrapib disaster, Merck has released some new clinical trial data on their own CETP inhibitor, anacetrapib. It's doing what it's supposed to, when added to a statin regimen: decrease LDL even more, and strongly raise HDL.
So that's good news. . .but it would actually be quite surprising if these numbers hadn't come out that way. Pfizer's compound had already validated the CETP mechanism, and it showed these same lipid effects at this stage of the game. The problems came later, and how. And that's where the worrying kicks in.
As far as I know, no one is still quite sure why torcetrapib actually raised the death rate slightly in its phase III treatment group. One possible mechanism was elevated blood pressure (part of a general off-target effect on the adrenals) and Merck saw no sign of that. But no matter what, we're going to have to wait for a big Phase III trial, measuring real-world cardiovascular outcomes, to know if this drug is going to fly, and we're not going to see that until 2015 at the earliest. Well, unless there's unexpected bad news at the interim - that, we'll see.
I hope it doesn't happen. If the whole LDL-bad HDL-good hypothesis is correct, you'd think that a CETP inhibitor would show a strong beneficial effect. This compound is either going to help a lot of people, or it's going to tell us something really significant that we didn't know about human lipid handling (and/or CETP). Problem is, telling us something new is almost certainly going to be the same as telling us something bad. It's still going to be a long road in this area, and good luck to everyone involved. . .
Category: Cardiovascular Disease | Clinical Trials | Toxicology
Here's a look (PDF) at the Novartis "Labs of the Future". This looks like another one of these "open lab" concepts, and it appears that Basel has really bought into the idea. The interview with the two biologists helping to head the project is. . .well, it's very Swiss, that's the best description. When asked "What indicators would you select to measure improvement?" after people move in, the answer is:
Bouwmeester: That depends on the monitoring period. I am assuming that one or two months after we move into the building, the employees will already be experiencing a new dynamic. If they report a positive difference, that will be a first measure of success. It is important that the people in the LOTF develop some kind of ownership regarding their role in the building. It will be a more active role than usual. The LOTF is basically an open space where you can observe your peers across the hierarchies. This is a different type of social architecture compared to 10 years ago, or even today still. Everybody will be more under observation and observing more than before. The dynamism of the interaction between people will increase. The employees themselves will have to decide what is common practice on their floor. Of course the concept will need to be adapted over time; I would be surprised if all concepts materialize exactly as anticipated.
I would be, too. In fact, I'd like to propose that last sentence be printed up on T-shirts, coffee mugs, and posters, but that's probably not going to happen. More on what this is actually going to look like:
Korthäuer: (There will be) big screens, placed where people typically pass by. There will be video cameras installed on each laptop, allowing easy and informal contacts. The information technology concept is an important part of it. We have also designed special furniture that serves the same goal. We want to get rid of functional cells such as coffee rooms, writing rooms, lab rooms etc. Our aim is to bring the walls down. But of course we still need differentiation. In the LOTF there are still compartmentalized areas with particular qualities, constructed according to people's needs and workflow requirements. . .
Korthäuer: On the information technology side, we are trying to implement a few applications which really support the concept. There will be videoconferences, ‘smart’ whiteboards that allow notes to be captured electronically. We focus on proven technologies. Gradually, we will bring new technologies into the building such as haptic interfaces. Removing the walls in a building can bring about big changes. There will be much less storage room. Therefore, a little robot operating in an elevator shaft will transport materials ordered by laptop up from the basement to the floors. . .
Bouwmeester: We have already implemented Virtual Reality Rooms with ultra-high-resolution video screens, so that the quality is as though you were in a real-life meeting; the effect is quite spectacular. As with any global project, it will all require a certain attitude, a set of skills that people will have to develop. Much energy and time will be needed to communicate efficiently between places as different as Basel, Cambridge and Shanghai. . .
Allow me to note a few difficulties. For one, those high-res video conference venues still have to deal with switching and transmission delays, especially across the distances that the Novartis guys are talking about. So if you try to have a spirited real-time discussion, you'll mostly be getting very clear, sharp, high-fidelity views of people interrupting each other and pausing awkwardly. (Update: see the comments section - some users are reporting more successful experience.) I have a more macro-scale worry about this sort of thing, too, having to do with my suspicion of plans that depend on people finally shaping up and acting the way that they're supposed to. As with any global project, y'know.
I think that I'll let Tom Wolfe have the last word here, since what he wrote in From Bauhaus to Our House is still applicable, over thirty years later:
I once saw the owners of such a place driven to the edge of sensory deprivation by the whiteness & lightness & leanness & cleanness & bareness & spareness of it all. They became desperate for an antidote, such as coziness & color. They tried to bury the obligatory white sofas under Thai-silk throw pillows of every rebellious, iridescent shade of magenta, pink, and tropical green imaginable. But the architect returned, as he always does, like the conscience of a Calvinist, and he lectured them and hectored them and chucked the shimmering little sweet things out.
Every great law firm in New York moves without a sputter of protest into a glass-box office building with concrete slab floors and seven-foot-ten-inch-high concrete slab ceilings and plasterboard walls and pygmy corridors. . .Without a peep they move in!—even though the glass box appalls them all. . .
I find the relation of the architect to the client in America today wonderfully eccentric, bordering on the perverse. . .after 1945 our plutocrats, bureaucrats, board chairmen, CEO's, commissioners, and college presidents undergo an inexplicable change. They become diffident and reticent. All at once they are willing to accept that glass of ice water in the face, that bracing slap across the mouth, that reprimand for the fat on one's bourgeois soul, known as modern architecture.
And why? They can't tell you. They look up at the barefaced buildings they have bought, those great hulking structures they hate so thoroughly, and they can't figure it out themselves. It makes their heads hurt.
Category: Life in the Drug Labs
November 18, 2010
The FDA has approved Eisai's Halaven (eribulin) for late-stage breast cancer. As far as I can tell, this is now the most synthetically complex non-peptide drug ever marketed. Some news stories on it are saying that it's from a marine sponge, but that was just the beginning. This structure has to be made from the ground up; there's no way you're going to get enough material from marine sponges to market a drug.
If anyone has another candidate, please note it in the comments - but I'll be surprised if there's anything that can surpass this one. There have been long syntheses in the industry before, of course, although we do everything we can to avoid them. Back when hydrocortisone was first marketed by Merck, it had a brutal synthetic path for its time. (That's where a famous story about Max Tishler came from - one of the intermediates was a brightly colored dinitrophenylhydrazone. Tishler, it's said, came into the labs one day, saw some of the red solution spilled on the floor, and growled "That better be blood.") And Roche's Fuzeon is a very complicated synthesis indeed, but much of that is repetitive (and automated) peptide coupling. It took a lot of work to get right, but I'd still give the nod to eribulin. Can anyone beat it?
Category: Cancer | Chemical News | Drug Industry History
November 17, 2010
So Roche is (as long rumored) going through with a 6% headcount reduction, worldwide. That's bad news, but not unexpected bad news, and it certainly doesn't make them stand out from the rest of big pharma. This sort of headline has been relentlessly applicable for several years now.
What surprised me was their announcement that they're giving up on RNA interference as a drug mechanism. That's the biggest vote of no-confidence yet for RNAi, which has been a subject of great interest (and a lot of breathless hype) for some years now. (There's been a lot of discussion around here about the balance between those two).
That's not the sort of news that the smaller companies in this space needed. Alnylam, considered the leader in the field, already had over $300 million from Roche (back in 2007), but so much for anything more. The company is putting on a brave face, but it has not been a good fall season: they'd already had to cut back after Novartis recently thanked them for their five-year deal, shook their hand, and left. To be sure, Novartis said that they're going to continue to develop the targets from the collaboration, and would pay milestones to Alnylam as any of them progress - but they apparently didn't feel as if they needed Alnylam around while they did so.
Then there's Tekmira, who had a deal with Roche for nanoparticle RNAi delivery. They're out with a statement this morning, too, saying (correctly) that they have other deals which are still alive. But there's no way around the fact that this is bad news.
What we don't know is what's going on in the other large companies (the Mercks, Pfizers, and so on) who have been helping to fund a lot of this work. Are they wondering what in the world Roche is up to? Looking at it as a market opportunity, and glad to see less competition? Or wishing that they could do the same thing?
Category: Biological News | Business and Markets
Update: Richard Van Noorden at Nature runs the numbers on this paper, and comes to the same conclusions: you can't use it to say that US papers are more prone to fraud. . .
There's a new paper in the Journal of Medical Ethics looking at fraudulent publications. The author went back and studied the papers in the PubMed database that have been retracted during the last ten years (there are 788 of them), looking for trends. And so far, the press coverage that I've seen of his conclusions seems to be missing the point.
One thing that stands out is that retracted papers tend to come from higher-profile journals. That makes sense to me, because that's (a) where they're more likely to be noticed, and (b) where the editors are more likely to care. As you move down the list, people just seem to start shrugging their shoulders. Has a paper ever been retracted from Bioorganic and Medicinal Chemistry Letters, for example? I can't think of any, and readers are invited to try to find one themselves.
Another conclusion is that retracted papers tend to come from serial offenders. Over half the retracted papers had a first author who had retracted something else. I think that probably represents cases where someone had published a body of fraudulent work which all got exposed at once, but if that's not a serial offender, I don't know what is.
But a third conclusion is the one getting the headline writers going. The paper examines the retractions, looking for whether they were withdrawn for error or for fraud. The US is notable for having more retractions in the latter category, as compared to the rest of the world: one third of its 260 retracted papers are attributed to fraud, while the rest of the world comes in between 20% and 25%. So you get things like "US Scientists More Likely To Commit Fraud". But no, for that conclusion to be valid, you'd want to know how many fraudulent papers were published as a percentage of the whole output.
Even that wouldn't tell you the whole story. Remember, we're looking at papers that have actually been retracted. Most published fraudulent research never gets to that point. Lower-end journals have a terrible problem with plagiarized, derivative junk. Piles of it get sent to them, then too much of it gets into print, and it just sits there, with no one ever paying any attention. Well, years later, some poor person might try to reproduce a prep, find that it doesn't work, sigh, and try something else, but otherwise. . .
No, in the same way that the prominent journals are over-represented, I think that the US might be a bit over-represented because slightly more fraud gets caught. More US papers appear in prominent journals than average, and fewer appear in the absolute bottom-rung journals. The United States accounts for at least one-third of the total scientific paper output, so those 260 papers out of 788 are exactly what you'd expect if all other things were equal. But other things aren't equal. We may have more prominent (and more harmful) frauds here, but I'd be willing to bet that as a proportion of the whole, we have fewer of them.
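The proportions are easy to check. The retraction counts below are the ones quoted above; the one-third world-output share for the US is my own assumption, per the usual estimates:

```python
# Checking the retraction arithmetic quoted above.
total_retractions = 788    # retracted PubMed papers over the last decade
us_retractions = 260       # of those, papers with US-based authors

us_share = us_retractions / total_retractions
print(round(us_share, 2))  # 0.33: about one-third of all retractions are US

# If the US also produces about one-third of the world's papers (a common
# estimate, assumed here), its retraction count is right in line with its
# output -- not, by itself, evidence of unusual fraudulence.
us_output_share = 1 / 3
print(round(us_share / us_output_share, 2))  # 0.99: essentially proportional
```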
One last note: the figures in this paper seem to conflict with an earlier analysis, which seems to have found fewer retracted papers in PubMed, and a higher proportion of them tainted by fraud. Who's right?
Category: The Scientific Literature
November 15, 2010
Tetrazole derivatives have featured several times here in "Things I Won't Work With", which might give you the impression that they're invariably explosive. Not so - most of them are perfectly reasonable things. A tetrazole-for-carboxyl switch is one of the standard med-chem tricks, standard enough to have appeared in several marketed drugs. And that should be recommendation enough, since the FDA takes a dim view of exploding pharmaceuticals (nitroglycerine notwithstanding; that one was grandfathered in). No, tetrazoles are good citizens. Most of the time.
It's when they get put in with the wrong sort of company that they turn delinquent. What with four nitrogens in the ring and only one carbon, they do have a family history of possible trouble - several sections of this blog category could just as accurately be called Things That Suddenly Want To Turn Back Into Elemental Nitrogen. And thermodynamically, there aren't many gently sloping paths down to nitrogen gas, unfortunately. Both enthalpy and entropy tilt things pretty sharply. A molecule may be tamed because it just can't find a way down the big slide, but if it can, well, it's time to put on the armor, insert the earplugs, and get ready to watch the free energy equation do its thing right in front of your eyes. Your heavily shielded eyes, that is, if you have any sense at all.
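For the thermodynamically inclined, the "big slide" is just the standard free-energy relation, with the signs as they apply to decomposition to nitrogen gas (the bond-energy figure is the textbook value):

```latex
% Gibbs free energy for a polynitrogen compound decomposing to N2(g):
%
%   \Delta G = \Delta H - T\,\Delta S
%
%   \Delta H < 0   (the N\equiv N triple bond, ~945 kJ/mol, is very strong)
%   \Delta S > 0   (gas is released, so entropy increases)
%
% Both terms therefore drive \Delta G sharply negative at any temperature;
% only the kinetic barrier (no easy path down the slide) keeps the
% compound in one piece.
\Delta G = \Delta H - T\,\Delta S < 0 \qquad (\Delta H < 0,\ \Delta S > 0)
```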
Nitro groups are just the kind of bad company I mean, since they both bring their own oxygens to the party and pull electrons around in delightfully destabilizing ways. So nitrotetrazole is already not something I'd feel good about handling (its metal salts are primary explosives), but today's paper goes a step further and makes an N-oxide out of a nitrogen on a nitrotetrazole ring. This both adds more oxygen and tends to make the crystal packing tighter, which raises the all-important kapow/gram ratio. (There is, of course, little reason to do this unless you feel that life is empty without sudden loud noises). The paper mentions that "Introducing N-oxides onto the tetrazole ring may . . . push the limits of well-explored tetrazole chemistry into a new, unexplored, dimension.", but (of more immediate importance) it may also push pieces of your lab equipment into unexplored parts of the far wall.
Turns out that you can make the N-oxides through pretty mild chemistry (Oxone at room temperature), which is surprising. Until now, only a handful of the things had been made, most by an indirect route involving hydrazoic acid (next!). The only direct oxidation of a tetrazole had been done with the relentlessly foul hypofluorous acid (next! keep moving!), which itself has to be made fresh from fluorine gas (next! thank you! next!). These recipes pretty much excluded most reasonable people from skipping through this green and sunny field of knowledge. Still, if you're a reasonable person, you're probably not yearning to make nitrotetrazole oxides in the first place. These things have a way of evening out.
So what are these fine new heterocycles like, anyway? Well, the authors prepared a whole series of salts of the parent compound, using the bangiest counterions they could think of. And it makes for quite a party tray: the resulting compounds range from the merely explosive (the guanidinium salts) to the very explosive indeed (the ammonium and hydroxyammonium ones). They checked out the thermal stabilities with a differential scanning calorimeter (DSC), and the latter two blew up so violently that they ruptured the pans on the apparatus while testing 1.5 milligrams of sample. No, I'm going to have to reluctantly give this class of compounds a miss, as appealing as they do sound.
Several of the new compounds show similar detonation properties to RDX, albeit with less thermal stability. This is one reason we're reading about them in the open literature; the ideal explosive is incredibly stable under a wide range of conditions, then loses its composure all at once at just the specified moment. We don't seem to be quite there yet. What I expect is that the authors are probably trying to work this same N-oxide magic on the azidotetrazolates instead of the nitro compounds. Now, that'll be a hoppin' bunch of compounds - for all I know, the research groups involved have already tried this and just haven't been able to get anything out past the lip of the flask yet. I'll be monitoring the literature for signs. On the other hand, if you live in Munich or College Park, you can probably monitor the progress of this work by listening for distant booming noises and the tinkle of glass.
Category: Things I Won't Work With
I (and other chemists) have been talking for years about the connections between organic chemistry and cooking. The usual saying is that you should never trust the lab work of an organic chemist who's hopeless in the kitchen. I agree with that one - I've known good chemists who don't cook (among them, a colleague in grad school who used his oven as a filing cabinet), but I don't think I've ever known one who can't.
Well, the techno-culinary fashion in recent years is blurring the line even more, from the other direction. Check out this web site from the Kohler people, makers of sinks, faucets, and the like. Vacuum apparatus is available for you to experiment with sous vide techniques at home - and if you scroll down, the crossover is complete. Yep, there's a rota-vap, right out of the lab and ready for the kitchen counter. I've always wondered if those would be good for reducing a sauce, and now, well, we're going to find out. . .
Category: General Scientific News
November 12, 2010
Here's an attention-getting paper from Tomas Hudlicky (and his co-author Martina Wernerova), and I'd like to help it get some more. It begins:
One who has been reading the literature concerned with organic synthesis in recent years, be it methodology, catalysis, or total synthesis of natural products, may have noticed considerable inflation in the values reported for isolated product yields, ratios of diastereomers, and enantiomeric excess values. A comparison of papers published during the period 1955 to 1980 with those published between 1980 and 2005 reveals that those from the more recent period frequently report isolated product yields of reactions >95%. Such large values were rarely found in the older literature and are all but absent in Organic Syntheses, a journal that only publishes procedures that have been independently reproduced. . .
There, does that sound like the chemical literature you know? Just a bit? Hudlicky has tackled this issue before, and the reasons he advances for the problem remain the same: pressure to make your methods stand out (to the scientific community, to the journal editors, to the granting agencies), a decrease in scale in reactions (making accuracy and precision more difficult), and, finally, what he refers to as "deliberate adjustment". That's well put; the rest of us know it as fraud.
He identifies the mid-1980s as roughly the period when things really started to go to pieces, saying that most procedures in reputable journals before that era are reproducible by, as they say, one skilled in the art, while the numbers have been decreasing since then. And he puts some numbers on the problem, performing a series of test experiments with extremely careful weighing and analysis.
These confirm what every working organic chemist knows: the more manipulations, the more sample you lose. Filtration through a plug of silica gel, into one flask, can give you pretty much complete recovery. But if you cut fractions, you're going to lose about 1%. And if you have to do a separation, even between two widely separated compounds on silica, you're going to lose about 2%. So people who report a >98% yield after chromatography from a real-world crude mixture are kidding themselves. The same goes for extractions and other common methods. In general, every manipulation of a reaction is going to cost you 1 to 2% of your material, even with careful technique. Hudlicky again:
Given that most academic groups do not subject day-to-day reactions to serious optimization or matrix-optimization  as is done in industry, it is reasonable to assume that the vast majority of the reactions reported in the literature do not proceed with quantitative conversions. Such aspect would approximate our experiments with mixtures of pure compounds. Because a minimum of three operations (extraction, filtration, and evaporation) is required in working up most reactions, we conclude that yields higher than ca. 94% obtained by work-up and chromatography of crude reaction mixtures are likely unrealistic and erroneous in nature. Such values may arise as a direct consequence of not following correct protocols, which would be expected in the fast-paced academic environment. (An astute student of the organic literature may discover that this very author has been guilty of reporting yields in this range from time to time!)
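Hudlicky's arithmetic is easy to verify: if each manipulation costs you 1 to 2% of the material, the losses compound multiplicatively. A minimal sketch (the per-step loss figures are the ones from the post, not measured constants):

```python
# Cumulative recovery after a sequence of workup steps, assuming
# each manipulation costs a given fraction of the material.
def recovery(losses):
    """Fraction of material remaining after per-step fractional losses."""
    total = 1.0
    for loss in losses:
        total *= (1.0 - loss)
    return total

# Minimal workup: extraction, filtration, evaporation at ~2% loss each
print(round(recovery([0.02, 0.02, 0.02]), 3))        # 0.941
# Add a chromatographic separation (~2% more) on top of that
print(round(recovery([0.02, 0.02, 0.02, 0.02]), 3))  # 0.922
```

Three careful operations at 2% loss each already cap you at about 94% recovery, which is exactly where Hudlicky draws his "likely unrealistic" line.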
He goes on to detail the limits of error in weighing, which depend greatly on the amount of sample and the size of the flask. (The smaller the sample-to-container ratio, the worse things get, as you'd figure). And he turns to analyzing mixtures of diastereomers by NMR, LC, and the like. As it turns out, NMR is an excellent way to determine these up to about a ratio of 95:5, but past that, things get tricky. And "past that" is just where a lot of papers go these days, with a precision that is often completely spurious.
Here's the bottom line:
The conclusion drawn from this set of experiments points to the prevalence of serious discrepancies in the reporting of values for yields and ratios in the current literature. We have demonstrated that the facilities and equipment available in a typical academic laboratory are not adequate to support the accuracy of claims frequently made in the literature. . .The current practice of reporting unrealistically high isolated product yields and stereoisomer ratios creates serious problems in reproducibility and hence leads to diminished credibility of the authors.
He recommends a rigorous disclosure of the spread of product yields over multiple experiments, calibration of LC and GC apparatus, or (failing that) at least admitting that no such analysis has been done. (He also recommends getting rid of the concepts of diastereomeric and enantiomeric excess, in line with my fellow Arkansan Robert Gawley's advice). But I think that these ideas, while perfectly reasonable, don't get at the underlying problems - the inflationary pressure to produce more and more noteworthy results. Hudlicky's rules should be adopted - but I fear that they might just push the self-deception (and outright fraud) into newer territories.
I'm glad he's published this paper, though. Because everyone knows that this is a real problem - we complain about it, we joke about it, we mutter and we grit our teeth. But "officially", in the published literature, it's never mentioned. Let's stop pretending, shall we?
Category: The Dark Side | The Scientific Literature
Back in January, I wrote about the controversial "Reactome" paper that had appeared in Science. This is the one that claimed to have immobilized over 1600 different kinds of biomolecules onto nanoparticles, and then used chemical means to set off a fluorescence assay when any protein recognized them. When actual organic chemists got a look at their scheme - something that apparently never happened during the review process - flags went up. As shown in that January post (and all over the chemical blogging world), the actual reactions looked, well, otherworldly.
Science was already backtracking within the first couple of months, and back in the summer, an institutional committee recommended that it be withdrawn. Since then, people have been waiting for the thunk of another shoe dropping, and now it's landed: the entire paper has been retracted. (More at C&E News). The lead author, though, tells Nature that other people have been using his methods, as described, and that he's still going to clear everything up.
I'm not sure how that's going to happen, but I'll be interested to see the attempt being made. The organic chemistry in the original paper was truly weird (and truly unworkable), and the whole concept of being able to whip up some complicated reaction schemes in the presence of a huge number of varied (and unprotected) molecules didn't make sense. The whole thing sounded like a particularly arrogant molecular biologist's idea of how synthetic chemistry should work: do it like a real biologist does! Sweeping boldly across the protein landscape, you just make them all work at the same time - haven't you chemists ever heard of microarrays? Of proteomics? Why won't you people get with the times?
And the sorts of things that do work in modern biology would almost make you believe in that approach, until you look closely. Modern biology depends, though, on a wonderful legacy, a set of incredible tools bequeathed to us by billions of years of the most brutal product-development cycles imaginable (work or quite literally die). Organic chemistry had no Aladdin's cave of enzymes and exquisitely adapted chemistries to stumble into. We've had to work everything out ourselves. And although we've gotten pretty good at it, the actions of something like RNA polymerase still look like the works of angels in comparison.
Category: Biological News | The Scientific Literature
November 11, 2010
I've been reading an interesting new paper from Stuart Schreiber's research group(s) in PNAS. But I'm not sure if the authors and I would agree on the reasons that it's interesting.
This is another in the series that Schreiber has been writing on high-throughput screening and diversity-oriented synthesis (DOS). As mentioned here before, I have trouble getting my head around the whole DOS concept, so perhaps that's the root of my problems with this latest paper. In many ways, it's a companion to one that was published earlier this year in JACS. In that paper, he made the case that natural products aren't quite the right fit for drug screening, which fit with an earlier paper that made a similar claim for small-molecule collections. Natural products, the JACS paper said, were too optimized by evolution to hit targets that we don't want, while small molecules are too simple to hit a lot of the targets that we do. Now comes the latest pitch.
In this PNAS paper, Schreiber's crew takes three compound collections: 6,152 small commercial molecules, 2,477 natural products, and 6,623 from academic synthetic chemistry (with a preponderance of DOS compounds), for a total of 15,252. They run all of these past a set of 100 proteins using their small-molecule microarray screening method, and look for trends in coverage and specificity. What they found, after getting rid of various artifacts, was that about 3400 compounds hit at least one protein (and if you're screening 100 proteins, that's a perfectly reasonable result). But, naturally, these hits weren't distributed evenly among the three compound collections. 26% of the academic compounds were hits, and 23% of the commercial set, but only 13% of the natural products.
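Those figures hang together, as a quick sanity check shows (the percentages as quoted are rounded, so the reconstruction is necessarily approximate):

```python
# Reconstructing the overall hit count from the per-collection sizes
# and hit rates quoted from the paper.
collections = {
    "academic/DOS":     (6623, 0.26),
    "commercial":       (6152, 0.23),
    "natural products": (2477, 0.13),
}

total_size = sum(n for n, _ in collections.values())
total_hits = sum(round(n * frac) for n, frac in collections.values())

print(total_size)   # 15252
print(total_hits)   # 3459 -- consistent with "about 3400"
```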
Looking at specificity, it appears that the commercial compounds were more likely, when they hit, to hit six or more different proteins in the set, and the natural products the least. Looking at it in terms of compounds that hit only one or two targets gave a similar distribution - in each case, the DOS compounds were intermediate, and that turns out to be a theme of the whole paper. They analyzed the three compound collections for structural features, specifically their stereochemical complexity (chiral carbons as a percent of all carbons) and shape complexity (sp3 carbons as a percent of the whole). And that showed that the commercial set was biased towards the flat, achiral side of things, while the natural products were the other way around, tilted toward the complex, multiple-chiral-center end. The DOS-centric screening set was right in the middle.
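Both metrics are simple ratios, and a toy calculation shows how they separate the three sets. The molecules and atom counts below are invented purely for illustration; a real analysis would compute these from actual structures with a cheminformatics toolkit such as RDKit:

```python
# Toy illustration of the two complexity metrics used in the paper:
# stereochemical complexity = stereocenters / total carbons, and
# shape complexity = sp3 carbons / total carbons.
def complexity(n_carbons, n_stereocenters, n_sp3_carbons):
    return (n_stereocenters / n_carbons, n_sp3_carbons / n_carbons)

# (carbons, stereocenters, sp3 carbons) -- hypothetical numbers
flat_commercial = complexity(18, 0, 3)    # biaryl-ish screening compound
natural_product = complexity(30, 9, 24)   # polyketide-ish natural product
dos_compound    = complexity(24, 4, 12)   # something in between

for name, (stereo, sp3) in [("commercial", flat_commercial),
                            ("natural product", natural_product),
                            ("DOS", dos_compound)]:
    print(f"{name}: stereo complexity {stereo:.2f}, sp3 fraction {sp3:.2f}")
```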
The take-home, then, is similar to the other papers mentioned above: small molecule collections are inadequate, natural product collections are inadequate: therefore, you need diversity-oriented synthesis compounds, which are just right. I'll let Schreiber sum up his own case:
. . .Both protein-binding frequencies and selectivities are increased among compounds having: (i) increased content of sp3-hybridized atoms relative to commercial compounds, and (ii) intermediate frequency of stereogenic elements relative to commercial (low frequency) and natural (high frequency) compounds. Encouragingly, these favorable structural features are increasingly accessible using modern advances in the methods of organic synthesis and commonly targeted by academic organic chemists as judged by the compounds used in this study that were contributed by members of this community. On the other hand, these features are notably deficient in members of compound collections currently widely used in probe- and drug-discovery efforts.
But something struck me while reading all this. The two metrics used to characterize these compound collections are fine, but they're also two that would be expected to distinguish them thoroughly - after all, natural products do indeed have a lot of chiral carbons, and run-of-the-mill commercial screening sets do indeed have a lot of aryl rings in them. There were several other properties that weren't mentioned at all, so I downloaded the compound set from the paper's supporting information and ran it through some in-house software that we use to break down such things.
I can't imagine, for example, evaluating a compound collection without taking a look at the molecular weights. Here's that graph - the X axis is the compound number, Y-axis is weight in Daltons:
The three different collections show up very well this way, too. The commercial compounds (almost every one under 500 MW) are on the left. Then you have that break of natural products in the middle, with some real whoppers. And after that, you have the various DOS libraries, which were apparently entered in batches, which makes things convenient.
Notice, for example, that block of them standing up around 15,000 - that turns out to be the compounds from this 2004 Schreiber paper, which are a bunch of gigantic spirooxindole derivatives. In this paper, they found that this particular set was an outlier in the academic collection, with a lot more binding promiscuity than the rest of the set (and they went so far as to analyze the set with and without it included). The earlier paper, though, makes the case for these compounds as new probes of cellular pathways, but if they hit across so many proteins at the same time, you have to wonder how such assays can be interpreted. The experiments behind these two papers seem to have been run in the wrong order.
Note, also, that the commercial set includes a lot of small compounds, even many below 250 MW. This is down in the fragment screening range, for sure, and the whole point of looking at compounds of that molecular weight is that you'll always find something that binds to some degree. Downgrading the commercial set for promiscuous binding when you set the cutoffs that low isn't a fair complaint, especially when you consider that the DOS compounds have a much lower proportion of compounds in that range. Run a commercial/natural product/DOS comparison controlled for molecular weight, and we can talk.
I also can't imagine looking over a collection and not checking logP, but that's not in the paper, either. But here you are:
In this case, the natural products (around compound ID 7500) are much less obvious, but you can certainly see the different chemical classes standing out in the DOS set. Note, though, that those compounds explore high-logP regions that the other sets don't really touch.
How about polar surface area? Now the natural products really show their true character - looking over the structures, that's because there are an awful lot of polysaccharide-containing things in there, which will run your PSA up faster than anything:
And again, you can see the different libraries in the DOS set very clearly.
So there are a lot of other ways to distinguish these compounds, ways that (to be frank) are probably much more relevant to their biological activity. Just the molecular-weight one is a deal-breaker for me, I'm afraid. And that's before I start looking at the structures in the three collections at all. Now, that's another story.
I have to say, from my own biased viewpoint, I wouldn't pay money for any of the three collections. The natural product one, as mentioned, goes too high in molecular weight and is too polar for my tastes. I'd consider it for antibiotic drug discovery, but with gritted teeth. The commercial set can't make up its mind if it's a fragment collection or not. There are a bunch of compounds that are too small even for my tastes in fragments - 4-methylpyridine, for example. And there are a lot of ugly functional groups: imines of beta-naphthylamine, which should not even get near the front door (unstable fluorescent compounds that break down to a known carcinogen? Return to sender). There are hydroxylamines, peroxides, thioureas, all kinds of things that I would just rather not spend my time on.
And what of the DOS collection? Well, to be fair, not all of it is DOS - there are a few compounds in there that I can't figure out, like isoquinoline, which you can buy from the catalog. But the great majority are indeed diversity-oriented, and (to my mind), diversity-oriented to a fault. The spirooxindole library is probably the worst - you should see the number of aryl rings decorating some of those things; it's like a fever dream - but they're not the only offenders in the "Let's just hang as many big things as we can off this sucker" category. Now, there are some interesting and reasonable DOS compounds in there, too, but there are also more endoperoxides and such. (And yes, I know that there are drug structures with endoperoxides in them, but damned few of them, and art is long while life is short). So no, I wouldn't have bought this set for screening, either; I'd have cherry-picked about 15 or 20% of it.
Summary of this long-winded post? I hate to say it, but I think this paper has its thumb on the scale. I'm just around the corner from the Broad Institute, though, so maybe a rock will come through my window this afternoon. . .
Category: Academia (vs. Industry) | Drug Assays | Drug Development | Natural Products
From reader Jose, in the comments thread to the most recent post:
"I find it ironic that so many pharma sites who hired hotshot architects to design labspaces that foster as much personal interaction as possible, are now pumping the virtues of collaborations across 10 time zones."
Category: Business and Markets | Drug Industry History | Life in the Drug Labs
November 10, 2010
A reader from a large company sends this along - it's the text of a letter that he's wanted to send to C&E News, but since, as he puts it, "they don't publish anonymous letters and I still need to work", he decided that it would never see the light of day. I offered to help him out with that.
I've written many times on this blog about outsourcing, mainly on the theme of "it isn't going away, so we're going to have to learn to deal with it". And I've seen companies use it well, but there's no doubt that there are companies that are either (a) using it poorly, or (b) taking the idea further than it can go. Outsourcing to a cheaper country is not a magic wand, for sure - the problem is, perhaps, that to an accountant it might look like one. At any rate, here's the letter.
In a recent edition (25th Oct 2010, “The Grand Experiment”) you state that Merck & Co. targets 25% external R&D and that AstraZeneca is striving for 40%. I recently talked to all the project managers who oversee our current collaborations. The stories of naivety, incompetence and missed deadlines by the outsource companies were legion. The managers I talked to mostly used in-house resource and expertise to paper over the cracks. Why?
When asked whether they had reported these problems up the chain of command, the answer was always no. The reasons?
1. “If we have four collaborations and mine is the only one reporting problems, which three project managers do you think will get a bonus?”
2. “They won’t believe me, they will just think I am trying to protect jobs here.”
3. “You can’t swim against the tide.”
4. “When it goes bad here, I might be able to get a job with the collaborator.”
5. “My next job will be outside chemistry as a project manager. The last thing I need is any negative vibes around this collaboration.”
6. “I want to be the out-sourcing manager when that is all that is left here. Do you think I want any trouble to become visible?”
So, as far as senior management know, it is all going very well.
Unfortunately I can’t attach my name and organization. I need a job too and telling the truth is not always that popular, as many out-sourcing managers will have experienced. . .
These are valid points, and any company that is using (or thinking of using) a significant amount of outsourcing should pay attention. Just as with internal efforts, Something Upper Management Wants can too easily turn into Something Upper Management Is Going To Do No Matter What. And with outsourcing, the problems can be both harder to detect and potentially more severe. Because what you don't want is Something Upper Management Will Be Told Is Going Great, if it's really not.
Category: Business and Markets
November 9, 2010
The same paper I was summarizing the other day has some interesting data on the 1998-2007 drug approvals, broken down by country and region of origin. The first thing to note is that the distribution by country tracks, quite closely, the corresponding share of the worldwide drug market. The US discovered nearly half the drugs approved during that period, and accounts for roughly that amount of the market, for example. But there are two big exceptions: the UK and Switzerland, which both outperform for their size.
In case you're wondering, the league tables look like this: the US leads in the discovery of approved drugs, by a wide margin (118 out of the 252 drugs). Then Japan, the UK and Germany are about equal, in the low 20s each. Switzerland comes in next at 13, France at 12, and then the rest of Europe put together adds up to 29. Canada and Australia together add up to nearly 7, and the entire rest of the world (including China and India) is about 6.5, with most of that being Israel.
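Those counts do add up to roughly the 252 total. A quick sketch: the US, rest-of-Europe, Switzerland, France, Canada-plus-Australia and rest-of-world numbers are the ones given above, but Japan, the UK and Germany are described only as "about equal, in the low 20s each", so the 22 apiece below is a placeholder assumption, not a figure from the paper:

```python
# League-table sanity check. Japan/UK/Germany entries are placeholder
# values for "low 20s each", not exact figures from the paper.
counts = {
    "US": 118,
    "Japan": 22, "UK": 22, "Germany": 22,   # placeholders
    "rest of Europe": 29,
    "Switzerland": 13,
    "France": 12,
    "Canada + Australia": 7,
    "rest of world": 6.5,
}

total = sum(counts.values())
print(total)                                   # 251.5, close to 252
print(f"US share: {counts['US'] / 252:.1%}")   # 46.8% -- "nearly half"
```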
But while the US may be producing the number of drugs you'd expect, a closer look shows that it's still a real outlier in several respects. The biggest one, to my mind, comes when you use that criterion for innovative structures or mechanisms versus extensions of what's already been worked on, as mentioned in the last post. Looking at it that way, almost all the major drug-discovering countries in the world were tilted towards less innovative medicines. The only exceptions are Switzerland, Canada and Australia, and (very much so) the US. The UK comes close, running nearly 50/50. Germany and Japan, though, especially stand out as the kings of follow-ons and me-toos, and the combined rest-of-Europe category is nearly as unbalanced.
What about that unmet-medical-need categorization? Looking at which drugs were submitted here in the US for priority review by the FDA (the proxy used across this whole analysis), again, the US-based drugs are outliers, with more priority reviews than not. Only in the smaller contributions from Australia and Canada do you see that, although Switzerland is nearly even. But in both these breakdowns (structure/mechanism and medical need) it's the biotech companies that appear to have taken the lead.
And here's the last outlier that appears to tie all these together: in almost every country that discovered new drugs during that ten-year period, the great majority came from pharma companies. The only exception is the US: 60% of our drugs have the fingerprints of biotech companies on them, either alone or from university-derived drug candidates. In very few other countries do biotech-derived drugs make much of a showing at all.
These trends show up in sales as well. Only in the US, UK, Switzerland, and Australia did the per-year-sales of novel therapies exceed the sales of the follow-ons. Germany and Japan tend to discover drugs with higher sales than average, but (as mentioned above) these are almost entirely followers of some sort.
Taken together, it appears that the US biotech industry has been the main driver of innovative drugs over the past ten years. I don't want to belittle the follow-on compounds, because they are useful. (As pointed out here before, it's hard for one of those compounds to be successful unless it really represents some sort of improvement over what's already available). At the same time, though, we can't run the whole industry by making better and better versions of what we already know.
And the contributions of universities - especially those in the US - have been strong, too. While university-derived drugs are a minority, they tend to be more innovative, probably because of their origins in basic research. There's no academic magic involved: very few, if any, universities try deliberately to run a profitable drug-discovery business - and if any start to, I confidently predict that we'll see more follow-on drugs from them as well.
Discussing the reasons for all this is another post in itself. But whatever you might think about the idea of American exceptionalism, it's alive in drug discovery.
Category: Academia (vs. Industry) | Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why
November 8, 2010
Here's an excellent background article on epigenetics, especially good for getting up to speed if you haven't had the opportunity to think about what gene transcription must really be like down on a molecular level.
This also fits in well with some of the obituaries that I and others have written for the turn-of-the-millennium genomics frenzy. There is, in short, an awful lot more to things than just the raw genetic code. And as time goes on, the whole the-code-is-destiny attitude that was so pervasive ten years ago (the air hasn't completely cleared yet) is looking more and more mistaken.
Category: Biological News
I noticed this editorial in Nature Structural and Molecular Biology, on getting scientific results out to the public. It's worth reading, but not in the way that they think. It starts out reasonably well:
As members of the research community, we know we can't rely on the popular media to correct the misperceptions the public might harbor about science-related issues. According to a 2009 Pew Research Center survey of Americans, carried out in conjunction with the American Association for the Advancement of Science (AAAS), 76% of scientists feel the media do not adequately distinguish between substantial findings and those that are unfounded. Although it would be easy to say that the public “just doesn't get it,” the burden of passing along the understanding and implications of contemporary science falls squarely on the shoulders of those actively engaged in funding, publishing and carrying out research.
That's been said before, as the editorial itself notes, but it's no less true for all that. And the advice that follows is sound, if still rather boring: when you talk to non-scientists, try to gauge how much they know about the subject (without offending people), lay off the acronyms and jargon, look for helpful (and accurate) analogies, and so on. All fine.
But then the piece floats off into the mist - or, more accurately, floats off into about the year 1976. How do we get the word out to the public? Well, we need public officials on our side, it says. But take heart! "Globally, several world leaders have voiced support for increasing the promotion of science in their countries", and that should cheer anyone up on a rainy Monday morning. How anyone was able to type that line without burying their head in their hands is beyond me.
There's more. "It is important that we engage the public where they are", says the editorial, and I can't argue with that one, since trying to engage 'em where they ain't is unlikely to prove fruitful. And here comes the rain of musty pillows again: "A growing number of organizations and institutions are seeking to do this through several different approaches", says the next line. You can just hear the (unsigned) writer thinking "Dang it, what's the word count supposed to be on this thing again?" Whoever it is goes on to point out that Sloan-Kettering hosts an annual seminar for just that purpose.
It's only in the last couple of lines that anything useful gets said. Because if we agree that the public should know more about science, and if we've decided that we should go where they are to realize that, then the two places I'm sure that they might be found are online and watching TV, and maybe both at the same time. Just under the wire, the editorial manages to mention that there are these things called web sites, and even (quickly and quietly) suggests that people start their own.
I like that one, understandably. And although it's not like I get millions of readers here, I still get a lot more than I ever thought (between 350,000 and 400,000 page views a month these days). Many are people who are already in the sciences, but I continue to hear from readers with no particular science background at all, which makes me very happy indeed.
But how much science do I really get across? Well, it's not like I'm trying to teach people to do drug discovery, since it's unfortunately not well suited to trying at home. What I'd like for all science outreach activities to do, though, is get across what science really is, what research is like, and broadly how it works. There are so many things that people outside the field don't necessarily get to experience or realize: how much time we spend chasing ideas that weren't right, for one. How much time we spend making sure that we made what we thought we made, or that we did what we thought we did, and trying to nail down how much we can believe what we think that we know. How little that is, in many cases, and how we're always getting surprised even in the areas that looked well-understood.
Real scientific research is quite bizarre by the standards of many other occupations, and I don't think that people get to understand that. (I might add that the ways in which science gets compressed for dramatic effect tend to obscure all these things - TV and movie scientists are always so sure of themselves, and get their rock-solid results so quickly). So rather than start off by trying to teach everyone lots of details, I'd rather that more people understood what the whole effort is like. . .
Category: Press Coverage
November 5, 2010
Over at Ars Technica, here's an excellent look at the peer review process, which I last spoke about here. The author, Chris Lee, rightly points out that we ask it to do several different things, and it's not equally good at all of them.
His biggest problem is with the evaluation of research proposals for grants, and that has indeed been a problem for many years. Reviewing a paper, where you have to evaluate things that other people have done, can be hard enough. But evaluating what people hope to be able to do is much harder:
. . .Reviewers are asked to evaluate proposed methods, but, given that the authors themselves don't yet know if the methodology will work as described, how objective can they be? Unless the authors are totally incompetent and are proposing to use a method that is known not to work in the area they wish to use it, the reviewer cannot know what will happen.
As usual, there is no guarantee that the reviewer is more of an expert in the area than the authors. In fact, it's more often the case that they're not, so whose judgement should be trusted? There is just no way to tell a good researcher combined with incompetent peer review from an incompetent researcher and good peer review.
Reviewers are also asked to judge the significance of the proposed research. But wait—if peer review fails to consistently identify papers that are of significance when the results are in, what chance does it have of identifying significant contributions that haven't yet been made? Yeah, get out your dice. . .
And as he goes on to point out, the consequences of getting a grant proposal reviewed poorly are much worse than the ones from getting a paper's review messed up. These are both immediate (for the researcher involved) and systemic:
There is also a more insidious problem associated with peer review of grant applications. The evaluation of grant proposals is a reward-and-punishment system, but it doesn't systematically reward good proposals or good researchers, and it doesn't systematically reject bad proposals or punish poor researchers. Despite this, researchers are wont to treat it as if it was systematic and invest more time seeking the rewards than they do in performing active research, which is ostensibly where their talents lie.
Effectively, in trying to be objective and screen for the very best proposals, we waste a lot of time and fail to screen out bad proposals. This leads to a lot of cynicism and, although I am often accused of being cynical, I don't believe it is a healthy attitude in research.
I fortunately haven't ever had to deal with this process, having spent my scientific career in industry, but we have our own problems with figuring out which projects to advance and why. Anyone who's interested in peer review, though, should know about the issues that Lee is bringing up. Well worth a read.
Category: The Scientific Literature | Who Discovers and Why
November 4, 2010
A reader sent this paper along the other day. Is it just me, or does it seem a bit odd to talk about how aryl coupling in these systems is traditionally done by (list of metal-catalyzed reactions), which unfortunately involve (list of toxic and/or expensive metals) under (list of rigorous conditions involving oxygen exclusion and protecting groups). . .and then propose as a shiny new alternative: three equivalents of aluminum chloride?
Not that there's anything particularly wrong with aluminum chloride. The workup is much nastier than with the metal-catalyzed couplings, though, and I'd think that the waste stream is also more hefty. And I'm willing to bet that a lot more structures can survive Suzuki coupling conditions than can survive scoops of aluminum chloride, too. But it certainly is a lot cheaper and simpler to set up.
Still, isn't this just more or less the aryl-Friedel-Crafts (Scholl) reaction? And haven't very similar couplings been reported before, many times? This new paper cites a few of these (but not that last one). Maybe it's just the whole "Now we can finally get rid of all that palladium" tone. . .
Category: Chemical News
We can now answer the question: "Where do new drugs come from?". Well, we can answer it for the period from 1998 on, at any rate. A new paper in Nature Reviews Drug Discovery takes on all 252 drugs approved by the FDA from then through 2007, and traces each of them back to its origins. What's more, each drug is evaluated by how much unmet medical need it addressed and how scientifically innovative it was. Clearly, there's going to be room for some argument in any study of this sort, but I'm very glad to have it, nonetheless. Credit where credit's due: who's been discovering the most drugs, and who's been discovering the best ones?
First, the raw numbers. In the 1998-2007 period, the 252 drugs break down as follows. Note that some drugs have been split up, with partial credit being assigned to more than one category. Overall, we have:
58% from pharmaceutical companies.
18% from biotech companies.
16% from universities, transferred to biotech.
8% from universities, transferred to pharma.
That sounds about right to me. And finally, I have some hard numbers to point to when I next run into someone who tries to tell me that all drugs are found with NIH grants, and that drug companies hardly do any research. (I know that this sounds like the most ridiculous strawman, but believe me, there are people - who regard themselves as intelligent and informed - who believe this passionately, in nearly those exact words). But fear not, this isn't going to be a relentless pharma-is-great post, because it's certainly not a pharma-is-great paper. Read on. . .
Now to the qualitative rankings. The author used FDA priority reviews as a proxy for unmet medical need, but the scientific innovation rating was done basically by hand, evaluating both a drug's mechanism of action and how much its structure differed from what had come before. Just under half (123) of the drugs during this period were in for priority review, and of those, we have:
46% from pharmaceutical companies.
30% from biotech companies.
23% from universities (transferred to either biotech or pharma).
That shows the biotech- and university-derived drugs outperforming when you look at things this way, which again seems about right to me. Note that this means that the majority of biotech submissions are priority reviews, and the majority of pharma drugs aren't. And now to innovation - 118 of the drugs during this period were considered to have scientific novelty (46%), and of those:
44% were from pharmaceutical companies.
25% were from biotech companies, and
31% were from universities (transferred to either biotech or pharma).
The university-derived drugs clearly outperform in this category. What this also means is that 65% of the pharma-derived drugs get classed as "not innovative", and that's worth another post all its own. Now, not all the university-derived drugs showed up as novel, either - but when you look closer, it turns out that the majority of the novel stuff from universities gets taken up by biotech companies rather than by pharma.
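Those derived figures are easy to sanity-check from the round numbers quoted above. Here's a quick sketch - and since only rounded percentages appear in the post, the counts are approximate:

```python
# Rough cross-check of the figures above, using only the rounded
# percentages quoted in the post (so counts are approximate).
total_drugs = 252       # all FDA approvals in the period
priority_drugs = 123    # of those, priority reviews
innovative_drugs = 118  # of those, rated scientifically novel

pharma_drugs = 0.58 * total_drugs            # ~146 from pharma overall
innovative_pharma = 0.44 * innovative_drugs  # ~52 of the novel ones from pharma
not_innovative_frac = 1 - innovative_pharma / pharma_drugs

biotech_drugs = 0.18 * total_drugs           # ~45 from biotech overall
biotech_priority = 0.30 * priority_drugs     # ~37 biotech priority reviews

print(f"'not innovative' share of pharma drugs: {not_innovative_frac:.0%}")
print(f"biotech drugs that were priority reviews: {biotech_priority / biotech_drugs:.0%}")
```

That reproduces the roughly 65% "not innovative" figure for the pharma-derived drugs (within rounding), and confirms that a clear majority of the biotech-derived drugs came in as priority reviews.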
So why does this happen? This paper doesn't put it in one word, but I will: money. It turns out that the novel therapies are disproportionately orphan drugs (which makes sense), and although there are a few orphan-drug blockbusters, most of them have lower sales. And indeed, the university-to-pharma drugs tend to have much higher sales than the university-to-biotech ones. The bigger drug companies are (as you'd expect) evaluating compounds on the basis of their commercial potential, which means what they can add to their existing portfolio. On the other hand, if you have no portfolio (or have only a small one), then any commercial prospect is worth a look. One hundred million dollars a year in revenue would be welcome news for a small company's first drug to market, whereas Pfizer wouldn't even notice it.
So (in my opinion) it's not that the big companies are averse to novel therapies. You can see them taking whacks at new mechanisms and unmet needs, but they tend to do it in the large-market indications - which I think may well be more likely to fail. That's due to two effects: if there are existing therapies in a therapeutic area, they probably represent the low-hanging fruit, biologically speaking, making later approaches harder (and giving them a higher bar to clear). And if there's no decent therapy at all in some big field, that probably means that none of the obvious approaches have worked at all, and that it's just a flat-out hard place to make progress. In the first category, I'm thinking of HDL-raising ideas in cardiovascular and PPAR alpha-gamma ligands for diabetes. In the second, there are CB1 antagonists for obesity and gamma-secretase inhibitors in Alzheimer's (and there are plenty more examples in each class). These would all have done new things in big markets, and they've all gone down in expensive flames. Small companies have certainly taken their cuts at these things, too, but they're disproportionately represented in smaller indications.
There's more interesting stuff in this paper, particularly on what regions of the world produce drugs and why. I'll blog about it again, but this is plenty to discuss for now. The take-home so far? The great majority of drugs come from industry, but the industry is not homogeneous. Different companies are looking for different things, and the smaller ones are, other things being equal, more likely to push the envelope. More to come. . .
Category: Academia (vs. Industry) | Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why
November 3, 2010
This article is getting the "cure for the common cold" push in a number of newspaper headlines and blog posts. I'm always alert for those, because, as a medicinal chemist, I can tell you that finding a c-for-the-c-c is actually very hard. So how does this one look?
I'd say that this falls into the "interesting discovery, confused reporting" category, which is a broad one. The Cambridge team whose work is getting all the press has actually found something that's very much worth knowing: that antibodies actually work inside human cells. Turns out that when antibody-tagged viral particles are taken up into cells, they mark the viruses for destruction in the proteasome, a protein complex that's been accurately compared to an industrial crushing machine at a recycling center. No one knew this up until now - the thought had been that once a virus succeeds in entering the cell, the game was pretty much up. But now we know that there is a last line of defense.
Some of the press coverage makes it sound as if this is some new process, a trick that cells have now been taught to perform. But the point is that they've been doing it all along (at least to nonenveloped viruses with antibodies on them), and that we've just now caught on. Unfortunately, that means that all our viral epidemics take place in the face of this mechanism (although they'd presumably be even worse without it). So where does this "cure for the common cold" stuff come in?
That looks like confusion over the mechanism to me. Let's go to the real paper, which is open-access in PNAS. The key protein in this process has been identified as tripartite-motif 21 (TRIM21), which recognizes immunoglobulin G and binds to antibodies extremely tightly (sub-nanomolar). This same group identified this protein a few years ago, and found that it's highly conserved across many species, and binds an antibody region that never changes - strong clues that it's up to something important.
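For a sense of what "sub-nanomolar" means energetically, the standard conversion is the binding free energy, dG = RT ln(Kd). A quick sketch - the 1 nM dissociation constant here is an illustrative round number, not a figure from the paper:

```python
import math

# Free-energy equivalent of nanomolar binding: dG = RT * ln(Kd).
# Kd = 1 nM is an illustrative round number, not a value from the paper.
R = 1.987e-3  # gas constant in kcal/(mol*K)
T = 298.0     # room temperature in kelvin
Kd = 1e-9     # 1 nM, expressed in molar units

dG = R * T * math.log(Kd)  # more negative = tighter binding
print(f"dG at Kd = 1 nM: {dG:.1f} kcal/mol")  # about -12 kcal/mol
```

Sub-nanomolar just means tighter still: each additional factor of ten in Kd is worth about another 1.4 kcal/mol at room temperature.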
Another region of TRIM21 suggested what that might be. It has a domain that's associated with ubiquitin ligase activity, and tagging something inside the cell with ubiquitin is like slapping a waste-disposal tag on it. Ubiquitinated proteins tend to either get consumed where they stand or dragged off to the proteasome. And sure enough, a compound that's known to inhibit the action of the proteasome also wiped out the TRIM21-based activity. A number of other tests (for levels of ubiquitination, localization within the cell, and so on) all point in the same direction, so this looks pretty solid.
But how do you turn this into a therapy, then? The newspaper articles have suggested it as a nasal spray, which raises some interesting questions. (Giving it orally is a nonstarter, I'd think: with rare exceptions, we tend to just digest every protein that gets into the gut, so all a TRIM21 pill would do is provide you with a tiny (and expensive) protein supplement). Remember, this is an intracellular mechanism; there's presumably not much of a role for TRIM21 outside the cell. Would a virus/antibody/TRIM21 complex even get inside the cell to be degraded? On the other hand, if that kept the virus from even entering the cell, that would be an effective therapy all its own, albeit through a different mechanism than ever intended.
But hold on: there must be some reason why this mechanism doesn't always work perfectly - otherwise, no nonenveloped virus would have much of a chance. My guess is that the TRIM21 pathway is pretty efficient, but that enough viral particles miss getting labeled by antibodies to keep it from always triggering. If that's true, then TRIM21 isn't the limiting factor here - antibody response is - and it could be tough to rev up this pathway.
Still, these are early days. I'm very happy to see this work, because it shows us (again) how much we don't know about some very important cellular processes. Until this week, no one ever realized that there was such a thing as an intracellular antibody response. What else don't we know?
Category: Biological News | Infectious Diseases
November 2, 2010
Medicinal chemists spend an awful lot of time working with SAR, structure-activity relationship(s). That's how we think: hmm, what happens if I put a chloro there? If I make that ring one size larger? If I flip that stereocenter/add a nitrogen/tie back that chain? Ideally, you pick up on a trend that you can exploit to give you a better compound, but the problem is, no SAR trend lasts forever. Methyl's good, ethyl's fine, anything bigger falls off the cliff - that sort of thing.
Activity "cliffs" of this sort are the subject of a paper earlier this year in the Journal of Chemical Information and Modeling. (For some earlier approaches to this same type of question, see here, here, here, and especially here). This group (from Germany) looked over several public SAR databases and used a new algorithm to extract "matched molecular pairs", which are compounds that differ only at one point in their structure. And what they were looking for wasn't the orderly progressions; they were after the changes that tended to suddenly change the activity of a compound by at least 100-fold. Were there, they wondered, functional group shifts that have a greater or lesser chance of doing that, over a wide range of targets and compound classes?
It looks like there are, and they're the transformations that you might well imagine. Messing around with a carboxyl group, for example, seems rarely to be a neutral event. Carboxylates are so relentlessly polar and hydrogen-bonding that your SAR is probably going to love 'em or hate 'em. The next two liveliest groups were carbonyls (in general) and amines. Of less interest (but equally believable) is the transformation from methyl to bulky alkyl (or vice versa, which is the direction I'd recommend people try to go if at all possible - other things being equal, no one should grease up their compounds unless there's absolutely no choice).
Well, it needs no ghost come from the grave to tell us this, either. How about any surprises? Adding a secondary hydroxyl group was surprisingly silent, compared to what you might picture. And switching from secondary to tertiary amines (just with methyl groups) is a much less conservative switch than you might imagine, with several huge activity shifts across different target classes. Introduction of methyl ethers rarely affected things much one way or another, and that might account for the low tendency of dimethylamine-to-morpholine switches to do much of anything. Small halogens on aryl rings (fluorine, chlorine) had low potential to cause big shifts, with ortho-chloros showing no examples of that happening at all. Odd (at least to me) was the fact that morpholine-to-alkylpiperazine showed almost no big changes, either.
But it has to be emphasized that these are (1) averages and (2) averages over a large (but not gigantic) data set. For example, one of the "no changes at all" transformations is a favorite med-chem isostere, thiophene for phenyl. And that's true - most of the time, that does nothing. But I've seen two examples in my career when that one actually caused a big change in activity, so it's rare, but not impossible. That's the thing that makes med-chem so enjoyable and so frustrating at the same time. It's full of things (like actually discovering a drug) that are rare, but not quite impossible.
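The matched-pair bookkeeping described above is simple enough to sketch. Here's a toy version - invented compounds and potencies, not the paper's algorithm or data: compounds sharing a scaffold and differing only at a single [R] position form a matched pair, and any pair whose potencies differ by 100-fold or more counts as an activity cliff.

```python
from collections import defaultdict
from itertools import combinations

# Toy matched-molecular-pair / activity-cliff finder. Compounds are invented:
# (scaffold with an [R] attachment point, substituent at [R], potency in nM).
compounds = [
    ("c1ccccc1[R]",  "COOH",  5.0),
    ("c1ccccc1[R]",  "CONH2", 900.0),
    ("c1ccccc1[R]",  "CH3",   12.0),
    ("Nc1ncccc1[R]", "COOH",  40.0),
    ("Nc1ncccc1[R]", "CH3",   35.0),
]

# Group by scaffold: any two entries sharing one are a matched pair,
# differing only at the single [R] point.
by_scaffold = defaultdict(list)
for scaffold, sub, potency in compounds:
    by_scaffold[scaffold].append((sub, potency))

CLIFF_FOLD = 100.0  # a >=100-fold potency shift counts as a cliff
cliffs = []
for scaffold, members in by_scaffold.items():
    for (s1, p1), (s2, p2) in combinations(members, 2):
        fold = max(p1, p2) / min(p1, p2)
        if fold >= CLIFF_FOLD:
            cliffs.append((s1, s2, fold))
        print(f"{s1} -> {s2} on {scaffold}: {fold:.0f}-fold")

print("cliffs:", cliffs)
```

In this toy set the only cliff is the carboxylic acid to primary amide swap, in line with the paper's observation that fiddling with a carboxylate is rarely a neutral event.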
Category: Life in the Drug Labs
November 1, 2010
There seems to be some disagreement within the US government on the patentability of human genes. The Department of Justice filed an amicus brief (PDF) in the Myriad Genetics case involving the BRCA genes, saying that it believes that genes are products of nature, and therefore unpatentable.
But this runs counter to the current practice of the US Patent and Trademark Office, which does indeed grant such patents. No lawyers from the PTO appear on the brief, which may be a significant clue as to how they feel about this. And at any rate, gene patentability is going to be worked out in the courts, rather than by any sort of statement from any particular agency, which takes us back to the Myriad case. . .
Category: Biological News | Patents and IP
This article reminds me of the "designer drug" era in the 1980s. The Wall Street Journal profiles one of the many European chemical entrepreneurs making a fortune by synthesizing and selling new psychoactive drugs. And they're all labeled "Not For Human Consumption", so hey, everything's perfectly legal. Until the authorities ban the specific substance, naturally, and then he moves on to another one down the list.
As someone who doesn't see a new chemical structure go into humans until years of testing have been done, you can imagine what I think of this. The small amount of amazement I feel is completely overwhelmed by contempt for anyone who would dose people with an untried CNS drug. Oh, but he's not dosing anyone, is he? All he's doing is selling them little vials of white powdery stuff for $30/gram, and it says right on the label that they're not supposed to take it. Right? How people like this sleep at night is a continuing mystery to me.
Making new psychoactive drugs is not that hard. There are plenty of chemotypes out there that will drop you right into the CNS receptors. In many cases, it looks like this guy and his ilk are hanging single-atom changes off of existing drugs. They also monitor the chemical literature, specifically mentioning papers by David Nichols of Purdue, who's well aware of what's going on (and has the same reaction I do). No, there are plenty of small changes to ring on known scaffolds; it's not like anyone's having to invent any new chemical classes here.
So, how do they make these things in quantity? The article treats a rota-vap as an exotic piece of equipment, so we're not going to learn too much from it. I imagine, though, that there's a lot of used lab equipment floating around, which must help. The article also mentions that this particular business has labs in the Netherlands and Scotland, outfitted with custom stainless-steel gear made by a welder, so as not to draw attention by buying standard chemistry apparatus. (This is as good a time as any to mention that one of the things that irritates me about these people is the way they make owning any kind of chemistry equipment at home instantly suspicious in the eyes of the law).
That takes a back seat, though, to my feelings about the other aspects of this business. I'm not, admittedly, a good person to ask about recreational drug use, because I don't use any. I have what I think are well-justified reasons for avoiding the whole spectrum, from alcohol on up. The more I've learned about brain chemistry, the less inclined I am to mess around with it.
But even if you take a more lenient attitude, I don't see how the sort of business that this article details can be excused. Advocates for decriminalizing various drugs often make the point that we know what their effects are, and that society would be better off dealing with them than dealing with the effects of trying to suppress the drugs themselves. They may be right, actually - I haven't made up my mind about that one yet - but this line of thought can't extend to the new-drug-of-the-month-club. We don't know what the effects of these substances are, what neurological damage they might do, and what other side effects they might have. That's for the customers to find out! Here's the safety testing method this moron uses:
Mr. Llewellyn, meanwhile, is unfazed. He boasts that his safety testing method is foolproof: He and several colleagues sit in a room and take a new product "almost to overdose levels" to see what happens. "We'll all sit with a pen and a pad, some good music on, and one person who's straight who's watching everything," he says.
Well, fine, then. Foolproof! This sort of thing shows that nothing is foolproof, because fools are just too ingenious. I'm ashamed to share a phylum with these people, much less a scientific discipline.
Category: The Central Nervous System | The Dark Side