About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: email@example.com
October 30, 2009
Here's a most interesting graph from the latest issue of Nature Reviews Drug Discovery. It's from an article on trying to discern trends from broad-scale literature analysis, and it's worth a separate blog post of its own (coming shortly). But after yesterday's discussion of whether there are too many graduates in science and engineering, this looked useful.
Note, for example, the ramp up in NIH funding in the late 1950s/ early 1960s (a very large change in percentage terms), which was followed by a similar surge in doctorates granted. The late-1990s funding increases seem to be having a similar effect near the end of the chart.
Note also the well-publicized drug drought - but the historical perspective is interesting. We've clearly fallen off the 1970-2000 trend line of increasing drug approvals, but we seem to be stabilizing at roughly a 1980s level. The argument is whether that's where we should be or not. We have all these new tools, but all these new worries. Lots of new targets, but fewer good ones like the old days. Many new tools, but plenty of difficult-to-interpret data generated from them. And so on. But 1985 is apparently about where the balance of all these things is putting us.
Category: Business and Markets | Drug Industry History | Who Discovers and Why
October 29, 2009
Here's one to get your attention: there's been a lot of arguing (on this blog and others) about the continual talk of shortages of scientists and engineers. That's a little hard to take for the many people who've been laid off from this industry over the last two or three years and who are often having trouble finding a new position.
A study from Rutgers and Georgetown now says, though, that there is no such shortage. Here's the PDF, so you can check it out for yourself. The intro:
A decline in both the quantity and quality of students pursuing careers in science, technology, engineering, and mathematics (STEM) is widely noted in policy reports, the popular press, and by policymakers. Fears of increasing global competition compound the perception that there has been a drop in the supply of high-quality students moving up through the STEM pipeline in the United States. Yet, is there evidence of a long-term decline in the proportion of American students with the relevant training and qualifications to pursue STEM jobs?
In a previous paper, we found that universities in the United States actually graduate many more STEM students than are hired each year, and produce large numbers of top- performing science and math students. In this paper, we explore three major questions: (1) What is the “flow” or attrition rate of STEM students along the high school to career pathway? (2) How does this flow and this attrition rate change from earlier cohorts to current cohorts? (3) What are the changes in quality of STEM students who persist through the STEM pathway?
What they're finding is (again) that there's no shortage of graduates - in fact, quite the opposite, unfortunately for wages and employment. One worrisome thing, though, is that at some point in the mid-to-late 1990s the top-performing students at both the high school and college level began to jump ship from the science/engineering fields. There are several possible explanations, but the one that comes to mind is that students are looking ahead a bit and don't like the prospects that they see and/or are lured by other fields that seem more attractive.
More on this later - for now, here's some commentary over at Science which shows that the arguing has already begun.
Category: Business and Markets | Who Discovers and Why
Here are a few more of those questions that medicinal chemists have to deal with from time to time. Most of these have no definitive answers (which is why they keep coming up!).
1. You're making a compound that looks to be important in the project - maybe even the clinical candidate, if things go right. But there's a step in the synthesis which - while it does work - is clearly not something that's going to scale up too well. You need more compound right now, and you can push things through. But you're eventually going to have to ditch that step (unless this compound gets overtaken by another one), so. . .when's the right time to worry about that?
2. Your compound series is in a pretty crowded patent landscape. In fact, another application has just published that really looks to be breathing down your neck. Of course, that means the work in it was done a year and a half ago (or more). Can you assume that Company X has followed the same course that you have, and has already investigated the series you're working on? Should you drop the series, or press on, on the chance that six months from now another application will drop that covers you like a tarp?
3. You're finally writing up one of your old projects for publication. But it's been a while, and the details of what happened are not as sharp as they were when things were going on. What's more, on looking the work over, you realize that there are some obvious gaps in it, stuff that didn't look that way at the time, but sure does now. You can write things up to make it look more coherent, but only by rearranging the way it really happened. Where do you draw the line?
4. Your lead compound is ready to go into toxicology testing, the last big step before declaring victory and naming it as the development candidate. Trouble is, there's something funny about it in rats. They just don't get the blood levels that mice and dogs do, and your tox people would really, really rather run the tox study in rats (since that's the standard, and what they have the most comparison data for). (Update: I mistakenly switched rodents mentally this morning on the train; they're now switched back to what they should be.) You can get the blood levels up to where they need to be - but only by using a dosing vehicle that might have problems of its own, and that the toxicologists haven't had much experience with either. What to do?
Category: Life in the Drug Labs
October 28, 2009
Now here's a completely weird idea: a group in Korea has encapsulated individual living yeast cells in silica. They start out by coating the cells with some charged polymers that are known to serve as a good substrate for silication, and then expose the yeast to silicic acid solution. They end up with hard-shell yeast, sort of halfway to being a bizarre sort of diatom.
The encapsulated cells behave rather differently, as no doubt would we all under such conditions. After thirty days in the cold with no nutrients, the silica-coated yeast is at least three times more viable than wild-type cells (as determined by fluorescent staining). On the other hand, when exposed to a warm nutrient broth, the silica-coated yeast does not divide, as opposed to wild-type yeast, which of course takes off like a rocket under such conditions. They're still alive, but just sitting around - which makes you wonder what signals, exactly, are interrupting mitosis.
The authors tried the same trick on E. coli bacteria, but found that the initial polymer coating step killed them off. That's disappointing, but not surprising, given that disruption of the bacterial membrane with charged species is the mode of action of several broad-spectrum antibiotics.
"Hmmm. . .so what?" might be one reaction to this work. But stop and think about it for a minute. This provides a new route to a biological/inorganic interface, a way to stitch cell biology and chemical nanotechnology together. If you can layer yeast cells with silica and they survive (and are, in fact, fairly robust), you can imagine gaining more control over the process and extending it to other substances. A layer that could at least partially conduct electricity would be very interesting, as would layers with various-sized pores built into them. The surfaces could be further functionalized with all sorts of other molecules as well for more elaborate experiments. No, this could keep a lot of people busy for a long time, and I suspect it will.
Category: Biological News
Johnson & Johnson's CEO has given an interview to the Financial Times explaining his company's strategy with acquisitions. And right now, that strategy is. . .not to make acquisitions. They see partnerships as making a lot more sense:
“The cost of developing compounds has become so high and become so risky that we are looking to share the risks and opportunities and find more and more partnerships.”
J&J has been putting this into practice recently, taking equity stakes in several different companies. In the case of Elan and Crucell, interestingly, the company has agreed to standstill provisions, in order to make it clear that they're not just on the first step to an outright acquisition any time soon. It's interesting that this would be coming from Johnson & Johnson, since in many cases they've been one of the less destructive acquirers in the business already. (Well, with some exceptions, like when they took over Scios).
The temptation to compare this policy with Pfizer's is almost overwhelming, but the two companies are in very different positions. For one thing, J&J has their medical devices and diagnostics businesses, which are both profitable and run on different rhythms than their pharma side. Even more importantly, they also aren't locked into a grow-or-die situation, needing larger and larger infusions of revenue to meet the expenses which get larger every time they go out and buy those revenue streams, which means that they need to go buy some more, and then. . .
The article says that J&J has no deals under consideration right now, but that this style of deal-making is definitely how the company plans to operate. There's definitely enough risk to be spread around - I just hope that there's enough reward for everyone, too.
Category: Business and Markets | Drug Industry History
October 27, 2009
I've been occupied all morning with voodoo. Well, the technical name for it is catalytic hydrogenation, but let's call it for what it is: witchcraft. It's a widely used reaction in organic chemistry, and you can use it to reduce all kinds of different functional groups on your molecules. But once you get off the well-traveled roads, it's all jungle drums at midnight.
One reason we chemists like this reaction so much is that it's simple. You add some dark insoluble powder to your compound - which is some metal like palladium, platinum, nickel or the like, adsorbed onto carbon black or another solid. Then you add solvent and put the whole thing under an atmosphere of hydrogen gas. That soaks into the metal particles, your compound sits on them and gets magically reduced, and after a while you filter everything off and there's your clean, transformed product.
Most of the time. You'll note that I've skipped over a lot of variables there. For one thing, there's the choice of metal catalysts. Pt and Pd get the most use, but they come on a variety of solid supports. Carbon, alumina, barium sulfate, calcium carbonate. . .they all act differently. And don't stop with those guys: nickel's not to be ignored, then rhodium's available, and even ruthenium if you want to crank up the pressure. The pressure of all that hydrogen, there's another variable. Just a balloon on top, atmospheric pressure? Or put in a thick glass bottle on a shaker and turn it up to 50 pounds per square inch? Higher, in a metal apparatus? And what temperature did you have in mind? Ambient, or would you like to heat things up? Remember, as the pressure goes up, so does the temperature you can run the solvents up to.
Ah yes, the solvents. A lot of the time you see this work done in methanol or ethanol, but the reactions will often go quite differently in ethyl acetate or even something less polar. I've even seen some done in dichloromethane, although that somehow just seems wrong. Acids often have a profound effect on things, particularly if there's a basic amine in your compound.
And I haven't mentioned poisoned catalysts yet, have I? A bit of lead, or the addition of (non-protonated) amines or sulfur-containing compounds can dial down the reactivity of a lot of these metals - often down to zero, but sometimes to a useful level that you can't reach any other way. And then there's transfer hydrogenation, where you don't use the gas itself, but let some other compound give up hydrogen inside the reaction and transfer it over to your substrate. Paraformaldehyde, formic acid, phosphites, cyclohexene - all of those will work, and they can all work differently.
So. . .how many variations are we up to? Do you want to use 5% palladium on carbon in methanol, room temperature at 50 psi? Or platinum oxide in acetic acid at 50 degrees? Rhodium on alumina, ethanol, 100 psi at 100 C? Or wet 10% platinum catalyst with formic acid? That should get you started on this simple, well-known reaction. I've run 22 of them in the last two days, with the assistance of the H-Cube reactor, and I have to say: I'm about hydrogenated out.
Category: Life in the Drug Labs
October 26, 2009
I wrote about this topic a few years ago, and thought I'd update it. Many chemists find themselves looking at a periodic table and wondering "How many of these things have I personally handled?" My list is up to nearly 45 elements (there are a couple that I've got to think about, one-off catalyst reactions from twenty-two years ago and the like). And there are at least 29 that I hope to never use at all, since they're radioactive and I'm generally not in the mood for that. So what does that leave me?
Well, I've never used beryllium, although it's not that I'm tapping my foot waiting for any. It's pretty toxic stuff, for the most part, and there are hardly any organic chemistry reactions that get near it. That means that I can't even think what I might use it for, and I could easily go my whole career without seeing any.
The next lowest atomic weight element I haven't messed with (excluding unreactive neon, which you at least get to see in its excited state) is probably scandium. That whole first column of transition metals is pretty useless for organic chemists, to be honest (Yttrium? Lanthanum?), and I've never seen any reactions that leapt out at me as things I had to try. No, if the answer is scandium, it must have been a pretty odd question.
Next up, I haven't used either of the G twins, gallium and germanium. They're not too well studied compared to their family members above and below: aluminum and even indium are more widely used than gallium, and silicon and tin show up in organic labs a million times more often than germanium. But with those relatives, you'd have to think that there's something interesting that can be done with these, so it depends on whether anyone finds out what that might be during the rest of my chemistry career.
And right next to these is arsenic, which I've also managed to avoid. It's famously poisonous, although it's really not worse than a lot of other things that get used much more often. But again, there's not a lot of compelling chemistry to be done with the stuff, not that I know of, anyway, and there are always those unfortunate nomenclature problems to be dealt with, especially if you have a British accent.
Krypton I've never had a use for, and I'd have to rate the chances as very low indeed. In the next row, I've handled strontium chloride, but only to make red-colored flames for a school demonstration show. I have yet to touch yttrium, as mentioned above, and I've managed to miss zirconium so far as well. There are actually a number of organometallic reactions that use that one, so it's at least a real possibility. Niobium I have yet to encounter, and at the rate it's used, I probably never will. Cadmium's another toxic beast - there are some old reactions that use organocadmiums, but I can't think when I saw a modern reference that used any of them, and I don't see this one in my future, either. Antimony I might use if I ever need some horrible superacid. Tellurium, well. . .there would have to be a pretty good reason, given its reeking, nose-wrinkling sulfur and selenium relatives, but someone might yet come up with one. Can't rule that one out, unfortunately.
Now we're getting into the heavy metals, and a lot of gaps start to appear. Has anyone in an organic chemistry lab ever used hafnium or tantalum? Didn't think so. The best candidate for "something I could use, but haven't" in this bunch is osmium. The tetroxide is a very useful reagent that I just haven't had the need for. It wouldn't surprise me if that's the next addition to my list. I've no desire whatsoever to use thallium. It's part of a short run of nasties that you hit right after the jewelry metals - you have your platinum, then gold, and you think you're in the high-rent district, and suddenly it's mercury, thallium, and lead right in a row. Reminds me of the way towns were stuck next to each other in New Jersey.
And as far as the lanthanides go, well, I've used cerium as a TLC stain, and once I used samarium iodide - which, true to its reputation, didn't work. None of the others have I touched, and unless I need some funky NMR shift reagent, which fewer and fewer people do these days, I don't see it happening. There are a lot of funny rare earths down there, but little reason for an organic chemist to go digging around among them.
Weirdest element I actually have handled? Xenon would have to be the winner - I've used the difluoride, and yes, that was the recourse of a desperate chemist. But it did work to turn a silyl enol ether into an alpha-fluoro ketone, so I can't say anything bad about it, other than its rather penetrating smell, which I probably should have taken more care not to experience. . .
Category: Life in the Drug Labs
October 23, 2009
Organometallic reagents come from large tribes, and there are always wild cousins up in the hills. A good place to look for the livelier ones is in the simplest alkyl derivatives, and you should go all the way down to the methyls if you want to know their real character. Ignore the halides. Methylmagnesium bromide you can get in multiliter kegs; they might as well sell it in Pottery Barn.
Dimethylmagnesium, though, is not an article of commerce. I've made it myself. So although it's definitely something you want to keep an eye on, I can't very well say that I won't work with it. And the other metals? Dimethyl mercury I will not get within yards of, for very well-founded reasons. Trimethylaluminum is a flamethrower extraordinaire, with a solid reputation among pyromaniacs. I've used the stuff, although I wasn't whistling while I was syringing it out. Handling it in solution, as I did, is less stressful than using the pure stuff - I'd definitely want to sit down and think about that one.
But neat dimethyl zinc. . .no, I don't think so. A colleague of mine made some in graduate school, and came down the hall to us looking rather pale. He'd disconnected a length of rubber tubing from his distillation apparatus and seen it go up in immediate, vigorous flames. "This stuff makes t-butyllithium look like dishwater" is the statement I remember from that evening. You can buy the pure stuff from Alfa, if you're inclined to run a head-to-head comparison. Do make sure to post the video on YouTube; that's as close as I want to get.
One problem is that it's a pretty volatile compound, boiling at 46C, so there's plenty of vapor around to start a party. The diethyl analog is a bit better, but it's nearly as pyrophoric. The Library of Congress discovered this in the 1980s and 1990s, during a long-running project to deacidify old documents. The diethyl zinc reacts with the acid in aged wood-pulp papers, neutralizing it, lightening the color, and stiffening the paper, so you'd think it would be ideal. Well, except for the instant-bursting-into-ravenous-flames part. Making sure that all the reagent was gone before opening the hatch, that was rather important. The pilot plant for this process suffered from some regrettable explosive bonfires before the whole idea was abandoned. Interestingly, one of the biggest problems seems to have been that the treated books were (at least at first) rather odorous, and some colored book covers were initially affected. You can sense a certain testiness about these issues in the Library's final report on the subject:
It has also been established that tight or loose packing of books; the amount of alkaline reserve; reactions of DEZ with degradation products, unknown paper chemicals and adhesives; phases of the moon and the positions of various planets and constellations do not have any influence on the observed adverse effects of DEZ treatment.
You'll notice that the LOC didn't even bother with the dimethyl compound, and I think I'll take a tip from them.
Category: Things I Won't Work With
October 22, 2009
Xconomy has a useful two-part interview with Christopher Henney, who helped to found Icos, Dendreon, and Immunex. The part I found most interesting, naturally, was the section entitled "Five Red Flags of Biotech". (Note to the Xconomy folks - the article actually has six of them). Here are his warning signs if you're thinking of investing in (or, I should add, working for!) a new company and you're checking them out. Beware of. . .
1. Top management without a scientific background. If the CEO isn't a scientist, Henney says, there had better be some good ones very close to him, and he's not talking about the scientific advisory board, either.
2. Saying that they have no worries. Any small company in this game has plenty to worry about - heck, the huge companies have plenty to worry about. So if they try to tell you otherwise, then you're the one who should be worrying.
3. Hard-to-understand science. Henney says to look out if they can only tell you that it's really hard to explain. I'd agree with that, but I'd also add that you can go too far in the other direction. If they spout a bunch of advertising copy under the impression that they're giving you the science, then you should also flee. (That might be a consequence of Red Flag #1). I honestly think that any concept in this industry can be explained to any reasonably intelligent person. So if someone tells you that they can't do that, you have to worry that they don't understand it very well themselves.
4. Geographic remoteness. This is an interesting one, because ideas can come from all over. But for a viable company, Henney maintains, you need to be somewhere that you can recruit talented and experienced people. That doesn't mean that every company has to be in Cambridge or South San Francisco, because there are plenty of other possibilities. But trying to get a great biotech idea off the ground will definitely be a lot harder in Winnipeg, El Paso, Chattanooga, or Scranton. There are smart people there, but most of the ones who know this business or have a real interest in it will have gone somewhere else. And it'll be tougher to persuade others to move somewhere that could leave them without options if the company doesn't work out.
5. Too many VCs. This goes for just about any industry. A board that's full of venture capital people shows a lack of imagination at the very least, and it makes you wonder why the VCs will even stick around when all they see are their own kind.
6. Family members in key roles. My take is that you can get away with one sibling or the like, preferably as long as they're not like a CEO and CFO team or something. But I agree with Henney's take that if you see a board dominated by a family, you should hit the exits. This stuff hasn't been around long enough to be a family tradition.
I would add a couple of others to be wary of:
7. Breathless hype. Sure, all press releases have some of this. But if a small company is unable to speak in any other terms than "breakthrough, unprecedented, game-changing paradigm shifts" or the like, you should be worried. Either they don't really believe this stuff (in which case they may not be very trustworthy), or they do (in which case they may be delusional). Real breakthroughs in this business don't need all the glitter and spray paint.
8. Too much emphasis on the SAB. Henney addresses this partly in Red Flag #1. But it's worth remembering that a wonderful blue-ribbon scientific advisory board stacked with Nobel Prize winners is also stacked with very busy people who will only be able to give this little company a small portion of their time. These aren't the folks who will be driving the projects forward. If a small company relentlessly promotes the big-name advisors they've signed up, you have to wonder if there's anything else to promote.
Category: Business and Markets
October 21, 2009
I just wanted to note that the entire BioCentury "Back to School" issue mentioned in the post below can now be read for free (PDF). Thanks to the folks over there for doing this! The original post has been updated as well.
Category: Blog Housekeeping
I wanted to highlight a comment that showed up recently in the latest Pfizer post:
I would just like to point out that there is often mention of Pfizer as being a poorly productive R&D outfit on this blog, but there is rarely any mention of the scientists themselves. Having worked as a chemist at both Merck and also at Pfizer, I would just like to point out that in my experience, the chemists at both are highly productive, extremely hardworking, and passionate individuals. It's a shame that the discussions here do not distinguish between those carrying out the research and the direction of the company overall.
That's true, and although I've put in disclaimers like that in the past, I haven't recently. There should be some sort of default blanket statement for cases like this. I know a lot of people at Pfizer, and they know their stuff. Pfizer's problems are not due to a shortage of smart, competent, hardworking people. Everyone in the industry is having a hard time keeping a good pipeline of drug candidates going these days, no matter how good they are.
But I think that the course that Pfizer has put itself on is making its problems worse, and doing damage to the entire industry at the same time. That actually makes it even more of a tragedy, the fact that they have so many good people there trying to make things work.
Category: Business and Markets
Steve Usdin at BioCentury sent along a reprint of the newsletter's annual "Back to School" issue from last month (available for open access here) in response to my note about "micropharma" the other day. And it's clear that he's been thinking along the same lines. Whether or not this model is going to work is another question, but that looks like something that we're going to be finding out.
As the issue notes, in a pithy quote from Mike Powell of Sofinnova, the key problem is "how to restructure an industry where it costs $100 million to answer a question but people are only willing to pay you $50 million for the answer." Since the amount of money being handed out is probably not going to increase any time soon, the only way out of that dilemma is to find some way for that first figure to go down.
One of the groups that won't be happy about that process are academic centers that are used to seeing their intellectual property as a potentially lucrative source of funds. The strike-it-rich days do not look to be coming back any time soon. Instead, BioCentury advises universities to get ready to adopt a "non-ROI" approach to developing their ideas, by use of grants, public-private consortia, and help from foundations and other nonprofits. (Perhaps a name like "delayed ROI" or, if you're being especially weasely about it, "enhanced ROI", might help that concept go down a bit smoother).
CRO firms are almost certainly going to have to be part of that process, since there are plenty of skills needed to push a drug target or molecule along that are not found in most universities. That, to me, would indicate a real market for a low-cost CRO outfit targeting academia. I'm not sure if anyone is serving that market, or trying to, but it would seem to have some potential in it. Anyone who can help to run should-we-kill-this experiments, without spending too much money getting the answer, will have something that looks to be in demand.
In general, this landscape would mean that ideas will go longer before companies are formed around them, with the idea that they can be tested out a bit without having to build new corporations to do it. (As another quote from the article had it, "The unmet need in the industry is drugs, not companies".) Payoffs will be slower, and they won't be as large when they come, either. Venture capital investors will be asked to have more patience under this model, and that's not something that they're necessarily noted for. And someone's going to have to have the money (and nerve) to form mid-sized organizations that will pick up the best of the things coming out of academia, since many of them still won't be quite ready to go right into a big organization. The non-humungous companies that have survived to this point might step up and fill this role, and BioCentury also suggests that Japanese and Indian companies might fill this space as well.
The big question is: will people be able to put up with this, or not? After all, no one's envisioning failure rates going down, they're just hoping that the failures will happen sooner and cost less money. Will they? It's not like "fail quickly" hasn't been a goal of companies in the business for years now. But sometimes it's hard to fail any other way than slowly (and expensively).
Well, the common theme to all this (and to most of the other crystal-ball reading going on these days) is that the industry isn't going to be able to go on in the way it's been accustomed to. If you ask a hundred people in this business what it's going to look like ten or fifteen years from now, the only thing you could probably get them to agree on is "Not like it does today". We'll just have to wait to see if they're all playing "Cheat the Prophet" or not. . .
Category: Business and Markets | Drug Development | Drug Industry History
October 20, 2009
The Wall Street Journal's Health Blog got a chance to ask the higher-ups at Pfizer what their R&D will look like a year from now. Their (understandably) not-too-in-depth answers are here: decentralized research units, with some functions run company-wide, and this quote: "There are elements of drug discovery and development where you just need scale".
Well played! I wouldn't expect anything less. But are there elements of drug discovery and development where scale - massive, ponderous, hundreds-of-vice-presidents scale - actually hurts? I don't think you're going to hear that topic brought up very much at Pfizer, at least not out in the open. And let's not lump those two functions together: drug development benefits from a company's size a lot more than drug discovery does. Once you've gotten to a critical-mass level, sheer size (as far as I can see) does nothing to help productivity in drug discovery, and actually seems to damage it. As evidence for that statement, let me point to Pfizer's internal research record, as opposed to the stuff they've gone out and bought.
And what might be refreshing is an admission that big mergers - drag-on-for-months am-I-going-to-still-be-here mergers - come with an acute productivity penalty no matter what. I may have missed it, but I don't recall hearing anyone from Pfizer say anything like "Although we know that this is going to be a huge disruption, we think that in the end it'll be worth it". No, it always seems to be the Day One, hit-the-ground-running, now-the-synergy-starts stuff, which is just not in sync with reality.
Well, we can come back in a year and see what Pfizer's R&D operation really looks like. But I'll venture a guess: huge. Unwieldy. Not as productive as you'd think it should be. Still rearranging and getting smaller as the company tries to figure out how to make it all work. And looking over its shoulder for the next big acquisition. Anyone want to bet against any of those?
+ TrackBacks (0) | Category: Business and Markets
October 19, 2009
(1) Bnet Pharma on "How Not to Write a Pharma Press Release". Privately held Epeius is sending out bulletins loaded with phrases like "more stunning results" and "Epeius Biotechnologies draws the sword of targeted gene delivery from the stone of chemistry and physics". If they were publicly traded, this would be fun to watch. . .
(2) The rise of Micropharma? We'll come back to this subject:
The drug discovery pipelines of the major pharmaceutical companies have become shockingly depleted, foreshadowing a potential crisis in the ability of Big Pharma to meet the pharmaceutical demands created by the ever-changing spectrum of human disease. However, from this major crisis is emerging a major opportunity, namely micropharma – academia-originated biotech start-up companies that are efficient, innovative, product-focused, and small. In this Feature, we discuss a “new ecosystem” for drug development, with high-risk innovation in micropharma leading to Big Pharma clinical trials. . .
(3) Cleaving amyloid precursor protein into beta-amyloid has long been thought (by many) to be the key pathological event in Alzheimer's. But what about the piece of APP that's left inside the cell?
(4) A favorite post around here for some time has been "Sand Won't Save You This Time", about the wonderfulness of chlorine trifluoride. Well, here's a method to produce very interesting-looking compounds that uses. . .bromine trifluoride. How much do you want these products, that's what you have to ask yourself. To be sure, the authors do mention that "Although commercial, bromine trifluoride is not a common reagent in every organic laboratory, and many chemists do not feel at ease with it because of its high reactivity. . .". You have to go to the Supporting Information file before you start hearing about freshly preparing the stuff from elemental fluorine.
+ TrackBacks (0) | Category: Academia (vs. Industry) | Alzheimer's Disease | Business and Markets
October 16, 2009
I've heard from several sources that today is what they're calling "Day One" at Pfizer. The merger with Wyeth is now official, and word is going to start going out on which sites will stay, which will close, and who will be moved or let go during the entire process.
Problem is, I'm also hearing that (for research, anyway) it could take as long as another sixty days for all the news to come out. We'll see what the real timetable is, but that's enough to make me wonder if there's any way they could have found to make the whole business more excruciating.
But it's a sad day. I think the Pfizer-Wyeth merger is a bad idea which will do bad things. I wish it hadn't happened, just like I wish many of the other mergers on this scale had not happened, and I wish that I could have some hope that this sort of thing won't happen again. But the lessons are taking a long time to be learned.
+ TrackBacks (0) | Category: Business and Markets
There have been several reports over the years of people engineering receptor proteins to make them do defined tasks. They've generally been using the bacterial periplasmic binding proteins (PBPs) as a starting point, attaching some sort of fluorescent group onto one end, so that when a desired ligand binds, the protein folds in on itself in a way to set off a fluorescent resonance energy transfer (FRET). That's a commonly used technique to see if two proteins are in close proximity to each other; it's robust enough to be used in many high-throughput screening assays.
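For readers who haven't used FRET: the reason it works as a proximity readout is that transfer efficiency falls off with the sixth power of the donor-acceptor distance. Here's a minimal sketch of that relationship; the Förster radius and the distances are generic, assumed values for illustration, not ones measured for these engineered receptors:

```python
# FRET efficiency vs. donor-acceptor distance r, relative to the
# Förster radius R0 (the distance at which transfer is 50% efficient):
#     E = 1 / (1 + (r/R0)**6)
def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    """Transfer efficiency for donor-acceptor separation r_nm (in nm)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# The steep distance dependence is the whole trick: a hypothetical
# ligand-induced "clamshell" closure that pulls the fluorophores from
# 8 nm down to 4 nm takes the efficiency from a few percent to ~80%.
for r in (8.0, 6.0, 5.0, 4.0):
    print(f"r = {r} nm  ->  E = {fret_efficiency(r):.2f}")
```

That sixth-power dependence is also why a FRET signal alone says only that the two ends of the protein moved, not that a ligand is actually sitting in the binding pocket — which is exactly the gap the paper below drives through.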
So the readout isn't the problem. But something else certainly is. In a new PNAS paper, a group at the Max Planck Institute in Tübingen has gone back and taken a look at these receptors, which have been reported to bind a number of interesting ligands such as serotonin, lactate, and even TNT and a model for nerve gas agents. You can see the forensic applications for those latter two if the technique worked well, and the press releases were rather breathless, as they tend to be. Not only did the original authors claim a very interesting sensor system; they also went out of their way to emphasize that they arrived at these results computationally:
Computational design offers enormous generality for engineering protein structure and function. Here we present a structure-based computational method that can drastically redesign protein ligand-binding specificities. This method was used to construct soluble receptors that bind trinitrotoluene, l-lactate or serotonin with high selectivity and affinity. These engineered receptors can function as biosensors for their new ligands; we also incorporated them into synthetic bacterial signal transduction pathways, regulating gene expression in response to extracellular trinitrotoluene or l-lactate. The use of various ligands and proteins shows that a high degree of control over biomolecular recognition has been established computationally.
The Max Planck group would like to disagree with that. Their PNAS paper is entitled "Computational Design of Ligand Binding is Not a Solved Problem". They were able to get crystals of the serotonin-binding protein, but could not get any X-ray structures that showed any serotonin binding in the putative ligand pocket. They then turned to a well-known suite of techniques to characterize ligand binding. One of these is thermal stability: when a protein is binding a high-affinity ligand, it tends to show a higher melting point, since its structure is often more settled-down than the open form. None of the reported receptors showed any such behavior, and all of them were substantially less thermally stable than the wild-type proteins. Strike one.
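For those who haven't run a thermal-shift experiment, the logic can be sketched with a simple two-state unfolding model. The melting temperatures and the van 't Hoff enthalpy below are hypothetical numbers chosen only to show how a binding-induced Tm shift changes the folded fraction; they aren't taken from the paper:

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol*K)

def folded_fraction(T: float, Tm: float, dH: float = 300.0) -> float:
    """Fraction folded at temperature T (K) for two-state unfolding
    with midpoint Tm (K) and van 't Hoff enthalpy dH (kJ/mol)."""
    dG = dH * (1.0 - T / Tm)       # dG = dH - T*dS, with dS = dH/Tm
    K = math.exp(-dG / (R * T))    # unfolding equilibrium constant
    return 1.0 / (1.0 + K)

# A genuinely binding ligand stabilizes the folded state and pushes Tm
# up; at a fixed readout temperature, that shows up as a larger folded
# fraction. Hypothetical apo vs. ligand-stabilized midpoints:
for Tm in (330.0, 335.0):
    print(f"Tm = {Tm} K: fraction folded at 330 K = "
          f"{folded_fraction(330.0, Tm):.2f}")
```

The engineered receptors showed no such upward shift with their supposed ligands — and were less stable than wild-type to begin with.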
They then tried ITC, a calorimetry measurement to look for heat of binding. A favorable binding event releases heat - it's a lower-energy state - but none of the engineered receptors showed any changes at all when their supposed ligands were introduced. Strike two. And finally, they turned to NMR experiments, which are widely used to determine protein structure and characterize binding of small molecules. Wild-type proteins of this sort showed exactly what they should have: big conformational changes when their ligands were present. But the engineered proteins showed almost no changes at all. Strike three, and as far as I'm concerned, these pieces of evidence absolutely close the case. These so-called receptors aren't binding anything.
So why do they show FRET signals? The authors suggest that this is some sort of artifact, not related to real receptor binding, and note dryly that "Our analysis shows the importance of experimental and structural validation to improve computational design methodologies".
I should also note a very interesting sidelight: the same original research group also published a paper in Science on turning these computationally engineered PBPs into a functional enzyme. Unfortunately, this was retracted last year, when it turned out that the work could not be reproduced. Some wild-type enzyme was still present as an impurity, and when the engineered protein was rigorously purified, the activity went away. (Update: more on this retraction here, and there is indeed more to it). It appears that some other results from this work may be going away now, too. . .
+ TrackBacks (0) | Category: Biological News
October 15, 2009
A couple of articles have come together and gotten me to thinking. Back during the summer, long-time medicinal chemist Mark Murcko published a short editorial in Drug Discovery Today commemorating the Apollo 11 moon landing's 40th anniversary:
"People like me, who are old enough to actually remember the events of July 1969, are instantly assailed with powerful and reflexive emotions when we think back to the effect Apollo had on us: the excitement, awe and wonder. My family, like so many others, was obsessed with space exploration. The walls of our den were covered with NASA photos, diagrams and technical bulletins – anything we could get them to send us. Models of rockets hung from the ceiling by fishing line. . .We soaked it all in, and the events of that day remain a seminal memory of my childhood. It was glorious; nothing could possibly be more exhilarating.
And yet...there are some interesting parallels to what all of us, engaged in the roiling tumult of biomedical research, do here and now. Our mission – to invent new therapies that transform human health and alleviate suffering – captures the imagination as profoundly as did Apollo. Our efforts once were regarded with the same admiration as the NASA breakthroughs (and while public perceptions may be different today, our mission has not wavered). We are attempting, one could argue, even more complex technical achievements. . . ."
And just the other day I came across this piece in The New Atlantis entitled "The Lost Prestige of Nuclear Physics". (Via Arts and Letters Daily). Its thesis, which I think is accurate:
"The story of nuclear physics is one of the most remarkable marketing disasters in intellectual history. In the space of a few decades, the public perception of the atom’s promise to serve humanity, and the international admiration that surrounded the many brilliant people who unraveled the mysteries of matter, had collapsed. So pronounced was the erosion of attitudes toward nuclear physics that, by the late 1990s, several European physicists felt it necessary to establish an organization called Public Awareness of Nuclear Science for the explicit purpose of improving the public image of their discipline."
Of course, in that case, there was that little matter of the atomic bomb (and the subsequent arms race) to contrast against the excitement of the scientific discoveries and their peaceful uses. One might argue that for the general public, it was all very admirable to be able to figure out the forces that kept atoms together, but when these forces turned out to have such alarming and immediate real-world consequences, the backlash was profound. And while I sympathize with the nuclear physicists, I can only wish them luck in their attempts to regain a good public image. That's because those consequences are still very much with us, as a glance at the news will show.
But the fall from grace of drug research has been almost as profound, and we've never developed an equivalent of nuclear weapons, have we? In our case, I think the problem has been that we're a business. We bill people for our discoveries when they work. And as I've argued here, people will always have a much more emotional response to any issue that affects their physical health, and can quickly come to resent anyone that charges them money to maintain it. (Doctors, though, benefit from the one-on-one patient relationship. People hate hospitals, hate health insurance companies, and hate drug companies, but still respect their own physicians). This, as manifested by complaints about drug prices, uneasiness about hard-sell advertising, and suspicion about our motivations and our methods, seems to be what's sent public opinion of us into the dumper.
But in the end, Murcko has a point. We really are doing something good for humanity by working on understanding diseases and trying to find treatments for them. Not everything about the process is optimal, for sure, but can anyone argue that the broad effort of pharmaceutical research has been a bad thing? The problem is, it's easy to look around, and slide from there into self-pity. But moaning about how no one appreciates us is a waste of time. The best cure is, as far as I can see, to give people reasons to realize what we're worth.
People who've been pulled back from the brink of death from infectious disease or cancer already have those reasons. But there are so many terrible unmet medical needs still out there, which means that there's plenty of room for us both to do good and to show that we can do good. Yes, it will cost a lot of money to do that, which means that what cures will come will also cost money. But with the partial exception of air to breathe, most of the necessities of life tend to involve money changing hands. That's not a disqualification.
So to the readers out there in the industry - go do some good work today. Don't spend too much time in your more useless meetings. Stand up in front of your fume hood or sit down in front of your keyboard and do something worthwhile. It's a worthwhile job, even if some people don't realize that yet.
+ TrackBacks (0) | Category: Drug Industry History | Drug Prices | Why Everyone Loves Us
October 14, 2009
Most chemistry departments in the drug industry have some academic consultants who come in every so often. The idea is that they'll have some useful suggestions about synthetic problems (there aren't so many academic consultants who are useful on drug discovery questions as opposed to pure chemistry ones). At the companies where I've worked, the consultants will spend the day in a conference room, while project teams troop in and out with presentations.
How useful this process is varies, to say the least. The first variable is the consultant, because some people are just better at this sort of thing than others. Ideally, you want someone who has a lot of ideas, has them relatively quickly, and enjoys putting them out for people to comment on. Those are all useful qualities, but there are plenty of world-class scientists whose working style doesn't fit that description, and these people tend to be less valuable for drop-in sessions.
Another variable is the sorts of problems the drug discovery teams are dealing with. We try, in the industry, to reduce our chemistry to the simplest possible routes. Time is money (and money is money, too), and we always need methods that will reliably crank out plenty of different analogs without a lot of work. When that works, it often doesn't lead to especially exciting chemistry - in fact, the Venn diagram would show that "smoothly running project" and "exciting chemistry" don't overlap much. That means that the projects where things are going fine don't have much to talk about when the consultants appear, and those sessions sometimes end up spending more time on peripheral problems.
Much of the time, too, the biggest problems aren't chemical ones. If you're having trouble with metabolism, tox, or absorption, there aren't going to be many consultants who can help you out. Most of the ones who can are ex-industry people. (And with problems like these, sometimes no one can help you out at all). But asking someone about oral bioavailability when their research is all about interesting new synthetic organic methods is a waste of time - yours and theirs.
I've had some useful and interesting consulting sessions over the years, but some really disastrous ones, too. Many of the latter feature those "Well, now what do we talk about?" moments, which seem to be a cue for Satan to emerge and fill out the hour. So plan ahead. Make sure that you've got plenty to talk about. Actually, you'd better have more than you think you'll need, because some of your topics may either get a fast answer, or an equally fast shrug of the shoulders. . .
+ TrackBacks (0) | Category: Life in the Drug Labs
October 13, 2009
I've been meaning for some time to acknowledge whoever it is at Angewandte Chemie that works in so many odd musical references in the abstracts. There are the usual runs of weak puns - and don't you wish that Nature, among other journals, would consider how unintelligible those jokes are as headlines in their RSS feeds? Lukewarm wordplay is the standard of wit for most scientific prose, and I've been guilty of it myself. (I have to say, though, this one rises above the pack, this one is fairly hard to take, and this one and this one definitely cross the pain threshold).
But I wonder how many readers have noted recent references to Mike Oldfield, Ace of Base, Offspring, and even the Sex Pistols? Have the editors noticed, for that matter? (This joke suggests a speaker of American English is at work, since I doubt most Germans have heard of the American Automobile Association).
And I suppose that the "Beer Barrel Polka" and "On Top of Old Smokey" can't be left off this list, either. Nor can other pop-culture name checks to Marvel Comics and the original Star Trek. Someone over there's having a good time. . .
Note: this has been going on for some time. Carbon-Based Curiosities adduces some other examples from about a year ago, including ricochet shots off "My Sharona", Chic, Jimi Hendrix, the Terminator movies, and the X-Files. . .
+ TrackBacks (0) | Category: The Scientific Literature
Chronic fatigue syndrome has long been controversial and mysterious. Is the mystery clearing up, or getting deeper? There have been diagnoses of something like CFS for a long time, under a lot of different names. The common sign is persistent fatigue with no obvious physical cause, often accompanied by joint pain, disrupted sleep, and other symptoms. It's more common in women than in men - but then, so are a lot of autoimmune disorders, which has made some sort of immune syndrome a popular explanation. All sorts of contradictory data have been generated around that idea, but nothing convincing has emerged.
There's a preprint in Science from teams at the National Cancer Institute, the Cleveland Clinic, and Whittemore Peterson Institute that's attracting a lot of interest. It presents evidence for a viral infection which is far more common in patients diagnosed with CFS. What's even more intriguing is that the virus (XMRV, a mouse retrovirus) is already one that's suspected of involvement in some cases of prostate cancer, as shown by analysis of biopsy samples. (Commentary on that work here). About two-thirds of the CFS patients were found to be positive for the virus, as opposed to about three per cent of the control group. The WPI people are now saying that since the manuscript went in, further work has shown 98% of a 300-patient CFS sample to be positive for XMRV. More on that below.
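To put those prevalences in perspective, some back-of-the-envelope arithmetic. The counts below are invented to match the rounded figures above ("about two-thirds" of patients vs. "about three per cent" of controls) and are not the paper's actual tallies:

```python
# Hypothetical 2x2 counts built from the rounded prevalences:
cases_pos, cases_n = 67, 100   # CFS patients positive for XMRV (illustrative)
ctrl_pos, ctrl_n = 3, 100      # controls positive (illustrative)

# Odds of being XMRV-positive in each group, and the odds ratio.
odds_cases = cases_pos / (cases_n - cases_pos)   # 67/33
odds_ctrl = ctrl_pos / (ctrl_n - ctrl_pos)       # 3/97
print(f"odds ratio ~ {odds_cases / odds_ctrl:.0f}")
```

An odds ratio in the mid-60s is an enormous association by epidemiological standards — which is exactly why the result demands independent replication before anyone gets too excited.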
In the case of the prostate patients, there seems to be a link with a deficiency in the RNAse L pathway, which is part of the interferon-induced antiviral response. It may be that patients with this immune system vulnerability are more susceptible to infection by XMRV, which then goes on to cause (or exacerbate) prostate cancer. There may be a link between RNAse L function and a diagnosis of CFS as well. It makes a neat story, and I hope that it's true.
But we're not quite there yet. No one's seen the data yet on that 300-patient cohort mentioned above, and it's not clear if a different diagnostic method was used on them compared to the group in the Science paper. And that paper itself doesn't have enough details on the patients to satisfy some readers - a specialist at the CDC complained about this to the New York Times, and said that his team would try to reproduce the results, but that he wasn't hopeful. (Working on chronic fatigue has not been the sort of thing that breeds a hopeful outlook, to be sure). Other researchers in the field have voiced their doubts to Science (who, to be sure, did accept the original paper).
One of the problems in this area has been defining who's a patient and who isn't. It's a bit of a catch-all diagnosis, or can be, so there's always the suspicion that, even if there's a solid underlying cause, the data will be hard to dig out of a heterogeneous patient sample. And there's the whole psychological-or-physical question, too, which is a sure route to raised voices and waving fists. My thinking is that there are very likely a number of people with other issues (which I will leave undefined) piled into this area, and that the necessary attempts to draw boundaries will be sure to leave someone upset.
As for this retrovirus angle, there are a number of other steps that need to be taken. Looking over historical blood and tissue samples will be very interesting - could you find that a person showed no sign of the virus when younger, then went positive before showing signs of the disease? Or does it stay latent for a longer period before finally breaking through? Are there animals that are susceptible to infection, and do they show similar symptoms to humans?
Can we at least demonstrate infection of cultured cells in vitro? (Update: I see that they've shown that, which is a very good step). Do any of the existing antiretroviral drugs have any effect on either of those processes, and if so, what happens when you give them to patients with CFS? What about the 3% or so of the population that seems to be positive for XMRV but shows no sign of either prostate cancer or CFS - what's different about them, if anything? And so on.
The Whittemore Peterson Institute people are way out in front on these questions, for better or worse. You may have said, as I did, "Who they?", but it turns out that they've only been around since 2004. The institute was set up by the parents of a CFS patient to do research in the field, and they've apparently been quite busy. Their web site gives the impression that the question of CFS as a retroviral infection is basically settled, but I'm not there yet. I have a lot of sympathy for the unidentified-infectious-agent line of thinking, and I believe that there are probably several things out there that will eventually fit into this category, but it can be a hard thing to prove. Let's hope this one is solid, so we can get to work.
+ TrackBacks (0) | Category: Infectious Diseases | The Central Nervous System
October 12, 2009
I'm taking the day off today (it's a school holiday), so I'll be out doing fall-ish stuff with the family rather than pushing back the boundaries of human knowledge. The frontiers of science will come under fresh assault tomorrow, though (as will the frontiers of science blogging).
I did want to mention, though, that Pharmalot is active again - Ed Silverman has his old domain, and is posting as time allows from his day job over at Elsevier. Welcome back!
+ TrackBacks (0) | Category: Blog Housekeeping
October 9, 2009
There seems to be some finger-pointing going on about conflicts of interest in the scientific and medical literature. According to this piece in Nature Medicine, a recent conference in Vancouver on peer review featured statements such as this:
"We absolutely should not let up on our scrutiny of industry," says Karen Woolley, a co-author of one of the new studies and chief executive officer of the professional medical writing company ProScribe, based in Queensland, Australia. "But why are we always pointing our finger over there? There's an elephant in the room, and that's the nonfinancial conflicts of interest in academia."
I hope that ProScribe wasn't involved in that Australian journal scandal. But even though the head of a medical writing company clearly has a gigantic axe to grind here, the point isn't invalid. Academia has pressures of its own to publish, and a lot of shaky stuff gets sent out under them.
Under the auspices of (the Council on Publishing Ethics), (consultant Liz) Wager dug through PubMed files to see how many papers had been retracted between 1988 and 2008. She found 529, and, in a close study of a randomly selected set of 312, she judged that only 28% were due to "honest error". Among the rest, some of the largest chunks were due to authors found publishing the same results more than once (18%), plagiarism (15%), fabrication (5%) and falsification (4%) of data. Taking into account an additional 1% in the 'other misconduct' category, the unethical reasons stacked up to 43%.
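The quoted percentages do add up as claimed. Tallying the listed misconduct categories (these are only the largest chunks the excerpt mentions, which is why they don't sum to 100% with honest error included):

```python
# Breakdown of retraction reasons from the 312 papers Wager examined,
# as percentages quoted in the excerpt above.
reasons = {
    "honest error": 28,
    "duplicate publication": 18,
    "plagiarism": 15,
    "fabrication": 5,
    "falsification": 4,
    "other misconduct": 1,
}

# Sum everything except honest error to reproduce the "unethical" total.
unethical = sum(pct for reason, pct in reasons.items()
                if reason != "honest error")
print(unethical)  # 43, matching the article's figure
```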
Many, perhaps most, of these papers seem unlikely to have been funded by industry. And there are, of course, plenty of rotten papers out there that never get retracted at all, in many cases because no one reads them or notices that they're a rehash of what someone else has already published. The Deja Vu people are starting to cut into that pile, though, and it's a big one.
There's a danger of all this turning into an exchange of tu quoque arguments between industry, academia, and the publishers. I think there's common ground to agree, though, that all sorts of pressures exist to publish work that shouldn't be published, and that everyone has a common interest in making sure that this doesn't happen. And industry still has a bigger responsibility, since (1) it has more money to cause trouble, if it wants to, and (2) the sorts of things it works on often have more immediate relevance to the outside world. If some obscure faculty member somewhere publishes reheated work in a series of low-end journals, he's only wasting the time of a limited number of people. A publication involving clinical trial data, though, can send ripples out a lot farther and faster.
+ TrackBacks (0) | Category: Academia (vs. Industry) | The Dark Side | The Scientific Literature
October 8, 2009
Hmmm. As a colleague just pointed out to me, I've spent some time here defending "me-too" drugs. And just this morning (see the previous post) I take off after what can only be described as "me-too reactions", saying that I don't see the use for so many of them.
Well! The only defense I can offer (until I think of a better one) is that there is no drug category so populated as the aldoxime-to-nitrile conversion is in synthetic chemistry (or acetal formation/deprotection, desilylation, or the other categories I spoke of in that other post). I suppose I might have a tougher time standing up for me-too drugs if there were (say) twenty-nine statins on the market. But still. . ."I'd better put up a post on that", I said. "Better you than someone with a funny pseudonym in your comments section", came the reply.
+ TrackBacks (0) | Category: Chemical News | Life in the Drug Labs
Here's a question you don't hear discussed very often: are there some synthetic organic chemistry reactions that don't need any more work? I'm moved to ask this because I just came across yet another way that someone has reported to dehydrate an oxime to a nitrile. (No, I won't link to it. You don't need it. No one needs it).
If asked to count the number of times I have seen new reagents that dehydrate oximes to nitriles, I would be at a total loss to even try to guess. But I've seen it over and over and over. Is it possible that we now have enough ways to do this? And that anyone who is contemplating adding another one to the list should instead go do something else?
I'll vote for that. And there are several other transformations that could go on the same list. That doesn't mean that I think that our existing methods for these are all perfect, or that they couldn't be improved. I mean, even for forming amides, I would like an inexpensive reagent that never fails, even with crappy unreactive hindered coupling partners, works at room temperature in about five minutes, and has a ridiculously simple workup. We don't quite have that, do we? But no one's publishing on coupling reagents like that, because they're rather hard to realize. What we get are a bunch of things that are about as useful as what we have already.
And I agree that it's worth having multiple methods to accomplish the same reaction. I've been saved several times by being able to move down the list and find something that works. But how long should the list be? Eight reagents? Ten? Twenty? At what point should something like this cease to be an acceptable field for human effort?
My first nomination, then, for the Retirement Home for Organic Transformations is aldoxime to nitrile. I am willing to face the rest of my chemistry career with only the monstrously long list of reagent systems we have today for that reaction. Further nominations can be made in the comments - I'll assemble a list for another post.
+ TrackBacks (0) | Category: Chemical News | Life in the Drug Labs
October 7, 2009
This was another Biology-for-Chemistry year for the Nobel Committee. Venkatraman Ramakrishnan (Cambridge), Thomas Steitz (Yale) and Ada Yonath (Weizmann Inst.) have won for X-ray crystallographic studies of the ribosome.
Ribosomes are indeed significant, to put it lightly. For those outside the field, these are the complex machines that ratchet along a strand of messenger RNA, reading off its three-letter codons, matching these with the appropriate transfer RNA that's bringing in an amino acid, then attaching that amino acid to the growing protein chain that emerges from the other side. This is where the cell biology rubber hits the road, where the process moves from nucleic acids (DNA going to RNA) into the world of proteins, the fundamental working units of a day-to-day living cell.
The ribosome has a lot of work to do, and it does it spectacularly quickly and well. It's been obvious for decades that there was a lot of finely balanced stuff going on there. Some of the three-letter codons (and some of the tRNAs) look very much like some of the others, so the accuracy of the whole process is very impressive. If more proofs were needed, it turned out that several antibiotics worked by disrupting the process in bacteria, which showed that a relatively small molecule could throw a wrench into this much larger machinery.
Ribosomes are made out of smaller subunits. A huge amount of work in the earlier days of molecular biology showed that the smaller subunit (known as 30S for how it spun down in a centrifuge tube) seemed to be involved in reading the mRNA, and the larger subunit (50S) was where the protein synthesis was taking place. Most of this work was done on bacterial ribosomes, which are relatively easy to get ahold of. They work in the same fashion as those in higher organisms, but have enough key differences to make them of interest by themselves (see below).
During the 1980s and early 1990s, Yonath and her collaborators turned out the first X-ray structures of any of the ribosomal subunits. Fuzzy and primitive by today's standards, those first data sets got better year by year, thanks in part to techniques that her group worked out first. (The use of CCD detectors for X-ray crystallography, a technology that was behind part of Tuesday's Nobel in Physics, was another big help, as was the development of much brighter and more focused X-ray sources). Later in the 1990s, Steitz and Ramakrishnan both led teams that produced much higher-resolution structures of various ribosomal subunits, and solved what's known as the "phase problem" for these. That's a key to really reconstructing the structure of a complex molecule from X-ray data, and it is very much nontrivial as you start heading into territory like this. (If you want more on the phase problem, here's a thorough and comprehensive teaching site on X-ray crystallography from Cambridge itself).
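For readers wondering what the phase problem actually is: a diffraction experiment records only intensities, so the phases of the structure factors are lost, and without them you can't rebuild the map. A toy one-dimensional sketch, using an ordinary Fourier transform as a stand-in for the crystallographic case:

```python
import numpy as np

# Toy 1-D "electron density". A real experiment measures |F|^2, the
# intensities of the structure factors F; the phases never reach the
# detector.
rng = np.random.default_rng(0)
density = rng.random(32)

F = np.fft.fft(density)
amplitudes = np.abs(F)    # what the experiment gives you
phases = np.angle(F)      # what it does not

# With the true phases, the amplitudes reconstruct the density exactly.
recovered = np.fft.ifft(amplitudes * np.exp(1j * phases)).real

# With scrambled phases, the SAME amplitudes give a completely
# different (and wrong) density map.
wrong_phases = rng.uniform(-np.pi, np.pi, size=32)
wrong = np.fft.ifft(amplitudes * np.exp(1j * wrong_phases)).real

print(np.allclose(recovered, density))  # True: correct phases work
```

Recovering those missing phases (by heavy-atom derivatives, anomalous scattering, molecular replacement, and so on) is the hard part, and it only gets harder for something the size of a ribosomal subunit.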
By the early 2000s, all three groups were turning out ever-sharper X-ray structures of different ribosomal subunits from various organisms. The illustration above, courtesy of the Nobel folks, shows the 50S subunit at 9-angstrom (1998), 5-angstrom (1999) and 2.4-angstrom (2000) resolution, and shows you how quickly this field was advancing. Ramakrishnan's group teased out many of the fine details of codon recognition, and showed how some antibiotics known to cause the ribosome to start bungling the process were able to work. It turned out that the opening and closing behavior of the 30S piece was a key to this whole process, with error-inducing antibiotics causing it to go out of synch. And here's a place where the differences between bacterial ribosomes and eukaryotic ones really show up. The same antibiotics can't quite bind to mammalian ribosomes, fortunately. Having the protein synthesis machinery jerkily crank out garbled products is just what you'd wish for the bacteria that are infecting you, but isn't something that you'd want happening in your own cells.
At the same time, Steitz's group was turning out better and better structures of the 50S subunit, and helping to explain how it worked. One surprise was that there was a highly ordered set of water molecules and hydrogen bonds involved - in fact, protein synthesis seems to be driven (energetically) almost entirely by changes in entropy, rather than enthalpy. Both his group and Ramakrishnan's have been actively turning out structures of the ribosome subunits in complex with various proteins that are known to be key parts of the process, and those mechanisms of action are still being unraveled as we speak.
The Nobel citation makes reference to the implications of all this for drug design. I'm of two minds on that. It's certainly true that many important antibiotics work at the ribosomal level, and understanding how they do that has been a major advance. But we're not quite to the point where we can design new drugs to slide right in there and do what we want. I personally don't think we're really at that stage with most drug targets of any type, and trying to do it against structures with a lot of nucleic acid character is particularly hard. The computational methods for those are at an earlier stage than the ones we have for proteins.
One other note: every time a Nobel is awarded, thoughts turn to the people who worked in the same area but missed out on the citation. The three-recipients-max stipulation makes this a perpetual problem. This is outside my area of specialization, but if I had to list some people who just missed out here, I'd have to cite Harry Noller of UC-Santa Cruz and Marina Rodnina of Göttingen. Update: add Peter Moore of Yale as well. All of them work in this exact same area, and have made many real contributions to it - and I'm sure that there are others who could go on this list as well.
One last note: five Chemistry awards out of the last seven, by my count, have gone to fundamental discoveries in cell or protein biology. That's probably a reasonable reflection of the real world, but it does rather cut down on the number of chemists who can expect to have their accomplishments recognized. The arguing about this issue cannot be expected to cease any time soon.
+ TrackBacks (0) | Category: Analytical Chemistry | Biological News | Current Events | Infectious Diseases
October 6, 2009
I've been traveling since Saturday, and have just spent an unplanned night in Atlanta, so things are a bit behind schedule around here. Regular posting will resume here tomorrow, when we'll have news of the Nobel in chemistry for this year. See everyone then!
+ TrackBacks (0) | Category: Blog Housekeeping
October 5, 2009
As many had expected, a Nobel Prize has been awarded to Elizabeth Blackburn (of UCSF), Carol Greider (of Johns Hopkins), and Jack Szostak (of Harvard Medical School/Howard Hughes Inst.) for their work on telomerase. Blackburn had been studying telomeres since her postdoc days in the late 1970s, and she and Szostak worked together in the field in the early 1980s, collaborating from two different angles. Greider (then a graduate student in Blackburn's lab) discovered the telomerase enzyme in 1984. She's continued to work in the area, as well she might, since it's been an extremely interesting and important one.
Telomeres, as many readers will know, are repeating DNA stretches found on the end of chromosomes. It was realized in the 1970s that something of this kind needed to be there, since otherwise replication of the chromosomes would inevitably clip off a bit from the end each time (the enzymes involved can't go all the way to the ends of the strands). Telomeres are the disposable buffer regions, which distinguish the natural end of a chromosome from a plain double-stranded DNA break.
What became apparent, though, was that the telomerase complex often didn't quite compensate for telomere shortening. This provides a mechanism for limiting the number of cell divisions - when the telomeres get below a certain length, further replication is shut down. Telomerase activity is higher in stem cells and a few other specialized lines. This means that the whole area must be a key part of both cellular aging and the biology of cancer. In a later post, I'll talk about telomerase as a drug target, a tricky endeavor that straddles both of those topics.
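For the computationally inclined, that counting argument can be sketched in a few lines of Python. All the numbers below are made up purely for illustration - real telomere lengths, per-division losses, and critical thresholds vary a great deal by cell type:

```python
# Toy model of telomere-limited cell division. Every number here is an
# illustrative placeholder, not a measured biological value.

def divisions_until_senescence(telomere_bp=10_000,
                               loss_per_division_bp=100,
                               critical_bp=4_000):
    """Count divisions until the telomere would drop below a critical length."""
    divisions = 0
    while telomere_bp - loss_per_division_bp >= critical_bp:
        telomere_bp -= loss_per_division_bp
        divisions += 1
    return divisions

print(divisions_until_senescence())  # 60 divisions with these toy numbers
# Partial compensation by telomerase (a smaller net loss per division)
# stretches the limit out:
print(divisions_until_senescence(loss_per_division_bp=50))  # 120
```

The point of the toy model is just the shape of the result: the division limit scales with how much buffer you start with and how much you lose per round, which is why cell lines with high telomerase activity can keep dividing more or less indefinitely.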
It's no wonder that this work has attracted the amount of attention it has, and it's no wonder either that it's the subject of a well deserved Nobel. Congratulations to the recipients!
+ TrackBacks (0) | Category: Aging and Lifespan | Biological News | Cancer | Current Events
October 2, 2009
There's been a lot of valuable research into the placebo effect in recent years. That has interest in and of itself, and it also has a practical side. Understanding how people feel better on their own could tell us more about how to make our actual drugs work better, and it could also help us design clinical trials more efficiently. It would be a great help to know accurately how much of a positive effect is due to an investigational drug, without having to run thousands of people to separate that out statistically from a robust (but highly variable) placebo effect.
A new paper in the journal Pain (which has always gotten my vote for "Most To-the-Point Journal Title Possible") sheds some light on this issue, and on the mirror-image "nocebo effect". The authors looked over trials of several migraine drugs. In each case, there was a treatment arm and a placebo arm, and (since no one knew which group they were in) every patient got the lecture about the side effects they might experience if they turned out to be in the treatment group.
The key point is that the migraine trials were investigating three different classes of drugs (anti-inflammatories, triptans, and anticonvulsants), and these three, not surprisingly, have different sets of possible side effects. The patients taking the drugs certainly manifested some of these, but what about the placebo groups?
Well, the placebo groups in the anti-inflammatory trials reported more dry mouth, nausea and vomiting than the placebo arms of the triptan studies. The placebo patients in the anticonvulsant trials, though, had a higher incidence of fatigue, sleepiness, and dizziness than the anti-inflammatory placebo groups reported. In short:
We found specific side effects in the placebo arms of anti-migraine trials when analyzing the three groups of drugs. We observed that the side effects that are expected for the active drug against which the placebo is compared, are also more frequent in the placebo group. In particular, anticonvulsant-placebos appear to have a higher rate of AEs (adverse events) than the other two classes of anti-migraine drugs. . .
. . .Moreover, it is also important to note that a larger number of patients in the anticonvulsant-placebo group discontinued the study (withdrawals due to AEs) than those in the triptan-placebo and NSAID-placebo groups. Both patients’ and experimenters’ expectations may have affected the AEs occurrence in the placebo groups. . .
This sort of thing has been observed before, but this is a particularly neat example. As a researcher (or a patient), it's important to remember that we tend to get what we think we're going to get. And we need to be aware of that, and be ready to correct for it if we have to.
+ TrackBacks (0) | Category: Cardiovascular Disease | Clinical Trials | The Central Nervous System
October 1, 2009
By the way, I just wanted to thank everyone who's been stopping by here. Traffic for last month broke all records: 260,000 visits, 350,000 page views. And that's not too bad for a site that talks about smelly chemicals and the people who work with them! Much appreciated.
+ TrackBacks (0) | Category: Blog Housekeeping
Yesterday's post on citing the patent literature prompts today's. I realize that if you're not used to reading the things, patents can be rather odd and daunting. But there are some rules to follow to let you get useful information out of them. That depends on what you're looking for, though.
If you're searching for preparations of particular chemical compounds, there's an awful lot of that available, although sometimes it's not easy to extract. What you'll probably want to do is skip directly to the experimental section. Ignore the description; ignore the claims - if you want to find out how to make Compound X, you won't find the details you're looking for there.
Problem is, the experimental section will not necessarily be laid out in a user-friendly manner. It doesn't have to be, and in fact, it's sometimes deliberately convoluted. The first thing to make sure of is that you're looking at a real experimental procedure. If it's written in the present tense, and/or with vague details ("A compound of Formula 1 is combined with a base, preferably a tertiary amine, and is heated to between room temperature and 100 degrees C. . ."), then you'd better keep turning pages. This is not what you're looking for. This is a generic or prophetic example, and doesn't necessarily have anything to back it up. Keep going until you hit real amounts of real compounds, and real spectral data. Patent applications, ones that expect to stand up, anyway, have to exemplify the compounds that they're claiming, and that takes real data.
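If you do this kind of triage often enough, you can even rough out the real-versus-prophetic test in code. Here's a quick Python sketch; the cue phrases and units below are my own guesses at typical patent language, not any official standard, so treat it as a first-pass filter rather than a verdict:

```python
import re

# Heuristic flags for prophetic patent examples: present-tense hedged
# language ("is combined", "preferably") versus concrete amounts and
# spectral data. The patterns are illustrative guesses, not a standard.
PROPHETIC_CUES = re.compile(
    r"\b(is combined|is heated|is treated|preferably|a compound of formula)\b",
    re.IGNORECASE)
REAL_CUES = re.compile(
    r"(\d+(\.\d+)?\s*(mg|g|mmol|mol|mL)\b)|(1H NMR)|(m/z)",
    re.IGNORECASE)

def looks_real(procedure_text: str) -> bool:
    """True if the write-up shows concrete amounts or spectral data
    and lacks the hedged, generic phrasing of a prophetic example."""
    return bool(REAL_CUES.search(procedure_text)) and \
        not PROPHETIC_CUES.search(procedure_text)
```

A real prep ("To a solution of the bromide (250 mg, 1.02 mmol) in THF (5 mL)...") trips the amount/spectra patterns, while the generic "A compound of Formula 1 is combined with a base, preferably a tertiary amine..." trips the prophetic ones instead.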
If the patent is written with structures over each experimental procedure, then your job is a lot easier. Often the preps will be in sequence, one intermediate leading to another, although they don't have to be. Keep paging through until you've found the compound you're after. If the compounds are named in the procedures, a text search for some part of the particular name can save you a lot of time, as can a search for the name of some reagent that you may have seen that the inventors used (from a SciFinder search, for example).
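That text-search trick is also easy to script against a plain-text dump of the patent. A hedged sketch follows; the "Example N" heading pattern is an assumption about one common layout, so adjust it to whatever the filing in front of you actually uses:

```python
import re

def find_mentions(patent_text: str, fragment: str):
    """Yield (example_number, line) pairs wherever a name fragment or
    reagent appears, tracking the most recent 'Example N' heading.
    The heading format is an assumed layout, not universal."""
    current_example = None
    for line in patent_text.splitlines():
        header = re.match(r"\s*Example\s+(\d+)", line, re.IGNORECASE)
        if header:
            current_example = header.group(1)
        if fragment.lower() in line.lower():
            yield current_example, line.strip()

# Usage: list(find_mentions(text, "Pd(PPh3)4")) points you at the
# examples that used that reagent, without paging through by hand.
```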
Keep in mind that the particular compound you're looking for may appear as part of a table. Often patents will be written with a detailed general procedure for a particular example, and then there will be a line like "Following this same protocol, the following examples were also prepared. . .", and a table. I know that you'd rather have a detailed individual prep (so would I), but these are still generally pretty reliable. You find this sort of thing for lists of amides or Suzuki coupling products, so it's not too bad.
The amount of spectral data you'll get will vary. Actually, the rule for a patent application is "The more, the better", so a good one will include NMR data, and whatever else they can think of. The minimum is the LC/MS retention time and a few ions, which isn't all that helpful, but satisfies the legal requirements, barely. (Taiwan and a couple of other countries will often balk at that sort of thing, but that doesn't become an issue for a few years).
If you're looking for evidence of new biology or mechanisms of action, that will be a bit trickier. This will show up in the claims, but claims can be structured in a rather Byzantine manner. As with the chemistry, though, if it's something really important, there will be data to back it up. Gels, sequences, purification procedures - all that has to be in there if the company is really serious. If all you can find is a line somewhere about ". . .may also be useful for X Disease" or "as an antagonist of Receptor Y", with no more details, then you can ignore that.
The data for specific compounds will vary quite a bit, too. Everyone likes the sorts of patents that list off the compounds and give you real assay numbers next to them. Unfortunately, many filings take the "A, B, C" approach, where they bin the compounds into a few activity classes. That at least lets you pick the more potent ones out from the lesser ones, but there's an even more egregious practice. That's where there's a description of the assay, ending with a line like "All of the compounds claimed showed activity of at least 10 micromolar under these conditions". That sort of stuff drives me crazy, and I really think it should be legally discouraged. I think that this is gradually disappearing from the world, and speed the day.
If you're looking for the best compound in the whole patent, playing "hunt the clinical candidate", well, that can be a fun game. Sometimes it's clear in the way the claim language narrows down to a handful of "most preferred" compounds. Sometimes you can infer it by going through the experimental section and noting when a bunch of 50-milligram procedures suddenly jump to 25-gram procedures (or more). But if a company really wants to hide their single best compound in a forest of other good ones, it can be done. I once had to dig through a pile of Exelixis kinase patents, looking for structures for their clinical candidates, and after about a week, I concluded that it just couldn't be done. And they're not alone.
+ TrackBacks (0) | Category: Patents and IP