About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship on his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
October 29, 2010
The former GSK employee who went to the FDA about quality control problems in their manufacturing has been awarded $96 million for her work (it's calculated as a share of the fine against the company). This breaks all previous records - and you know, I think that's a good thing.
I've written about this sort of thing before, and I continue to think that this is a good law. It takes a tremendous amount of nerve to put your own livelihood at stake to report something that's going wrong (and isn't being fixed). The incentives need to be there. If we were a perfectly altruistic species, any of us would have no problem sacrificing ourselves immediately for the good of the whole. But the very fact that there's such bad conduct to take the risk of reporting on tells you that we're not that sort of species at all.
The case centred on a factory in Cidra, Puerto Rico, where GSK made a range of products including an antibiotic ointment for babies, and drugs to treat nausea, depression and diabetes. In August 2002, Eckard, a global quality assurance manager, led a team sent to the plant to investigate manufacturing violations that had been identified by the US Food and Drug Administration (FDA). Eckard lost her job nine months later after warning that the problems ran deeper than the FDA realised.
Eckard's lawyers, Getnick & Getnick, said she was made redundant against her will in May 2003 after repeatedly complaining to GSK's management that some drugs made at Cidra were being produced in a non-sterile environment, that the factory's water system was contaminated with micro-organisms, and that other medicines were being made in the wrong doses. . .
. . .Eckard tried to alert GSK's management to the situation in Cidra even after she left the company. According to the lawsuit brought by Eckard, she tried to call GSK's chief executive JP Garnier in July 2003, but he declined to speak to her. She took her concerns to the FDA in August 2003 after concluding that GSK's compliance department lacked the authority to address her concerns.
I'm not enough of a libertarian to think that the market will take care of all such behavior without an extra possibility of punishment backing it up. I think that we really do need regulatory authorities (although we can argue the details after that statement!), in the same way that we really do need police forces. Both of those groups can (and do) abuse their authority at times, but both of them also provide a much-needed function, human nature being what it is.
And the nature of big organizations being what it is, too. "Never explain by malice what can be explained by stupidity" is a pretty good rule, and in a large company, you can add inertia, backside-covering, careerism, and deciding that a given mess is someone else's problem. The bigger a company, the more chances there are for these things to happen. Perhaps the possibility of a $750 million fine will help to concentrate attention in such cases - and if not, well, how about a billion? Try for two?
Category: Regulatory Affairs
October 28, 2010
I'm enjoying myself very much in the lab today, doing something I haven't done in 20 years: photochemistry. I did some during my post-doc (with Bernd Giese, which is also the last time I've done free radical chemistry, at least on purpose). Since then, though, it's one of those things that's never come up. We had a mercury lamp apparatus in my grad school group, which I saw used a few times - one of which resulted in one of those nose-wrinkling "What's that funny smell?" moments, when the person running it forgot to turn on the cooling water. Don't do that. Medium-pressure mercury lamps can get pretty toasty. (They'll also permanently tan your eyeballs if you're so foolish as to look at them, I should also note, so don't do that, either!)
Most synthetic chemists will have had a brief experience with the technique - it's very appealing to think of doing chemistry just by shining a light on the reaction. But there can be a lot of variables - the sort of lamp you use (and thus the wavelengths and energy flux), various filters, sensitizing additives, hardware setups. Many people find that they use it for one reaction at some point, to make a specific compound, and never quite find a use for it again. In my experience, every decent-sized chemistry department has a photochemical rig of some sort, and no one quite knows where all its parts are.
That's probably a shame. There are a lot of unusual and interesting reactions that can be done photochemically - if you like 3- and 4-membered rings, this is certainly a field you should look into. I can recommend this recent book as a general review of the field, for anyone who's thinking about it. We'll see how much use I get out of my current setup, but for now, I'm happily blasting away with the ultraviolet. . .
Update: blasting away is right! My cooling water dribbled down and then cut out on me after I tried to turn it down a bit, and, well. . . now I'm cleaning melted goo off of the quartz. A razor blade is working pretty well, but that's no way to treat a working piece of equipment.
Category: Life in the Drug Labs
A reader forwards an e-mail from Harris Interactive, a marketing research firm that says that it's running a survey on membership in the American Chemical Society. The reason he sent it along, though, is that it looks rather odd. The subject line of the message is three lines of gibberish, and it offers $150 for participation, which seems rather high for a survey company sending out random emails.
If this is something the ACS has commissioned, well, they're (a) probably spending too much money on it, and (b) should realize that the message is triggering the mental spam filters of its recipients. And if it's not the ACS, then who the heck is it? Any ideas?
Category: Chemical News
October 27, 2010
Since graphene was worth a Nobel prize this year, it's only fitting that I mention a recent application of it in chemical synthesis. A paper in Angewandte Chemie shows how graphene oxide can be used as an oxidizing reagent for organic compounds. It performs primary alcohol-to-aldehyde, secondary alcohol-to-ketone, and alkyne-to-methylene ketone reactions quite well. This doesn't seem to be due to residual metals, but is a reaction of the graphene oxide (GO) itself, which is probably a complex mixture of epoxides and who-knows-what on the carbon surface.
Interestingly, it appears that the GO can be regenerated by atmospheric oxygen as the reaction goes on (and then re-used), so in the end, these processes are being performed by the oxygen itself. This could be an appealing method for scaleup, since it drastically reduces some possible waste streams. The turnover isn't as high as with some more traditional oxidants, but the cost might be hard to beat.
The first thing I thought of was using this material in a flow reactor, perhaps with occasional bubbling of oxygen into the solvent stream. It seems likely that as we learn to manipulate the surfaces of such materials that we'll find some very useful catalysts. . .
Category: Chemical News
I had trouble believing this headline today, but it's a real one. A convicted murderer was set to be executed in Arizona, but there's apparently been a shortage of sodium thiopental, which (I have to say) I didn't know was the preferred drug for this use. The Arizona authorities imported some from Great Britain, whereupon the convicted man's lawyers got a stay of execution, on the grounds that this particular material had not been FDA-approved.
Well, that's a new one. The idea that a drug being used to kill someone has to be properly evaluated for safety and efficacy is not one that would have occurred to me, but then, I'm not a lawyer. Thiopental, I should add, is not exactly an experimental drug. It's a short-acting barbiturate that's been around forever as an anaesthetic. It has one supplier in the US, but can be sourced, no doubt, from many others around the world. And thus this Arizona case. I notice that many of the news stories refer to use of a "non-approved drug", but that should be more properly stated as a non-approved supplier of a drug that's been around forever, and (moreover) used a great number of times in executions and euthanasia.
This argument held things up, briefly, but the Supreme Court last night tossed that one out 5 to 4, and the prisoner involved (I've no desire to use his name) was executed. Readers from countries without the death penalty may well find this whole situation grotesque - well, you can be sure that many people inside the US do as well. My biggest problem, though, is that the prisoner involved committed his crime in 1989 and is only now paying this price for it.
Update: Here's the Supreme Court order in this case (PDF). The interesting passages:
. . .There is no evidence in the record to suggest that the drug obtained from a foreign source is unsafe. The district court granted the restraining order because it was left to speculate as to the risk of harm. . .But speculation cannot substitute for evidence that the use of the drug is “‘sure or very likely to cause serious illness and needless suffering.’” . . .There was no showing that the drug was unlawfully obtained, nor was there an offer of proof to that effect.
Category: Current Events
October 26, 2010
Get ready for the life-extension folks to jump on this one: there's a report out in PNAS that the longtime treatment for leprosy (Hansen's disease), diaminodiphenylsulfone (DDS or dapsone), also prolongs life in the nematode C. elegans.
We seem to be talking about nematodes a lot around here recently. The authors of the current study (from Korea) got around the dosing problem by feeding DDS to bacteria, and then feeding those to the nematodes. (When you can get away with that, it seems like the most reliable way of getting drugs into the little beasts). The nematodes accumulated the compound up to about 5 mg/kilo body weight - although I have to say, a kilo of nematodes is a rather alarming thought. The treated animals showed a significantly longer lifespan, faster body movements compared to untreated controls, and a delay in accumulating the "aging pigment" lipofuscin.
Now, DDS kills bacteria by inhibiting folate synthesis, but that doesn't seem to have anything to do with lifespan extension. The authors found that one of its key targets might be pyruvate kinase - and this might be the source for the mild anemia that's sometimes seen as a side effect in human patients. Nematodes have two isoforms of the enzyme, one mostly in muscle, and the other mostly in the digestive tract. Further study (with RNAi, etc.) showed that the lifespan extension seems to be working through the former, but not the latter. But it also showed that this probably can't be responsible for the whole lifespan effect, either: mutant nematodes with that isoform deleted live longer than wild type, but treating them with DDS makes them live longer still.
The authors point out that dapsone has been used in humans for a very long time, and that there's a 5% gel that's been shown to be safe for long-term treatment (and which reaches the blood levels that you'd think would be sufficient). They finish up by saying: "We suggest that it is worthwhile to examine whether DDS is effective in enhancing longevity in humans as well." There are enough people interested in these things that I think that this will be tried out very shortly, probably starting this week, albeit in a rather uncontrolled manner. . .
Category: Aging and Lifespan
Earlier this year, I wrote here about using calorimetry in drug discovery. Years ago, people would have given you the raised eyebrow if you'd suggested that, but it's gradually becoming more popular, especially among people doing fragment-based drug discovery. After all, the binding energy that we depend on for our drug candidates is a thermodynamic property, and you can detect the heat being given off when the molecules bind well. Calorimetry also lets you break that binding energy down into its enthalpic (delta-H) and entropic (T delta-S) components, which is hard to do by other means.
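That decomposition is just arithmetic once you have a Kd from any affinity measurement and a ΔH from the calorimeter. Here's a minimal sketch of how the numbers fall out (the 10 nM / −8 kcal/mol example compound is invented for illustration, not taken from any real project):

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def binding_thermodynamics(kd_molar, delta_h_kcal, temp_k=298.15):
    """Split binding free energy into enthalpic and entropic parts.

    ITC measures delta_H directly; the free energy comes from the
    affinity via delta_G = RT*ln(Kd), and the entropic term is then
    whatever is left over: T*delta_S = delta_H - delta_G.
    """
    delta_g = R * temp_k * math.log(kd_molar)  # negative for a binder
    t_delta_s = delta_h_kcal - delta_g         # entropic contribution
    return delta_g, t_delta_s

# Hypothetical 10 nM ligand with an ITC-measured delta_H of -8 kcal/mol:
dg, tds = binding_thermodynamics(1e-8, -8.0)
print(f"dG = {dg:.1f} kcal/mol, TdS = {tds:+.1f} kcal/mol")
```

For that made-up compound, ΔG comes out near −10.9 kcal/mol, so the binding is mostly enthalpy-driven with a smaller favorable entropic term making up the rest.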
And there's where the arguing starts. As I mentioned back in March, one idea that's been floating around is that better drug molecules tend to have more of an enthalpic contribution to their binding. Very roughly speaking, enthalpic interactions are often what med-chemists call "positive" ones like forming a new hydrogen bond or pi-stack, whereas entropic interactions are often just due to pushing water molecules off the protein with some greasy part of your molecule. (Note: there are several tricky double-back-around exceptions to both of those mental models. Thermodynamics is a resourceful field!) But in that way, it makes sense that more robust compounds with better properties might well be more enthalpically-driven in their binding.
But we do not live in a world bounded by what makes intuitive sense. Some people think that the examples given in the literature for this effect are the only decent examples that anyone has. At the fragment conference I attended the other week, though, a speaker from Astex (a company that's certainly run a lot of fragment optimization projects) said that they're basically not seeing it. In their hands, some lead series are enthalpy-driven as they get better, some are entropy-driven, and some switch gears as the SAR evolves. Another speaker said that they, on the other hand, do tend to go with the enthalpy-driven compounds, but I'm not sure if that's just because they don't have as much data as the Astex people do.
So as far as I'm concerned, the whole concept that I talked about in March is still in the "interesting but unproven" category. We're all looking for new ways to pick better starting compounds or optimize leads, but I'm still not sure if this is going to do the trick. . .
Category: Analytical Chemistry | Drug Assays | Life in the Drug Labs
October 25, 2010
I had an email this morning asking me to settle a bet on lab technique. I'm not sure I know the answer myself, so I figured I'd throw the question out to the readership.
So here goes: in your vacuum cold trap, which I'll assume is cooled by dry ice (and not liquid nitrogen, for the most part), what solvent do you use: acetone or isopropanol? (If you use something else, feel free to add it to the list, but I think you'll be in a distinct minority). As for me, I used acetone back in grad school, but switched over to isopropanol years ago, because I didn't have to change it (or add to it) so often.
Category: Life in the Drug Labs
Mat Todd of the University of Sydney looks over the SciFoo conference that we both attended during the summer, and contrasts that to an ACS meeting. The comparison isn't kind, as you'd imagine:
. . .with a few very notable exceptions the talks I saw were a) presented in a dull Powerpoint-heavy series of slides with verbal commentary about what was on the slides where even the presenter was visibly bored with what they were saying and b) on published material that was c) way too predictable and incremental. So both the presentational style and the content were disappointing. So many talks at the ACS would have been more interesting if the speaker had simply given out paper copies of their latest paper and given us 10 minutes to read it in silence then 10 minutes to talk about it. Now of course specialism necessitates incrementalism in content, but it’s no good if the meeting becomes a chore to sit and listen to. Nor is it good if the talks come out of the Powerpoint Machine (the genius of the “Chicken Talk” is that you can kind of follow the talk structure without listening to the content – it sounds exactly like most academic talks right up to the last supplementary slide in response to the second question at the end). In maybe 80% of the talks I attended nobody asked questions, or nobody was allowed to, or people asked “pity questions” just to break the awkward silence, but which were in no way interesting in themselves.
"A chore" is exactly what I find too many presentations and conferences to be, unfortunately. If we limited presentations, as Mat suggests, to people who are excited about their results, we'd have a lot of short meetings in this field. . .
Category: Life in the Drug Labs
Arena released their complete response from the FDA over the weekend, regarding the non-approval of their weight loss drug Lorcaserin. And the arguing has already started about just how bad the news is. There are several levels that this process could be tracking on, and we just don't know which one it's on yet.
And the varied regulatory paths that result give you answers to "When will the drug be approved" ranging from "Maybe in six months" through "Maybe in a year" on out to "Maybe in a few years", which at that point shades into "Never". One of the main sticking points is the carcinogenicity data from the animal studies - the FDA is worried, and they want Arena to round up some outside experts to go over the data to address their concerns. Problem is, we don't quite know what that means. It could be anything from "Have some people assure the FDA that everything's actually fine" (the Arena bull position) to "Go run a bunch more long clinical trials" (which is one of the bear positions). I think it's unlikely that the FDA will let the company go through without at least running more rodent studies; I just can't see an outside review of the data doing enough to calm them down. The agency, I believe, is in more of a "Get some people to help you design some good studies" mode.
Matthew Herper's take seems reasonable to me. As he points out, even when companies have gotten a drug through after one of these Complete Response Letters, it's taken at least seven months when the issues didn't involve the clinic. He seems to be taking flak from Arena investors who have lorcaserin penciled in for somewhere around Valentine's Day. But I don't see how that's going to happen, either. Try April Fool's - of some other year.
Category: Diabetes and Obesity | Regulatory Affairs
October 22, 2010
Well, the latest for 1960, anyway. That's the Bruker KIS-1 NMR machine there, folks, operating at 25 MHz, and ready to dim the lights in the whole building when you switch on that electromagnet. Allow about 12 hours of acquisition time to get a decent spectrum.
For those of you outside the field, a 300 MHz NMR machine is now considered an average workhorse instrument, and should give you a spectrum (with resolution that would have made someone back then faint with joy) in a minute or so of acquisition time. We can do things with modern machines that they wouldn't have even dreamed of back in 1960, and people are still thinking up new tricks. All hail NMR!
Category: Analytical Chemistry
Here's something that you don't think about until you actually work in a department full of chemists: how do you keep track of who's got what, and where it is? Everyone has reagents on their bench, and hidden away under the fume hood, and they're ordering more (and using up the current bottles) all the time. And people are wandering from lab to lab, borrowing and pilfering, sometimes when the original owners are there, and sometimes not. So how do you know what you have?
I've seen a number of approaches to this chemical inventory problem. The essential thing is that every bottle of every reagent be trackable. That means some sort of bar-coding system, most likely. Those bar codes need to go on when the compounds come in the door, ideally, so there aren't a lot of invisible reagents floating around. I think the best way to do this is to have the shipping and receiving people involved - if you trust the chemists to bar-code things, many of them just won't quite get around to it.
The next big question is whether you're going to have a centralized chemical stockroom or not (I've worked under both systems). The stockroom probably makes it easier to keep track of things, in general, since otherwise the available reagents are distributed throughout the labs at all times (instead of the ten per cent or so that are actually in active use). And it helps to have some place to send all those bottles back to - when you clean up your bench, you know that there's one thing you can do immediately, which helps keep the chemicals homing back to the central location.
A stockroom, though, requires dedicated space and dedicated head count, and neither of those are always feasible. The spread-throughout-the-labs approach puts the work back on the chemists. Its biggest disadvantage is entropy: bottles move around, get silently consumed, or get just plain lost. (That happens with a stockroom system, too, but at a slower rate). After a while, your map of the chemical inventory is useless - and for popular reagents, "a while" might be about two weeks.
That brings up the moving-chemicals problem, and to be honest, I've never seen a good solution to that one. Ideally, any time a person borrows some reagent from its known location, they scan the bar code so the system knows that it's moved. In practice, you know, you're just using it for a couple of days. Or you're just running one reaction, and you're going to take it right back. It's just right down the hall; the folks down there know where it is. Right. A stockroom system keeps this from randomizing things as quickly, but no matter what, this sort of Brownian motion is going to scramble things eventually.
So there has to be a regular inventory taken, no matter whose system you're using. Whether that's someone from the stockroom coming through and scanning all the benches and cabinets, or whether you declare Inventory Day and make all the chemists do it themselves, it has to be done. Twice a year is not too often, in my experience.
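The core of any such system is pretty small: a database keyed on bar codes, a scan on every move, and an audit that diffs scanned reality against what the database believes. Here's a toy sketch of that idea - every name and bar code in it is hypothetical, and a real system would need a lot more (vendors, lot numbers, hazard classes):

```python
from dataclasses import dataclass, field

@dataclass
class Bottle:
    barcode: str
    reagent: str
    location: str
    moves: list = field(default_factory=list)  # audit trail of relocations

class Inventory:
    """Toy bar-code inventory: every move is a scan, every scan is logged."""

    def __init__(self):
        self.bottles = {}

    def receive(self, barcode, reagent, location="stockroom"):
        # Bar-code at the receiving dock, so no invisible reagents get in
        self.bottles[barcode] = Bottle(barcode, reagent, location)

    def move(self, barcode, new_location):
        b = self.bottles[barcode]
        b.moves.append((b.location, new_location))
        b.location = new_location

    def audit(self, scans_by_location):
        """Inventory Day: return bar codes the database has but no one scanned."""
        seen = {bc for scans in scans_by_location.values() for bc in scans}
        return sorted(bc for bc in self.bottles if bc not in seen)

inv = Inventory()
inv.receive("BC0001", "Pd(PPh3)4")
inv.move("BC0001", "Lab 3, hood 2")          # borrower actually scanned it
missing = inv.audit({"Lab 3, hood 2": {"BC0001"}})  # empty: nothing lost
```

The weak link, of course, is exactly the one described above: the `move` call only happens if the borrower bothers to scan, which is why the periodic `audit` step can't be skipped.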
If anyone has solutions to some of these problems that I haven't touched on, feel free to share them in the comments. But please, no "Just Make Everyone Act Responsibly For Once" recommendations. Let's assume that people are intrinsically looking for the easy ways out, and work from that - it's a worldview that has never disappointed me.
Category: Life in the Drug Labs
October 21, 2010
There's a headline I've never written before, for sure. A new paper in PNAS describes an assay in nematodes to look for compounds that have an effect on nerve regeneration. That means that you have to damage neurons first, naturally, and doing that on something as small (and as active) as a nematode is not trivial.
The authors (a team from MIT) used microfluidic chips to direct single nematodes into a small chamber where they're held down briefly by a membrane. Then an operator picks out one of its neurons on an imaging screen, whereupon a laser beam cuts it. The nematode is then released into a culture well, where it's exposed to some small molecule to see what effect that has on the neuron's regrowth. It takes about 20 seconds to process a single C. elegans, in case you're wondering, and I can imagine that after a while you'd wish that they weren't streaming along quite so relentlessly.
The group tried about 100 bioactive molecules, targeting a range of known pathways, to see what might speed up or slow down nerve regeneration. As it happens, the highest hit rates were among the kinase inhibitors and compounds targeting cytoskeletal processes. (By contrast, nothing affecting vesicle trafficking or histone deacetylase activity showed any effect). The most significant hit was an old friend to kinase researchers, staurosporine. Interestingly, this effect was only seen on particular subtypes of neurons, suggesting that they weren't picking up some sort of broad-spectrum regeneration pathway.
The paper acknowledges that staurosporine has a number of different activities, but treats it largely as a PKC inhibitor. I'm not sure that that's a good idea, personally - I'd be suspicious of pinning any specific activity to that compound without an awful lot of follow-up, because it's a real Claymore mine when it comes to kinases. The MIT group did check to see if caspases (and apoptotic pathways in general) were involved, since those are well-known effects of staurosporine treatment, and they seem to have ruled those out. And they also followed up with some other PKC inhibitors, chelerythrine and Gö 6983, and these showed similar effects.
So they may be right about this being a PKC pathway, but that's a tough one to nail down. (And even if you do, there are plenty of PKC isoforms doing different things, but there aren't enough selective ligands known to unravel all those yet). Chelerythrine inhibits alanine aminotransferase, has had some doubts expressed about it before in PKC work, and also binds to DNA, which may be responsible for some of its activity in cells. Gö 6983 seems to be a better tool, but it is in the same broad chemical class as staurosporine itself, so as a medicinal chemist I still find myself giving it the fishy eye.
This is very interesting work, nonetheless, and it's the sort of thing that no one's been able to do before. I'm a big fan of using the most complex systems you can to assay compounds, and living nematodes are a good spot to be in. I'd be quite interested in a broader screen of small molecules, but 20 seconds per nematode surgery is still too slow for the sort of thing a medicinal chemist like me would like to run - a diversity set of, say, ten or twenty thousand compounds, for starters. And there's always the problem we were talking about here the other day, about how easy it is to get compounds into nematodes at all. I wonder if there were some false negatives in this screen just because the critters had no exposure?
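The throughput arithmetic is worth running, because it's what kills the screening idea. At 20 seconds per animal, the wall-clock time scales brutally with library size (the worms-per-compound figure below is my own assumption, not a number from the paper):

```python
def screen_duration_days(n_compounds, worms_per_compound=20,
                         seconds_per_worm=20, hours_per_day=8):
    """Rough wall-clock estimate for a laser-axotomy screen of this type."""
    total_seconds = n_compounds * worms_per_compound * seconds_per_worm
    return total_seconds / 3600 / hours_per_day

# The ~100-compound pilot versus the 10,000-compound diversity set
# a medicinal chemist would actually want:
print(f"{screen_duration_days(100):.1f} working days")    # pilot scale
print(f"{screen_duration_days(10_000):.0f} working days") # diversity set
```

Under those assumptions the pilot is a day or two of surgery, but ten thousand compounds works out to well over a hundred working days of nonstop laser ablation - hence the wish for something faster.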
Category: Biological News | Drug Assays | The Central Nervous System
Remember the Plavix Confusion of 2006? That's when Canadian generic company Apotex managed to jump onto the market for a few weeks with its own version of the BMS/Sanofi-Aventis blockbuster. It was always a bit unclear whether they had the right to do that - there was a case that the company had played rough but fair with some tricky language in their agreements with the two pharmas. Still, Apotex racked up over 800 million dollars in sales while everyone was sorting that out.
Well, four years down the road, a judge has ruled that Apotex owes BMS and S-A damages for their adventure: half the sales, plus interest. That's still less than the triple damages that could be obtained in an open-and-shut case of patent infringement, but it's pretty substantial. I wonder how much of the money Apotex still has handy?
Category: Cardiovascular Disease | Patents and IP
October 20, 2010
This paper in Nature Reviews Cancer is getting more attention in the popular press than most papers in that journal manage. Titled "Cancer: an old disease, a new disease or something in between?", it goes over the archaeological evidence for cancer rates in ancient populations, and goes on to speculate whether the incidence of the disease is higher under modern conditions.
I'd be interested in knowing that, too, but the problem seems to be that there's not much evidence one way or another. The authors concentrate on the evidence that can be found in bone samples, since these are naturally the most numerous, but it seems to be quite hard to get any meaningful histology data from ancient bone tissue. As for other tissue, the Egyptian record is probably the most statistically robust, thanks to deliberate mummification and the desert conditions, but even that isn't too definitive. The Greeks definitely described metastatic tumors, though (and in fact, gave us our name for the disease).
Still, they believe that the archaeological record indicates a smaller incidence of cancer than you'd expect, although given the long list of confounding factors they present, I'm not sure how sturdy that result is. One of the biggest of those is the shorter life expectancies in ancient populations, and it's not easy to get around that one. As the authors themselves point out, working-class ancient Egyptians seem to have mostly died at ages between 25 and 30 (!), and there aren't many forms of cancer that would be expected to show up well under those demographic conditions. (Osteosarcoma is the main tumor type the authors look for as being not so age-dependent).
The paper itself is fairly calm about its conclusions:
Despite the fact that other explanations, such as inadequate techniques of disease diagnosis, cannot be ruled out, the rarity of malignancies in antiquity is strongly suggested by the available palaeopathological and literary evidence. This might be related to the prevalence of carcinogens in modern societies.
But the press reports (based, I think, partly on further statements from the authors) haven't been. "Cancer Is A Modern Disease", "No Cancer In Ancient Times" go the headlines. (Go tell that last one to the ancient Greeks). And it's impossible to deny the environmental causes of some cancers - I'll bet that lung cancer rates prior to the introduction of tobacco into the Old World were pretty low, for example. Repeated exposure to some industrial chemicals (benzene and benzidine, right off the top of my head) are most definitely linked to increased risk of particular tumor types.
So in that way, modern cancer incidence probably is higher, at least for specific forms of the disease. But (as mentioned above) the single biggest factor is surely our longer lives. Eventually, some cells are going to hit on the wrong combination of mutations if you just give them enough time. And the widely reported statement from Professor David, one of the paper's authors, that "There is nothing in the natural environment that causes cancer", is flat-out wrong. What about UV light from the sun? Aflatoxins from molds? Phorbol esters in traditional herbal recipes?
That statement strongly suggests a habit of mind that I think has to be guarded against: the "Garden of Eden" effect. That's the belief, widely held in one form or another, that there was a time - long ago - when people were in harmony with nature, ate pure, wholesome natural foods (the kind that we were meant to eat), and didn't have all the horrible problems that we have in these degenerate modern times. (You can see a lot of Rousseau in there, too, what with all that Noble Savage, corrupted-by-modern-society stuff).
This 1990 article (PDF) by Bruce Ames and Lois Gold, "Misconceptions on Pollution and the Causes of Cancer" is a useful corrective to the idea that modern environments cause all cancers. You'll have to guard yourselves, though, against the prelapsarian Golden Age fallacy. It's everywhere.
Category: Cancer | General Scientific News
October 19, 2010
How reliable is the medical literature, anyway? This profile of John Ioannidis at The Atlantic is food for thought. Ioannidis is the man behind the famous "Why Most Published Research Findings Are False" paper a few years ago, and many others in the same vein.
The problems are many: publication bias (negative findings don't get written up and reported as often), confirmation bias, and desire to stand out/justify the time and money/get a grant renewal. And then there's good old lack of statistical power. Ioannidis and his colleagues have noted that far too many studies that appear in the medical journals are underpowered, statistically, relative to the claims made for them. The replication rates of such findings are not good.
Interestingly, drug research probably comes out of his analysis looking as good as anything can. A large confirmatory Phase III study is, as you'd hope, the sort of thing most likely to be correct, even given the financial considerations involved. Even then, though, you can't be completely sure - but contrast that with a lot of the headline-grabbing studies in nutrition or genomics, whose results are actually more likely to be false than true.
Ioannidis's rules from that PLoS Medicine paper are worth keeping in mind:
The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.
The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.
The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.
And although he's talking about the published literature, these things are well worth keeping in mind when you're looking at your own internal data in a drug discovery project. Some fraction of what you're seeing is wrong.
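Ioannidis's argument reduces to some simple arithmetic about positive predictive value. Here's a rough sketch of the formula from his PLoS Medicine paper (ignoring his bias term); the example numbers are mine, invented just to show the shape of the problem:

```python
# Sketch of the positive-predictive-value argument from Ioannidis's
# PLoS Medicine paper. PPV = (1-beta)R / ((1-beta)R + alpha), where
# R is the pre-study odds that a tested relationship is real.
def ppv(power, alpha, prior_odds):
    """Probability that a 'statistically significant' finding is true.

    power      : 1 - beta, the chance of detecting a real effect
    alpha      : significance threshold (false-positive rate)
    prior_odds : R, ratio of true to false relationships being tested
    """
    true_pos = power * prior_odds
    false_pos = alpha
    return true_pos / (true_pos + false_pos)

# A well-powered confirmatory trial on a plausible hypothesis:
print(ppv(power=0.8, alpha=0.05, prior_odds=0.5))   # ~0.89

# An underpowered exploratory study chasing long-shot associations:
print(ppv(power=0.2, alpha=0.05, prior_odds=0.05))  # ~0.17
```

The second case is the headline-grabbing nutrition/genomics scenario: even with a "significant" p-value, the finding is more likely false than true, which is exactly his point.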
Category: Clinical Trials | Drug Development | The Scientific Literature
October 18, 2010
This year's Nobel for palladium-catalyzed coupling reactions highlighted how useful these have become. But what every practicing organic chemist knows is how complicated they can be, particularly when your first couple of favorite recipes don't work. I've long thought that almost any metal-catalyzed transformation can be optimized, if you're just willing to devote enough of your life to it. But you have to have a good reason to wade into the swamp, because there sure are a lot of variables that can be tweaked. Here's a good case in point, recently published in Organic Letters. A perfectly reasonable reaction (C-H arylation of a chloropyrazole, which had been demonstrated before) was run through the statistical wringer to track down the best conditions.
They looked at 6 solvents, 10 bases, 4 catalysts, 5 ligands, and 4 additives, which would give you 4800 combinations if you ran the whole shebang. A Design of Experiments approach cut the number of actual runs down significantly, and then (fortunately) some of the variables turned out to be pretty insensitive. So this one wasn't as bad as some of them get - the ligand didn't seem to have too much effect, for example, whereas in some other Pd couplings it's crucial. (The choice of base had a much bigger effect, in case you're wondering). Their best set of conditions seems to work reasonably well across a range of possible substrates.
DoE is worth a post of its own, and that'll be a timely thing for me. After brushing up against it for years, I may finally have a use for the technique soon. For those who don't know it, it's basically a way to figure out how to most efficiently sample "experiment space", by getting the most information out of each different run. And then you use principal components analysis (or something similar) to see what the most important changes were, and how they correlate to each other. It's asking, mathematically, what a synthetic chemist wants to know about a complicated reaction recipe: what changes are responsible for most of the variation in the results, and how can I track them down by running a reasonable number of experiments? In the drug industry, process chemists think about this sort of thing a lot more than discovery chemists do, but it's worth keeping an eye out for any time the approach could be helpful.
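To get a feel for the scale of the problem, here's a toy sketch of that experiment space. The factor counts match the ones listed above (6 solvents, 10 bases, 4 catalysts, 5 ligands, 4 additives), but the names are invented, and real DoE software would pick a structured subset (fractional factorial, D-optimal, and so on) rather than the crude random sample shown here:

```python
import itertools
import random

# Invented factor names; only the counts come from the paper's screen.
factors = {
    "solvent":  [f"solvent_{i}" for i in range(6)],
    "base":     [f"base_{i}" for i in range(10)],
    "catalyst": [f"cat_{i}" for i in range(4)],
    "ligand":   [f"lig_{i}" for i in range(5)],
    "additive": [f"add_{i}" for i in range(4)],
}

# The full factorial grid: every combination of every factor level.
full_factorial = list(itertools.product(*factors.values()))
print(len(full_factorial))  # 6 * 10 * 4 * 5 * 4 = 4800 runs

# A DoE package chooses its subset so that factor effects can be
# separated statistically; random sampling is just a stand-in here,
# but it shows the size of the savings (about 1% of the grid):
random.seed(0)
screen = random.sample(full_factorial, 48)
print(len(screen))
```

The whole trick of DoE is that the chosen 1% isn't random: it's arranged so that the main effects (and chosen interactions) can each be estimated from the small set of runs.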
Category: Life in the Drug Labs
Financially, maybe not as well as you'd think. Ask Martin Chalfie, one of the fluorescent-protein Chemistry prize winners from 2008. . .
Category: General Scientific News
We're going to have to wait to find out if the whole Schering-Plough-buys-Merck charade is really going to allow revenue from J&J's Remicade to stay with the merged company. That merged company is known as "Merck", of course, and is run by Merck people from Merck's headquarters, so it's going to be interesting to see how that dispute goes. But although the first arguments have been made before an arbitration board, the decision doesn't look to be made until sometime next year. It sounds as if Merck is already trying to lower expectations, though. . .
Category: Business and Markets
October 15, 2010
After reading this piece on Chembark, I find that I have to help defend the whole chem-blogging enterprise. The latest Analytical Chemistry has an editorial from Royce Murray, on the subject. Unfortunately, it sounds like something written several years ago. He lays out the current model of scientific publishing, and along the way, briefly defends journal impact factors. Then he says that the lay public has traditionally received news of scientific discoveries through reports in magazines and newspapers, but that the finances in those areas have produced a "shrinkage" of the flow of reliable information to the public. And now we come to the part that really worries him:
In the above light, I believe that the current phenomenon of “bloggers” should be of serious concern to scientists. Bloggers are entrepreneurs who sell “news” (more properly, opinion) to mass media: internet, radio, TV, and to some extent print news. In former days, these individuals would be referred to as “freelance writers”, which they still are; the creation of the modern non-word “blogger” does not change the purveyor. An essential change is that these new freelancers, with the megaphone of the internet, can reach a much larger audience of potential clients than was possible in the past (and harness free “information sources”). . .blogging “agencies” are popping up that openly advertise “no formal qualifications are necessary” (as an internet search for “qualifications of bloggers” revealed). Who are the fact-checkers now? There are no reviewers in a formal sense, and writing can be done for any purpose—political, religious, business, etc.—without the constraint of truth.
He goes on to bemoan the lack of a system to qualify and fact-check these "bloggers" - I like the quotation marks, by the way - maybe we should stick those around every word that entered the language less than ten years ago, just to be sure. Now, it would be easy for me to spend a few paragraphs in the same mode as that last sentence, and as Murray accurately notes, there's no editor to stop me. But I won't, because some of his worries are well-founded.
There is indeed a lot of inaccurate nonsense on the internet. And everyone should read what they find online with a thought to who's written it, and why. But everyone should do the same with stuff that's printed on flattened sheets of dead trees, too, even if there are flattened-dead-tree-sheet editors and fact checkers. This is no place to list the stories that have been horribly messed up by even the most respectable of the old media. I'm thinking of a good list right now; any well-informed person should be able to do the same. (If you can't, you're not as well-informed as you think you are). And there is indeed a lot of good science reporting in newspapers and magazines, although we can't ignore the fact that there's an awful lot of lazy and sloppy science reporting, too.
But there's a lot of inaccurate nonsense in the peer-reviewed literature, too. Without editors and reviewers there would surely be more, but too much junk gets through as it is. And if you want to see that stuff flagged, you'll do well to read the chemistry blogs. Murray's editorial doesn't seem to note that the most widely read ones are all written by chemists, not these unqualified people he's worried about. Would it be a cheap shot to point out that some editing and fact-checking might have caught that point before the editorial went out? Or (the other way around) to point out that a quick look through the scientific blogging world would have done just as well?
This, to me, is the real change that blogging has wrought, and I think it's for the better. Now, anyone who has the desire and ability can write about what they really know, about what they do for a living, and find an audience. I am most definitely not making a living as a journalist. My blog is a useful sideline to my real job, which is drug discovery. When I started in this industry, there was no way for me to self-publish my thoughts about it, but now there is, and I couldn't be happier about that.
Murray is suffering, I think, from a mental block, one that comes from his experience of journalism over his lifetime. He (and many other people) seem to feel that reporting is some sort of special profession, and that "real" journalists are the ones who write for the "real" news outlets. And it's a world where everyone knows which ones those are. It was fairly easy to believe that during the last half of the 20th century, but not so easy before it. (Or, as we're seeing, afterwards). All kinds of scruffy, opinionated people used to run newspapers in this country, and now they have internet sites. As do scientists, professors, lawyers, and anyone else with a keyboard and the desire. They're writing "for any purpose", just the way that Murray is worried about. And it's great.
A couple of postscripts: I should point out that I never would have seen Murray's piece at all had it not been for reading the scientific blogs - I hope that gives him some food for thought. He should also check out the above-linked Chembark post, as well as this response from Terra Sigillata (on an ACS site, no less), this one at InSightU, this one at Science 2.0, and this one at Cephalove. There will surely be more. Quite a lot of discussion! And it would be worth wondering how much of an impact this editorial would have had if it had only appeared in the print version of the journal, instead of being picked up by the Blogging Hordes. Could it be that he knows much more about the internet than he seems to, and he's successfully trolled us all?
Category: The Scientific Literature
October 14, 2010
That would be my reaction if asked to take a look at the structures in this new paper in JACS. As the authors, who tiptoe gingerly every morning into the State Key Laboratory of Explosion Science and Technology in Beijing put it:
"The larger the number of directly linked nitrogen atoms, the more difficult the compound is to synthesize. The difficulties in synthesizing and handling polynitrogen compounds are a direct consequence of their high endothermicities; a further complication is the almost complete absence of methodology for preparing such compounds."
Every word of that is true, and doesn't it just sound appealing? Here, go invent some completely new chemistry in order to make some compounds that are just trembling with the desire to explode. And if the reactions don't work? No prob: all the side products will probably be horribly explosive, too. Good luck determining just which one of them it was that demolished your hood!
But hold on. At first glance, this structure is terrifically unappealing, unless your chemical sensibilities are bent the right way, in which case, there's not much hope for you. The beast has eight nitrogens in a row, which I believe ties the current record. What's startling about the compound is that it's weirdly stable: it doesn't decompose until nearly 194 degrees C, which is quite bizarre. You'd think, by looking at it, that it would hop up and do its big death scene at about one-tenth that temperature. I mean, I've made potential drug candidates that fell apart at lower temperatures than that. (The amount of electron delocalization this compound has probably keeps its personality from coming through).
The other odd thing about this one is that it changes color on exposure to light. That central double bond will flip around to cis instead of trans, which changes the color of the crystals from yellow to blue. (I remember making a photochromic compound of this sort in an undergrad experiment, which I believe was some sort of Chichibabin pyridine thingie; it sure as heck wasn't this!) Exposing this sort of structure to UV light isn't the first thing I'd want to do, either - the fact that it'll reversibly go through a transition like that also points to its mild, friendly nature.
But heck, we can fix that: hang some nitro groups off of it, guys! Put some more nitrogens in those rings! Go for the record! Surely the State Key Laboratory of Explosion Science can make some compounds that, you know, explode. Look, guys, I've had Chinese colleagues that seemed to have no problem making things that blew up. (To be sure, I've known people from a number of different backgrounds who had that talent; it springs up everywhere) So I know that you can do it.
I can't even decide whether to put this in the "Things I Won't Work With" category at all, since it looks like I could not only work with it, but beat on it with a ball-peen hammer. What kind of polyaza compounds are people turning out these days, anyway?
Category: Things I Won't Work With
Update: here's a trip report on this conference over at Practical Fragments
I'm back from Philadelphia and the FBLD conference. I'm not going to put a trip report up on the blog - although I'm certainly writing one up for my colleagues at work - but a number of people at the meeting asked me what I might say about it here.
Well, I enjoyed it. I tend to like more focused conferences like this one, anyway, where most of the people doing the best work in a field can attend what's still a fairly small meeting. It probably helps that this isn't a very old series of meetings, too. Over time, some sort of scientific entropy sets in, and the topics covered can begin to smear out a bit. Some of the longer-running Gordon Conferences are (to me, anyway) a little blurry about what they're trying to cover.
That same tendency can affect individual talks. We medicinal chemists are particularly guilty of that, since our discipline spreads over a pretty wide area. At a meeting like this one, which was all about fragment-based techniques, people had to resist the temptation to keep going past the fragment-based parts of their talk. Once you get up toward 400 molecular weight, you're not talking about fragments any more - you're doing good old medicinal chemistry. Maybe it's structure-based at that point, maybe not, and maybe you're using some of the biophysical techniques that help out with moving fragment leads forward - but the fragment techniques are what got you to that point, not what's carrying you forward through the concerns about PK, formulations, polymorphs, and all the other later-stage worries of a drug program.
The speakers at this meeting generally did a good job avoiding this pitfall, but I have to admit that the few times I saw PK data come up on the screen, I stopped taking notes. I didn't stop listening, on the chance that there might be something interesting, but it certainly wasn't what I was there to focus on. One could imagine a whole meeting about solving PK problems in drug development - there probably is one, actually. But at that one, you'd have to make sure that the speakers didn't spend time telling you about the neat fragment-based techniques that led to their drug candidate.
As I said, though, there were a lot of interesting speakers at this one, and not a single talk was anything close to a complete waste of time. How many meetings can you say that about? Things ran smoothly, and with notably better food than some of the other conferences I've attended. Some meetings just pitch a bunch of Wissenschaftlerfutter (scientist chow) out onto the tables, figuring that people will deal with it - and to be honest, they're usually right. We'll eat most anything in this field, although I've been told that physicists are even less discriminating, so at least we have that.
Category: Life in the Drug Labs | The Scientific Literature
October 13, 2010
For those who haven't seen it, I heard Adam Renslo of UCSF present this work yesterday. His group was looking for inhibitors of cruzain, a target for Chagas' disease, which is certainly a worthy cause (and a tough target). They found a series of oxadiazoles, which are, to be sure, rather ugly (but no uglier than a lot of chemical matter you see in the kinase field, among others). They had affinity, they had reasonable SAR, and the team drove the potencies down against the target. . .only to find, late in the game, that it was all an illusion.
These compounds are aggregators. (That link takes you to a post about a follow-up paper from the UCSF folks, covering this debacle and others). What's striking is that this artifact (compound aggregation under the assay conditions) mimicked a plausible SAR - it wasn't just some random thing that made the numbers hard to interpret. No, it looks like Renslo and his team ended up optimizing for aggregation. As he put it in his presentation, "You'll find what you're looking for".
His other quote at the end of the talk was "Small molecules are much stranger than we've been led to believe", and I can't argue with that one either. Before anyone makes a comment about how his group should have checked their assay more thoroughly, or how they shouldn't have been trying to push such an unpleasant-looking series of compounds anyway - in general, about how this wouldn't have happened to you - pause for a moment, and be honest. Renslo was, in this paper, and I thank him for it.
Category: Drug Assays
In response to a reader query in the comments to yesterday's post on scenic research sites, I guess we should explore the other end of the scale. Nominations for the ugliest/most depressing research site are now open. This is physical surroundings, folks, not mental atmosphere, not that that can't get oppressive at times. We're looking for things that can be captured by a camera. There can be a connection, though - as Kingsley Amis put it ("Aberdarcy, Main Square"):
The journal of some bunch of architects
Named this the worst town center they could find
But how disparage what so well reflects
Permanent tendencies of heart and mind?
Looking back, Schering-Plough's old Bloomfield site was not exactly a sweeping vista of loveliness, but (to be fair) it did look better than some of the rest of the neighborhood, and the Home Depot and parking lot that replaced it during the 1990s have probably never made anyone's heart leap, either. Sticking with the N. New Jersey sites, some of which are going to be strong contenders in this category, it's unlikely that either Merck's buildings in Rahway or Roche's in Nutley have inspired much lyric poetry. Other nominations?
Note: in the spirit of that Amis reference, those who find themselves affected by nasty industrial landscapes might want to cheer along with John Betjeman's "Slough".
Category: Drug Industry History
October 12, 2010
I was talking with some folks about this just last night - looks like Exelixis has rounded up some more money by signing a revised deal with BMS. They've been having a rough time out there recently, so I'm glad that there's a lifeline available. More on this as things become clearer. . .
Category: Business and Markets
I'm sitting at my conference, listening to a guy from Emerald Biostructures, the former deCODE. They're in a site out on Bainbridge Island near Seattle - I've talked with several people from out there, and they all talk about riding the ferry out in the morning, etc. Now, Cambridge is OK, but it ain't Bainbridge Island as far as scenery goes. (However, as someone who used to live and work in northern NJ, I have to be happy with what I have!)
So here's my question: what's the most scenic, envy-inducing location for a biopharma research site? For these purposes, we'll rank by natural beauty - if there's some biotech that's leasing the top floors of the Chrysler Building, and I sure don't think that there is, we'll take them up as a separate category. Nominations?
Category: Drug Industry History
One of the speakers here yesterday recommended Walter Sneader's Drug Discovery: A History, which I haven't read. It looks good, though, for a look back on how we got here. He also showed some drug structure "family trees" from Sneader's earlier book, Drug Prototypes and Their Exploitation. I haven't seen a copy of that one in quite a while, and no wonder: the only copy shown on Amazon is used, for $500. Sheesh.
Category: Book Recommendations | Drug Industry History
October 11, 2010
So I believe that they're moving into the new chemistry building at Princeton, which is a mighty glass whopper. In light of some of the past discussions we've had around here about lab design, I'd be interested in hearing from anyone with personal experience of the building. I can't really get a good sense of the layout from the pictures I've seen, just that there sure seem to be a lot of glass walls. And those aren't necessarily bad; it's the way the labs are put together and their relationship to the desks and offices.
Interestingly, much of the money for its construction seems to have come from the university's royalties on Alimta (pemetrexed), an antifolate anticancer drug discovered by Ted Taylor's group there in the early 1990s and developed by Lilly. (Taylor, a heterocyclic chemistry legend, worked on antifolates for many, many years, and contributed a huge amount to the field).
Here's more on the building, and here are some photos, and here are some architectural renderings, for what those are worth. Any comments from folks on the ground?
Category: Cancer | Chemical News | Drug Industry History
You know, on reflection, one of the things that probably has me feeling strange about being in Philadelphia for this conference is that it was here that I attended my first ACS national meeting. That was August of 1984, when I was just about to start my second year in graduate school. For all I know, I attended a session in this same Sheraton. All these hotel ballrooms look pretty much the same.
Twenty-six years ago! If I sit here and try to figure out how that happened, I won't have time to take any notes here in 2010. There were slide projectors pointed at the screens back then, not LCDs, and there sure weren't any laptops to be seen. But the rows of chairs under the gaudy chandeliers, those you could superimpose on 1984 with no change at all.
Category: Chemical News
I'm out of the lab for the next few days. It's Conference Time once again, and I'm in Philadelphia for the Fragment-Based Lead Discovery meeting. Last year this one was in England, but did I go? Nooo, I waited until it was in Philly. No offense to the city's residents who read the blog, but even its partisans would have to admit that it's not an exotic destination, particularly for someone who's lived for eight years in New Jersey like I have. Anyway, any readers of the blog who are also attending, please feel free to track me down. Bernard Munos told me last week that I look just like my picture on the site, which can't quite be true, since that's getting to be an old shot, but it's apparently a reasonable guide.
I won't be live-blogging any sessions here, although I may well mention particularly interesting things as they come up. Not everyone's into fragments, for one thing, and a three-day diet of them might be a bit much. And I'm going to be busy taking notes of my own, which will necessarily be skewed by my own proprietary perspective. To be honest, seeing a blow-by-blow account of what I find interesting and what I find old hat would give away too much about what my company's up to.
But I will be blogging on other topics during the meeting, thanks to the wireless in the conference room. I take notes on the laptop, anyway, since I type much more quickly (and legibly) than I write. I've got a pen handy if I have to scrawl down a structure, but otherwise, the notes are just going into a text window. Now, that does mean that I'm going to need to find an electrical outlet somewhere in this room this afternoon. . .
Category: Chemical News
October 8, 2010
You'll recall that we recently had the flap over two GSK/Sirtris executives running their own sideline business selling resveratrol as a dietary supplement. There's a lot of it out there, understandably, since the publicity around the compound has been intense for several years now. But even if it works, how likely is it that a person could take enough of it to show an effect?
A new paper goes back to the C. elegans nematode model to try to answer that question. The original life-extending results in this organism were done at 100 micromolar concentration, which is way more than any human being is going to be exposed to. Unless you're showering in the stuff, I suppose. The current study dials that back to levels that could be reached in human dosing.
What they saw was no effect on lifespan at 0.5 micromolar, which would be a realistic blood level for humans. When they turned up the concentration to 5 micromolar, there was a slight but apparently real effect of just under 4%. Now, 5 micromolar is a pretty heroic level of resveratrol - I think you could hit that as a peak concentration, but surely not hold it. The medicinal chemists in the audience will appreciate that some drug effects are driven by their Cmax, and others by their AUC, but either way, resveratrol looks likely to fall short.
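For anyone outside the PK world, the Cmax-versus-AUC distinction is easy to see in a standard one-compartment oral-dose model. This is a generic textbook sketch, not resveratrol data; every number below (dose, bioavailability, volume, rate constants) is invented for illustration:

```python
import math

def concentration(t, dose=100.0, F=0.5, V=40.0, ka=1.5, ke=0.2):
    """Plasma concentration (mg/L) at time t (hours) after a single
    oral dose, in a one-compartment model with first-order absorption.
    All parameter values are made up for this demo."""
    return (F * dose * ka) / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

# Sample the curve every 0.1 h out to 48 h.
times = [i * 0.1 for i in range(481)]
curve = [concentration(t) for t in times]

# Cmax is the peak of the curve; AUC is the area under it
# (trapezoidal rule here), a measure of total exposure.
cmax = max(curve)
auc = sum(0.1 * (a + b) / 2 for a, b in zip(curve, curve[1:]))

print(round(cmax, 2), round(auc, 1))
```

Two dosing regimens can share the same AUC while one has a much higher Cmax (a big bolus) and the other a flat profile (slow infusion), which is why it matters which of the two is driving the pharmacology.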
Oh, and there's another interesting part to this paper. The authors also looked at SRT1720, the resveratrol follow-up from Sirtris that has been the subject of all kinds of arguing in the recent literature. This compound is supposed to be several hundred times more potent than resveratrol itself at SIRT1, although if you've been following the story, you'll know that those numbers are widely believed to be artifacts of the assay conditions. And sure enough, the authors saw no effect on C. elegans lifespan when dosing with physiological concentrations of SRT1720. The authors finish, dryly, with:
Given the above-mentioned and conflicting findings for the efficiency of SRT1720 and the metabolic state in rodents, it is interesting to note that, as shown here, SRT1720 exerts no detectable effects on lifespan of an established model for the analysis of longevity. . .
Indeed it is. Given the recent follow-up work in this area, I can't say I'm surprised, but I am disappointed. And yes, in case anyone's wondering, I do actually hope that the Sirtris work (and other research on sirtuin compounds) leads to something good. It's just that the story is a lot messier than anyone would have liked, so far. All I have to do is look back on what I wrote just four years ago, and wonder if it really had to be this way. Did it?
Category: Aging and Lifespan
October 7, 2010
Here's a look at the layoff numbers in the drug industry, month by month so far this year. September's numbers jumped up, unfortunately, although the whole industry is not shedding jobs at the rate it was earlier this year. We're also behind last year's count, on a year-to-date basis.
Of course, slowing layoffs (if they are) is one thing. When's the last month that the pharma industry actually added head count? We've had a few months this year with very low layoff totals - did any of those go into positive territory overall, or do we have to go further back? I fear the latter, but I don't have the numbers.
Category: Business and Markets
Nature has a good report and accompanying editorial on garage biotechnology, which I wrote about earlier this year.
. . .Would-be 'biohackers' around the world are setting up labs in their garages, closets and kitchens — from professional scientists keeping a side project at home to individuals who have never used a pipette before. They buy used lab equipment online, convert webcams into US$10 microscopes and incubate tubes of genetically engineered Escherichia coli in their armpits. (It's cheaper than shelling out $100 or more on a 37 °C incubator.) Some share protocols and ideas in open forums. Others prefer to keep their labs under wraps, concerned that authorities will take one look at the gear in their garages and label them as bioterrorists.
For now, most members of the do-it-yourself, or DIY, biology community are hobbyists, rigging up cheap equipment and tackling projects that — although not exactly pushing the boundaries of molecular biology — are creative proof of the hacker principle. . .
The article is correct when it says that a lot of what's been written about the subject is hype. But not all of it is. I continue to think that as equipment becomes cheaper and more capable, which is happening constantly, more and more areas of research will move into the "garage-capable" category. Biology is suited to this sort of thing, because there are such huge swaths of it that aren't well understood, and there are always more experiments to be set up than anyone can run.
And it's encouraging to see that the FBI isn't coming down hard on these people, but rather trying to stay in touch with them and learn about the field. Considering where and how some of the largest tech companies in the US started out, I would not want to discourage curious and motivated people from exploring new technologies on their own - just the opposite. Scientific research is most definitely not a members-only club; anyone who thinks that they have an interesting idea should come on down. So while I do worry about the occasional maniac misanthrope, I think I'm willing to take the chance. And besides, the only way we're going to be able to deal with the lunatics is through better technology of our own.
Category: Biological News | Who Discovers and Why
October 6, 2010
So, a chemistry Nobel that's just pure chemistry from top to bottom. I'll be darned! This is one that most chemists had on the list of "Worth a prize, but who knows if they'll ever get around to it". (If you check my archives, and those of the other chem-bloggers, you'll see palladium couplings mentioned every time).
One of the sticking points has been who to put on the prize, what with the three-name limit and all. Were Stille alive, he might well be on there instead of Negishi, but that just highlights the trickiness of this area. There are plenty of other people, starting, most likely, with Sonogashira, who have made major contributions in this area. I notice that some people are wondering about Buchwald and Hartwig et al., but that (to me) is a separate issue. This is a prize for carbon-carbon bond formation; carbon-nitrogen can wait its turn.
But as a chemistry prize, I think everyone can agree that palladium-catalyzed C-C bond formation is worthy. Such reactions are the single biggest change to the practice of synthesis since my grad school days. In the mid-1980s, palladium reactions were looked on as being a bit weird, and I hardly knew anyone who'd run one. I didn't have occasion to, myself, until something like 1992. By that time these reactions were well on their way to conquering the world. It's gotten to the point now where some industrial drug discovery organizations have jokingly considered banning the things for a period. They're so useful that the sorts of structures that are easy to make through them tend to get over-represented in drug screening files.
For non-chemists, the reason these things are so well used is that carbon-carbon bonds are both the backbone of organic molecules, and a pain in the rear to make and break. They're pretty solid, but not so solid that they can't be worked with under special conditions, which is why they're so useful for both living systems and for synthetic chemists. A carbon framework is like solid steel construction: very durable and hard to destroy, but if you know how to weld or rivet you can make one yourself. These palladium reactions are the equivalent of riveting; using them, we can stick whole carbon units together as if we were using power tools.
So in honor of today's prize, folks, go run yourself a Heck, Suzuki, or Negishi coupling. They'll probably work; they generally do.
+ TrackBacks (0) | Category: Chemical News
I mentioned directed evolution of enzymes the other day as an example of chemical biology that's really having an industrial impact. A recent paper in Science from groups at Merck and Codexis highlights this nicely. The story they tell had been presented at conferences, and had impressed plenty of listeners, so it's good to have it all in print.
It centers on a reaction that’s used to produce the diabetes therapy Januvia (sitagliptin). There’s a key chiral amine in the molecule, which had been produced by asymmetric hydrogenation of an enamine. On scale, though, that’s not such a great reaction. Hydrogenation itself isn’t the biggest problem, although if you could ditch a pressurized hydrogen step for something that can’t explode, that would be a plus. No, the real problem was that the selectivity wasn’t quite what it should be, and the downstream material was contaminated with traces of rhodium from the catalyst.
So they looked at using a transaminase enzyme instead. That's a good idea, because transaminases are one of those enzyme classes that do something we organic chemists generally can't do very well – in this case, turn a ketone into a chiral amine in one step. (The enzyme takes another amine and oxidizes it on the other side of the reaction). We've got chiral reductions of imines and enamines, true, but those almost always need a lot of fiddling around with catalysts and conditions (and, as in this case, can cause their own problems even when they work). And going straight to a primary amine is, in any case, one of the more difficult transformations. Ammonia itself isn't too reactive, and you don't have much of a steric handle to work with.
But transaminases have their idiosyncrasies (all enzymes do). They will generally accept only methyl ketones as substrates, and that's what these folks found when they screened all the commercially available enzymes. Looking over the structure (well, a homology model of the structure) of one of these (ATA-117), which would be expected to give the right stereochemistry if it could be made to give anything whatsoever, gave some clues. There's a large binding pocket on one side of the ketone, which still wasn't quite large enough for the sitagliptin intermediate, and a small site on the other side, which definitely wasn't going to take much more than a methyl group.
They went after the large binding pocket first. A less bulky version of the desired substrate (which had been turned, for now, into a methyl ketone) showed only 4% conversion with the starting enzymes. Mutating the various amino acids that looked important for large-pocket binding gave some hope. Changing a serine to a proline, for example, cranked up the activity 11-fold. The other four positions were, as the paper said, "subjected to saturation mutagenesis", and they also produced a combinatorial library of 216 multi-mutant variants.
Therein lies a tale. Think about the numbers here: according to the supplementary material for the paper, they varied twelve residues in the large binding pocket, with (say) twenty amino acid possibilities per position. So you've got 240 single-mutant enzyme variants to make and test. Not fun, but doable if you really want to. But if you're going to cover all the multi-mutant space, that's twenty to the 12th power, or over four quadrillion enzyme candidates. That's not going to happen with any technology that I can easily picture right now. And you're going to want to sample this space, because enzyme amino acid residues most certainly do affect each other. Note, too, that we haven't even discussed the small pocket, which is going to have to be mutated as well.
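The combinatorics above are easy to check for yourself. Here's a quick back-of-the-envelope calculation, using the round numbers from the paper (twelve residues, twenty amino acid choices at each):

```python
# Back-of-the-envelope check of the mutational search space described above.
# Round-number assumptions: 12 large-pocket residues, 20 possible amino
# acids at each position.

N_RESIDUES = 12
N_AMINO_ACIDS = 20

# Single-point mutants: one position varied at a time.
single_mutants = N_RESIDUES * N_AMINO_ACIDS   # 240 variants - tedious but doable

# Full multi-mutant space: every combination at every position.
full_space = N_AMINO_ACIDS ** N_RESIDUES      # 20^12

print(single_mutants)        # 240
print(f"{full_space:.3e}")   # 4.096e+15, i.e. "over four quadrillion"
```

Which is why nobody tries to enumerate the multi-mutant space; the whole game is deciding which tiny corner of it to sample.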
So there's got to be some way to cut this problem down to size, and that (to my mind) is one of the things that Codexis is selling. The single-point-mutation experiments, for example, didn't give a darn thing against the real substrate. But one member of a library of 216 multi-mutant enzymes showed the first activity toward the actual sitagliptin ketone precursor. This one had three changes in the small pocket and that one P-for-S in the large, and identifying where to start looking for these is truly the hard part. It appears to have been done by first ruling out the things that were least likely to work at any given residue, followed by an awful lot of computational docking.
It’s not like they had the Wonder Enzyme just yet, although just getting anything to happen at all must have been quite a reason to celebrate. If you loaded two grams/liter of ketone, and put in enzyme at 10 grams/liter (yep, ten grams per liter, holy cow), you got a whopping 0.7% conversion in 24 hours. But as tiny as that is, it’s a huge step up from flat zero.
Next up was a program of several rounds of directed evolution. All the variants that had shown something useful were taken through a round of changes at other residues, and the best of those combinations were taken on further. That statement, while true, gives you no feel at all for what this work is actually like. There are passages like this in the experimental details:
At this point in evolution, numerous library strategies were employed and as beneficial mutations were identified they were added into combinatorial libraries. The entire binding pocket was subjected to saturation mutagenesis in round 3. At position 69, mutations T, A, S and C were improved over G. This is interesting in two aspects. First, V69A was an option in the small pocket combinatorial library, but was less beneficial than V69G. Second, G69T was improved (and found to be the most beneficial in the next
round) suggesting that something other than sterics is involved at this position as it was a Val in the starting enzyme. At position 137, Thr was found to be preferred over Ile. Random mutagenesis generated two of the mutations in the round 3 variant: S8P and G215C. S8P was shown to increase expression and G215C is a surface exposed mutation which may be important for stability. Mutations identified from homologous enzymes identified M94I in the dimer interface as a beneficial mutation. In subsequent rounds of evolution the same library strategies were repeated and expanded. Saturation mutagenesis of the secondary sphere identified L61Y, also at the dimer interface, as being beneficial. The repeated saturation mutagenesis of 136 and 137 identified Y136F and T137E as being improved.
There, that wasn't so easy, was it? This should give you some idea of what it's like to engineer an enzyme, and what it's like to go up against a billion years of random mutation. And that's just the beginning – they ended up doing ten rounds of mutations, and had to backtrack along the way when some things that looked good turned out to dead-end later on. Changes were taken on to further rounds not only on the basis of increased turnover, but for improved temperature and pH stability, tolerance to DMSO co-solvent, and so on. They ended up, over the entire process, screening a total of 36,480 variants, which is a hell of a lot, but is absolutely infinitesimal compared to the total number of possibilities. Narrowing that down to something feasible is, as I say, what Codexis is selling here.
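For readers not used to the shorthand in that quoted passage: a mutation like V69A names the original residue (one-letter amino acid code), its position in the sequence, and the replacement, so V69A means the valine at position 69 was changed to alanine. A minimal, purely illustrative parser for the notation:

```python
import re

# Standard point-mutation shorthand: original amino acid (one-letter code),
# sequence position, replacement amino acid. Illustrative helper only.
MUTATION_RE = re.compile(r"^([A-Z])(\d+)([A-Z])$")

def parse_mutation(s: str) -> tuple[str, int, str]:
    """Split e.g. 'V69A' into ('V', 69, 'A'): Val at position 69 -> Ala."""
    m = MUTATION_RE.match(s)
    if not m:
        raise ValueError(f"not a point mutation: {s!r}")
    orig, pos, new = m.groups()
    return orig, int(pos), new

# A few of the mutations mentioned in the excerpt above:
for mut in ["V69A", "S8P", "G215C", "Y136F", "T137E"]:
    print(parse_mutation(mut))
```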
And what came out the other end? Well, recall that the known enzymes all had zero activity on this substrate, so it's hard to calculate an improvement from that baseline. Comparing to the first mutant that showed anything at all, they ended up with something about 27,000 times better. The final enzyme has 27 mutations from the original known enzyme, so it's a rather different beast. It runs in DMSO/water, at loadings of up to 250 g/liter of starting material with 3 weight per cent enzyme, and turns isopropylamine into acetone while it's converting the prositagliptin ketone to product. It is completely stereoselective (they've never seen the other amine), and needless to say it involves no hydrogen tanks and furnishes material that is not laced with rhodium metal.
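To put those loadings in perspective, here's a rough enzyme-economy comparison between the first active variant and the final evolved one, using the figures quoted above. One assumption is mine, not the post's: that "3 weight per cent" means enzyme mass relative to substrate mass.

```python
# Rough comparison of enzyme economy, first active variant vs. final enzyme.
# Assumption (mine): "3 weight per cent" is enzyme mass relative to
# substrate mass.

# First active variant: 10 g/L enzyme on 2 g/L ketone (0.7% conversion/24 h)
initial_g_enzyme_per_g_substrate = 10 / 2   # 5.0 g enzyme per g substrate

# Final variant: 250 g/L substrate at 3 wt% enzyme (7.5 g/L enzyme)
final_g_enzyme_per_g_substrate = 0.03

improvement = initial_g_enzyme_per_g_substrate / final_g_enzyme_per_g_substrate
print(round(improvement))   # ~167-fold less enzyme per gram of substrate
```

And that's before you even count the jump from 0.7% conversion to a complete, stereoselective reaction.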
This is impressive stuff. You'll note, though, the rather large amount of grunt work that had to go into it. Keep in mind that the potential amount of grunt work would be more than the output of the entire human race. To date. Just for laughs, an exhaustive mutational analysis of twenty-seven positions would give you 1.3 times ten to the thirty-fifth possibilities to screen, and that's if you already know which twenty-seven positions you're going to want to look at. One microgram of each of them would add up to the mass of about twenty Earths, not counting the vials. Not happening.
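For the skeptical, a quick sanity check on that arithmetic, assuming twenty amino acid choices at each of the twenty-seven positions, one microgram of protein per variant, and an Earth mass of about 5.97 × 10^24 kg:

```python
# Sanity check on the exhaustive-mutagenesis arithmetic above.
# Assumptions: 20 amino acid choices at each of 27 positions, 1 microgram
# of protein per variant, Earth mass ~5.97e24 kg.

variants = 20 ** 27            # exhaustive mutational space
total_kg = variants * 1e-9     # 1 microgram = 1e-9 kg
earth_masses = total_kg / 5.97e24

print(f"{variants:.2e}")   # 1.34e+35 candidate enzymes
print(round(earth_masses)) # on the order of twenty Earth masses of protein
```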
Also note that this is the sort of thing that would only be done industrially, in an applied research project. Think about it: why else would anyone go to this amount of trouble? The principle would have been proven a lot earlier in the process, and the improvements even part of the way through still would have been startling enough to get your work published in any journal in the world and all your grants renewed. Academically, you'd have to be out of your mind to carry things to this extreme. But Merck needs to make sitagliptin, and needs a better way to do that, and is willing to pay a lot of money to accomplish that goal. This is the kind of research that can get done in this industry. More of this, please!
+ TrackBacks (0) | Category: Biological News | Chemical Biology | Chemical News | Drug Development
October 5, 2010
Here's an interesting example of a way that synthetic chemistry is creeping into the provinces of molecular biology. There have been a lot of interesting efforts over the years built around the idea of polymers made to recognize other molecules. These appear in the literature as "molecularly imprinted polymers", among other names, and have found some uses, although the field is still something of a black art. A group at Cal-Irvine has produced something that might move it forward significantly, though.
In 2008, they reported that they'd made polymer particles that recognized the bee-sting protein melittin. Several combinations of monomers were looked at, and the best seemed to be a crosslinked copolymer with both acrylic acid and an N-alkylacrylamide (giving you both polar and hydrophobic possibilities). But despite some good binding behavior, there are limits to what these polymers can do. They seem to be selective for melittin, but they can't pull it out of straight water, which is a pretty stringent test. (If you can compete with the hydrogen-bonding network of bulk water that's holding the hydrophilic parts of your target, as opposed to relying on just the hydrophobic interactions with the other parts, you've got something impressive).
Another problem, which is shared by all polymer-recognition ideas, is that the materials you produce aren't very well defined. You're polymerizing a load of monomers in the presence of your target molecule, and they can (and will) link up in all sorts of ways. So there are plenty of different binding sites on the particles that get produced, with all sorts of affinities. How do you sort things out?
Now the Irvine group has extended their idea, and found some clever ways around these problems. The first is to use good old affinity chromatography to clean up the mixed pile of polymer nanoparticles that you get at first. Immobilizing melittin onto agarose beads and running the nanoparticles over them washes out the ones with lousy affinity - they don't hold up on the column. (Still, they had to do this under fairly high-salt conditions, since trying this in plain water didn't allow much of anything to stick at all). Washing the column at this point with plain water releases a load of particles that do a noticeably better job of recognizing melittin in buffer solutions.
The key part is coming up, though. The polymer particles they've made show a temperature-dependent change in structure. At RT, they're collapsed polymer bundles, but in the cold, they tend to open up and swell with solvent. As it happens, that process makes them lose their melittin-recognizing abilities. Incubating the bound nanoparticles in ice-cold water seems to only release the ones that were using their specific melittin-binding sites (as opposed to more nonspecific interactions with the agarose and the like). The particles eluted in the cold turned out to be the best of all: they show single-digit nanomolar affinity even in water! They're only a few per cent of the total, but they're the elite.
Now several questions arise: how general is this technique? That is, is melittin an outlier as a peptide, with structural features that make it easy to recognize? If it's general, then how small can a recognition target be? After all, enzymes and receptors can do well with ridiculously small molecules: can we approach that? It could be that it can't be done with such a simple polymer system - but if more complex ones can also be run through such temperature-transition purification cycles, then all sorts of things might be realized. More questions: What if you do the initial polymerization in weird solvents or mixtures? Can you make receptor-blocking "caps" out of these things if you use overexpressed membranes as the templates? If you can get the particles to the right size, what would happen to them in vivo? There are a lot of possibilities. . .
+ TrackBacks (0) | Category: Analytical Chemistry | Chemical Biology | Chemical News | Drug Assays
October 4, 2010
Well, this doesn't look encouraging. As part of its restructuring after buying Schering-Plough, Merck announced some time back that it's shedding the former Organon sites in Newhouse in Scotland and Schaijk in the Netherlands.
How's that going? Well, a correspondent forwarded me an unsolicited email he just received from "Partner International", the company hired by Merck to help divest these sites. And apparently Partner's strategy includes. . .spamming people with a one-page brochure touting this "Time Sensitive Acquisition Opportunity" for these "world class research opportunities".
My correspondent, regrettably, finds himself a bit short this month and unable to purchase either of these research sites. Perhaps someone else will idly browse their inbox and take Partner International up on this time-sensitive offer. If the message gets through the spam filter, that is.
+ TrackBacks (0) | Category: Business and Markets
As of this morning. It looks like they were getting nowhere with Genzyme's board, so they're taking their same $69/share offer directly to the shareholders.
I'm not sure if that's going to be enough for them, but I presume that Sanofi-Aventis has already sounded out some of the institutional investors before going ahead. This isn't one of those questions you ask unless you're reasonably sure of the answer. But hostile bids do fail (or get their terms sweetened along the way). We've got until midnight, December 10, which is a long enough window for a lot of things to happen. Plenty of time to get some popcorn and find a good seat. . .
+ TrackBacks (0) | Category: Business and Markets
October 1, 2010
Now here's a disturbing case: research sabotage. It involves a (former) postdoc at Michigan:
(Vipul) Bhrigu, over the course of several months at Michigan, had meticulously and systematically sabotaged the work of Heather Ames, a graduate student in his lab, by tampering with her experiments and poisoning her cell-culture media. Captured on hidden camera, Bhrigu confessed to university police in April and pleaded guilty to malicious destruction of personal property, a misdemeanour that apparently usually involves cars: in the spaces for make and model on the police report, the arresting officer wrote "lab research" and "cells". Bhrigu has said on multiple occasions that he was compelled by "internal pressure" and had hoped to slow down Ames's work.
The student's account of what happened (later in that linked article) is creepy and compelling. Things started going wrong with her experiments, one after the other. At first she couldn't figure out what was happening, then she suspected her own mistakes, but ultimately (like the man who furnished the title for this post) she suspected sabotage.
What tipped her off, apparently, was that the same sorts of things went wrong over and over - and when one was fixed, something else would appear. Lanes looked switched on her Western blots, which turned out to be because the labels had been switched on her cell cultures. That happened a few times; then, when she switched to a labeling system that couldn't be messed with, contaminants started showing up in her media. Running experiments in another lab late at night showed that they worked the way they should - when something (someone) wasn't messing with them.
It would certainly take a while for sabotage to become a working hypothesis - after all, there are a lot of ways for things not to work. But this case seems to have been helped along by the crudity of the tampering. Even so, there was suspicion that the grad student herself was trying to blame someone else for her own failures. The university's public-safety officers put her through interrogations before they would go as far as installing cameras in her lab. But those cameras caught the post-doc messing around in the lab fridge in the hours before yet another experiment went awry, and he confessed when confronted.
How often does stuff like this go on? To be honest, I'm surprised that there isn't more of it in academic labs. The competition between individuals is much more fierce than it is in industry (where people tend to work much more in large teams), and frankly, there are more unstable personalities in academia than there are in industry as well. At the same time, this is a thoroughly nasty thing to do, striking right at the basic workings of any research lab. You have to be able to reproduce things, of course, and you have to trust that the reagents and equipment are going to allow you to do that.
Most of all, in science we have to take the word of others on trust pretty often. Experiments are always out there to be reproduced, but you really can't do that to everything, every time. It's just impossible. When someone says that they got a particular reaction to work, or protein to express, the default setting is to assume that yeah, they probably did. If you have to reproduce it and there's trouble, well, then you start checking things out step by step. But there's no way science could work if you automatically assumed that everything in the literature or in every presentation was probably a lie.
And there's no way it can work if someone's going to sabotage experiments, either. I've been around two or three situations in my career where there was a suspicion of this happening. In most of those cases I'm pretty sure that sabotage wasn't the explanation, but in one (the one I was most removed from) I still don't know. That one got me thinking, though, about how terribly easy it would be to do such things. As I said, this case was pretty crude, but there are many, many more subtle ways of messing things up. Some of them would be quite hard to detect, but would definitely indicate foul play if they were found, and some would remain obscure even if tracked down. Truly excellent sabotage, though, would require as much work as generating real results.
I'm not going into details - any scientist with sufficient imagination can think of such things (homo sum, humani nil a me alienum puto). But it is interesting, when you do that thought exercise, how strange it feels. You can see how it would be done, you can see what would motivate someone to do it, but it's something of a relief to find out how little thought you've given to the mechanics of it all.
+ TrackBacks (0) | Category: The Dark Side