About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship on his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
May 30, 2008
Well, while the mail continues to come in about my post yesterday, I’m going to pull back from the global perspective and zoom back into the glassware drawers of my lab bench today. A while back I wrote about the different sizes of ground glass joints that organic chemists typically use. People from outside the field are sometimes struck by the fact that we don’t have to do as much glassblowing and the like as they might have thought. Decades ago there was a lot more, but for a long time now we’ve been able to build up all sorts of apparatus (apparati?) by connecting standardized glass fittings together.
This has all sorts of advantages, letting us assemble odd custom configurations pretty easily, and change them without too much work. The downside is that the ground glass joints aren’t by themselves vacuum tight – not by the standards of inorganic chemists, for sure – and need to be anointed with thick, nasty vacuum grease before they can be trusted to that level. And if you don’t grease them for normal work, which we tend not to because the grease gets into your compounds, then the joints tend to freeze if left too long or too tight.
There are all sorts of voodoo tricks for unsticking them. I pride myself on being able to do it, but (objectively) I don’t think my success rate is all that much greater than the norm. For the record, my technique is to put a few drops of silicone bath oil up around the edge of the stuck connection and let it soak in for a few hours. Then I rapidly heat the outside joint, grab it with a towel, and do the usual pulling and tapping while hoping for the best. There are better ways, but they're typically found only in a glassblowing shop.
When I last wrote about this fascinating subject (hey, chemists like their glassware), I mentioned that I’d gotten in the habit of using 29/42 size joints. (That’s a measure of size: the first number is the joint's widest diameter in millimeters, and the second is the length of the ground taper.) That’s a larger one than is common in American labs; you see it more in Germany, among other places. I’m so used to it now that the standard 24/40 glass joints you see all over the place look narrow and shrunken to me – will I really be able to get my product out of that?
The standard small size these days is 14/20 – that’s the size of all our 5, 10, and 25 milliliter flasks. (You can get 100 mL flasks (or larger) with that size joint, too, but they start to look disproportionate and weird, and there’s no real reason for large flasks to have such a small neck). In between that and good ol’ 24/40, though, is the 19/22 size, which I really should look at again. It would be the wide-mouth counterpart to 14/20, in the same way that 29/42 is to 24/40. I’d probably like it.
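Since these joint designations keep coming up, here's a quick sketch of how to read them. The numbers are the standard nominal dimensions in millimeters; the little lookup table and helper function are just my illustration, not anything official:

```python
# Standard-taper joint designations mentioned above, read as "diameter/length"
# in millimeters: the first number is the widest diameter of the ground zone,
# the second is its length.
JOINTS = {
    "14/20": (14, 20),   # the standard small size: 5-25 mL flasks
    "19/22": (19, 22),   # the medium-length "student kit" joint
    "19/38": (19, 38),   # full-length 19 mm joint: too long for a kit drawer
    "24/40": (24, 40),   # the common American lab standard
    "29/42": (29, 42),   # the wider joint, more common in Germany
}

def describe(designation: str) -> str:
    diameter, length = JOINTS[designation]
    return f"{designation}: {diameter} mm wide at the top, {length} mm long"

for name in JOINTS:
    print(describe(name))
```

Laid out this way, the Kontes story below makes immediate sense: 19/22 is just the 19 mm diameter with the ground zone shortened from 38 mm to 22 mm.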
But I’ve hardly seen a flask of that size since I was an undergraduate, and that whole range of glassware immediately recalls sophomore organic chemistry labs. I wondered why that was, but now I have the story thanks to reader Norm Neill of glassmaker NDS Technologies, who saw its birth at Kontes:
"The 19/22 Glassware kit was developed jointly by Eric Nyberg from Kontes Glass and Dr Howard Martin from Lake Forest College in the late 1950's. . .they wanted to scale down the size of the glassware from the traditional 24/40 glassware to something smaller so it could be issued as a complete kit to a student and locked in his lab drawer. . .The next size down from 24/40 is 19/38 but the joint length was too long to allow us to scale down the kit (and) fit into a standard lab bench drawer. The 19/22 medium length joint was the best trade off at the time. . .The packaging of the kit was so popular that during the early 1960's production had to be allocated. The overwhelming success of the 19/22 glassware started the development of an extensive line of 14/20 glassware under the Bantamware® brand."
It's my impression that the 14/20 glassware has been taking over the student market in recent years as well, what with the move to smaller and smaller amounts of solvents and reagents. That makes me wonder if 19/22 glass has a future, which means that I'll probably find some lunatic reason to switch my small-scale stuff to it really soon, giving me the most oddball glass collection in the place. . .
Category: Life in the Drug Labs
May 29, 2008
Since I was talking the other day about the analytical habit of mind, this is a good time to link to an article by someone who has it like few other people alive: Freeman Dyson, who is thankfully still with us and still thinking hard. At the moment, he seems to be thinking about something that involves chemistry, physics, economics, and plenty of politics.
He has an article in the latest New York Review of Books that is one of the most sensible things I have ever seen on the issue of global warming. I strongly urge people to read it, because it’s a perspective that you don’t often see. (It ends, in fact, with a small note of despair at how seldom that particular viewpoint comes up). I found it particularly interesting, as you might guess, because I agreed with it a great deal.
Dyson stipulates at the beginning that carbon dioxide levels are, in fact, rising, and that they have been for some time. And he also is willing to stipulate that this will lead, other factors being equal, to a rise in global temperatures. He doesn’t get into the details, although there are endless details to get into, but goes on to make some larger points.
One of them is economic. One of the books he’s reviewing, by economist William Nordhaus, is an attempt to work out the best course of action. Nordhaus is not denying a problem, to put it mildly: his estimate comes out to about 23 trillion dollars of harm in the next hundred years (in constant dollars, yet) if nothing is done at all. The question is, how much will the various proposed solutions cost in comparison?
His numbers come out this way: the best current policy he can come up with, a carefully tuned carbon tax that increases year by year, comes out to only 20 trillion dollars of damage, as opposed to 23 – three trillion constant dollars to the good. The Kyoto Protocol, turned down by the US Senate during the Clinton years, comes out to 22 trillion dollars of harm (one trillion to the good) if the US were to participate, and completely even (no good whatsoever) without the US. The Stern Review plan, endorsed by the British government, comes out to 37 trillion dollars of total harm, and Al Gore’s proposed policies come out to 44 trillion dollars: that is, twenty-one trillion dollars worse than doing nothing at all.
As Dyson correctly points out, these latter two proposals appear to be “disastrously expensive”. And the problem with such courses of action is that this money could be used for something better: Nordhaus also calculates the effect of finding some reasonably low-cost method to cut back on carbon dioxide emissions, such as a more efficient means of generating solar or geothermal power, the advent of genetically engineered plants with a high carbon-sequestering ability, etc. That general route comes out to roughly 6 trillion dollars of total harm, which is seventeen trillion better than doing nothing (and thirty-eight trillion better than the Full Albert). That’s by far the most attractive solution, if it can be realized. But doing an extra ten or twenty trillion dollars of damage to the global economy would make that rather unlikely.
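The arithmetic behind these comparisons is simple enough to lay out. The harm figures below are the Nordhaus numbers quoted above, in trillions of constant dollars of total harm over the next century; the short policy labels are my own shorthand:

```python
# Nordhaus's figures as quoted above: total projected harm over the next
# century, in trillions of constant dollars, where "harm" includes both
# climate damage and the cost of the policy itself.
BASELINE = 23  # do nothing at all

policies = {
    "optimal carbon tax": 20,
    "Kyoto (with US)": 22,
    "Kyoto (without US)": 23,
    "Stern Review plan": 37,
    "Gore proposals": 44,
    "low-cost backstop technology": 6,
}

for name, harm in sorted(policies.items(), key=lambda item: item[1]):
    delta = harm - BASELINE
    verdict = "better" if delta < 0 else ("worse" if delta > 0 else "even")
    print(f"{name}: {harm}T total harm, {abs(delta)}T {verdict} than doing nothing")
```

Sorted this way, the gap is hard to miss: the cheap-technology route beats doing nothing by 17 trillion, while the Stern and Gore plans come out 14 and 21 trillion worse than doing nothing at all.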
And there are other effects. To quote Dyson:
” The practical consequence of the Stern policy would be to slow down the economic growth of China now in order to reduce damage from climate change a hundred years later. Several generations of Chinese citizens would be impoverished to make their descendants only slightly richer. According to Nordhaus, the slowing-down of growth would in the end be far more costly to China than the climatic damage.”
But there’s a factor that neither of the books he reviews mentions: that atmospheric carbon dioxide exchanges, on a relatively fast time scale, with the Earth’s vegetation. About eight per cent of it a year cycles back and forth, and that holds out hope for a biotech solution. Engineered organisms could fix this carbon into useful forms, or (failing that) just take it out of circulation completely. But we need to go full speed ahead on research to realize that.
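That eight per cent figure implies a usefully short turnover time, as a back-of-the-envelope calculation shows (treating the exchange as a simple constant-rate process, which is my simplification, not Dyson's):

```python
# If roughly 8% of atmospheric CO2 exchanges with vegetation each year,
# the implied mean turnover time is about 1/0.08 = 12.5 years -- fast enough
# for engineered carbon-fixing organisms to matter on a scale of decades
# rather than centuries.
exchange_fraction_per_year = 0.08
turnover_years = 1 / exchange_fraction_per_year
print(f"approximate turnover time: {turnover_years:.1f} years")  # 12.5 years
```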
The last part of his review addresses a larger question. Environmentalism, he states, is now more of a religious question than anything else. (Other people have realized that, and many who do bemoan the fact, but Dyson has no problem with it, saying that the ethics of environmentalism are “fundamentally sound”.) But here’s his problem:
”Unfortunately, some members of the environmental movement have also adopted as an article of faith the belief that global warming is the greatest threat to the ecology of our planet. That is one reason why the arguments about global warming have become bitter and passionate. Much of the public has come to believe that anyone who is skeptical about the dangers of global warming is an enemy of the environment. The skeptics now have the difficult task of convincing the public that the opposite is true. Many of the skeptics are passionate environmentalists. They are horrified to see the obsession with global warming distracting public attention from what they see as more serious and more immediate dangers to the planet. . .”
The distressing thing, as he mentions, is that many organizations (including, I'm sorry to say, the Royal Society among other groups of scientists), have decided that the issue is settled and that anyone dissenting from this view is to be slapped down. As for me, I’m not completely convinced by the current climate data, so I probably am to the right even of Dyson on this issue. Here he is, though, willing to stipulate that most of the basic assumptions are true, but finding no place for someone who can do that and still not see global warming as the Single Biggest Issue Of Our Time.
I know how he feels: I consider myself an advocate of the environment, but I think the best way to preserve it is to do more genetic engineering rather than less. Better crops will mean that we don’t have to plow up more land to feed everyone, and we won’t have to dump as many insecticides and herbicides on that land we’re using. That means that I also think the best way to preserve unspoiled spaces is to do less organic farming, and not more: organic farming, particularly the hard-core varieties, uses too much land to generate too little food, and it does so mainly to give people in wealthy countries a chance to feel good about themselves.
And I think the best way to preserve wild areas and biodiversity is to have more free trade and economic development, not to slow it down. Richer countries have lower birth rates, for one thing. (I actually think that the planet would be better off with fewer people on it, but I’m not willing to achieve that goal by killing off a few billion of us).
And finally, economic growth is what’s giving us the chance to find technologies to get us out of our problems. I know that there’s another way to look at it – that the technology we have got us into this problem, and that we should reverse course. But I don’t think that’s even possible, or desirable. I’d rather have engineered plants cleaning out the atmosphere, and I’d rather have electricity from fusion or orbiting solar arrays. I’d rather find cheaper ways to get some of our fouler industries off the planet entirely, and mine the asteroids and comets. I’d rather people get richer and smarter, with more time and resources to do what they enjoy. How we’re going to do any good by putting on hair shirts and confessing our sins escapes me.
Category: Business and Markets | Current Events | General Scientific News
May 28, 2008
So Takeda has opened up its roomy wallet once again, and signed on with Alnylam for a nonexclusive partnership in oncology and metabolics. The InVivoBlog has all the details, but the main point is that Takeda had to put $100 million down at the beginning, with all the milestones, options, and extras coming after that. And Alnylam’s CEO seems to be saying that he’s not going to bother with any offers down in the mere double-digit millions, so don’t waste the man’s time. Roche didn’t – they signed a non-exclusive deal of their own with the company last year.
There are several interesting things about this. One is that Takeda is really in a deal-making mode, apparently, which (historically) has been unusual for a Japanese company. But no Japanese drug company has ever quite been in the position that they find themselves in – a big international player with patent expirations coming – so I guess we should expect something new. More remarkable, though, is the nonexclusive nature of all these deals that Alnylam is making. Other things being equal, of course, larger drug companies much prefer exclusive deals, or a complete buyout. That's what Merck did with Sirna in this same area, in what was no cheap deal, and one that led to Alnylam terminating their own Merck agreement. In this case, though, the amount of money for such terms has apparently been too much for anyone to handle, or Alnylam has perhaps just refused to go exclusive. It’s worth thinking about the position they feel they must be in, to make that stick.
The last time I can remember a situation like this was when the genomics frenzy was on. And I think the RNAi business is turning into something very similar, for very similar reasons: fear and greed, the two flywheels of the financial world. We'll take the greed as stipulated, since the whole purpose of modern capitalism is to harness its mighty and potentially destructive force. But the fear, in both cases, was the very real fear of being left behind when a rare landscape-altering technology is potentially coming on. If there really had been dozens of good ready-for-prime-time targets lurking out there in the genomic data, well, the companies that sewed them up would do very well, and the ones that didn’t would eat dirt. So better to spend the money, right? And so it is with RNA interference: if it really does work therapeutically, there are going to be a lot of previously-undruggable targets within reach, as well as a lot of new shots at the ones we already know. So. . .better to spend the money again?
I suppose there’s no way around it, even though I’m not convinced that RNAi is going to deliver any time soon (or at all?). After all, its difficulties seem (to me) very much like those of antisense DNA, subject of yet another train’s-leaving-the-station investing frenzy in the late 1980s and early 1990s. For one thing, delivering these oligonucleotides in a living human is definitely nontrivial, to use a word that scientists and engineers use to mean anything from “pretty damn hard” to “impossible at the present level of human civilization”. I don’t think that RNA therapy is in the second category, but I do think that it’s in the first category good and hard.
And there’s the whole question of off-target effects, which I’ve spoken about here before. These may not be show-stoppers, true, but the problem is that we don’t know if they are or not. At the very least, it’s a complicating factor, and a big one – and the fact that it’s out there makes you wonder what other interesting complications are yet to be discovered as we go into humans.
So no, RNAi is not going to remake the landscape later this year or anything. It’s going to be a long business, with (I feel sure of it) plenty of expensive head-slapping and hand-wringing along the way. But all that said, can a company like Takeda (or Roche, or Merck, or. . .) afford to ignore it? After all, by the time the kinks are worked out of the technology, it’s presumably going to be too late to buy into it. (Or if you can, it’s going to make the 2008 prices look like the discount rack). Perhaps it’s better to just decide that that’s what the money’s for, to buy into things that could pay off big, with the realization that most of those purchases are going to look idiotic in ten years. . .
Category: Business and Markets | Drug Industry History
May 27, 2008
My wife and I were talking over dinner the other night – she’d seen some interview with the owner of a personal data protection service, and he made the pitch for his company by saying something about how out of (say) a million customers, only one hundred had ever reported any attempts on their credit information or the like. And my wife, who spent many years in the lab, waited for what seemed to her the obvious follow-up question: how many people out of a million who didn’t subscribe to this guy’s service report such problems?
But (to her frustration) that question was never asked. We speculated about the reasons for that, partly out of interest and partly as a learning experience for our two children, who were at the table with us. We first explained to them that both of us, since we’d done a lot of scientific experiments, always wanted to see some control-group data before we made up our minds about anything – and in fact, in many cases it was impossible to make up one’s mind without it.
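To make the missing comparison concrete: the pitch gives one number (100 incidents per million subscribers), but it means nothing without the corresponding rate among non-subscribers. The control figures in this sketch are entirely made up, purely to show the shape of the question:

```python
# The pitch supplies only one number: 100 incidents per 1,000,000 subscribers.
# Whether that's impressive depends entirely on the unstated control rate.
# The non-subscriber counts below are hypothetical, just for illustration.
subscriber_incidents, subscribers = 100, 1_000_000
subscriber_rate = subscriber_incidents / subscribers

for control_incidents in (50, 100, 1000):  # made-up non-subscriber counts
    control_rate = control_incidents / 1_000_000
    if control_rate > subscriber_rate:
        verdict = "service looks helpful"
    elif control_rate < subscriber_rate:
        verdict = "service looks worse than nothing"
    else:
        verdict = "service looks useless"
    print(f"control rate {control_rate:.6f} vs {subscriber_rate:.6f}: {verdict}")
```

The same headline number supports all three conclusions, depending on a figure the interviewer never asked for. That, in miniature, is why the control group matters.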
After a brief excursion to talk about the likely backgrounds and competencies of news readers on TV, we then went on to say that looking for a control set isn’t what you could call a universal habit of mind, although it's a useful one to have. You don’t have to have scientific training to think that way (although it sure helps), but anyone with a good eye for business and finance asks similar questions. And as we told the kids, both of us had also seen (on the flip side) particularly lousy scientists who kept charging ahead without good controls. Still, the overlap with a science and engineering background is pretty good.
What I’ve wondered since that night is how many people, watching that same show, had the same question. That would be a reasonable way to determine how many of them have the first qualification for analyzing the data that come their way. And I’m just not sure what the percentage would be, for several reasons. For one thing, I’ve been working in the lab for years now, so such thinking is second nature to me. And for another, I’ve been surrounded for an equal number of years by colleagues and friends who tend to have science backgrounds themselves, so it’s not like my data set is representative of the population at large.
So I’d be interested in what the readership thinks, not that the readership around here is any representative slice of the general population, either. But in your experience, how prevalent do you think that analytical frame of mind is? The attitude I’m talking about is the one that when confronted with some odd item in the news, says “Hmm, I wonder if that's true? Have I got enough information to decide?" It's an essential part of being a scientist, but if you're not. . .?
Category: General Scientific News | Who Discovers and Why
May 23, 2008
Something that’s come up in the last few posts around here is the way that we chemists think about the insides of enzymes. It’s a tricky subject, because when you picture things on that scale, the intuition you have for objects starts to betray you.
Consider water. We humans have a pretty good practical understanding of how water behaves in the bulk phase; we have the experience. But what about five water molecules sitting in the pocket of an enzyme? That’s not exactly a glass from the tap. These guys are interacting with the protein as much as (or more than) they’re interacting with each other, and our intuition about water molecules is based on how they act when they’re surrounded by plenty of their own.
And if five water molecules are hard to handle, how about one? There’s no hope of seeing any bulk properties now, because there’s no bulk. We’re more used to having trouble in the other direction, predicting group behavior from individuals: you can’t tell much about a thousand-piece jigsaw puzzle from one piece that you found under the couch, and you wouldn’t be able to say much about the behavior of an ant colony from observing one ant in a jar. And neither of those is worth very much, compared to its group. But with molecules, the single-ant-in-a-jar situation is very important (that’s a single water molecule sitting in the active site of an enzyme), and knowledge of ant social behavior or water’s actions in a glass doesn’t help much.
Larger molecules than water are our business, of course, and those are tricky, too. We can study the shape and flexibility of our drug candidates in solution (by NMR, to pick the easiest method), and in the solid phase, surrounded by packed arrays of themselves (X-ray crystal structures). But the way that they look inside an enzyme's active site doesn't have to be related to either of those, although you might as well start there.
As single-molecule (and single-atom) techniques have become more possible, we're starting to get an idea of how small clusters of them have to be before they stop acting like tiny pieces of what we're used to and start acting like something else. But these experiments are usually done in isolation, in the gas phase or on some inert surface. The inside of a protein is another thing entirely; molecules there are the opposite of isolated. And studying them in those small spaces is no small task.
Category: In Silico
May 22, 2008
Benjamin Cravatt at Scripps has another interesting paper out this week – by my standards, he hasn’t published very many dull ones. I spoke about some earlier work of his here, where his group tried to profile enzymes in living cells and found that the results they got were much different than the ones seen in their model systems.
This latest paper is in the same vein, but addresses some more general questions. One of his group members (Eranthi Weerapana, who certainly seems to have put in some lab time) started by synthesizing five simple test compounds. Each of them had a reactive group on them, and each molecule had an acetylene on the far end. The idea was to see what sorts of proteins combined with the reactive head group. After labeling, a click-type triazole reaction stuck a fluorescent tag on via the acetylene group, allowing the labeled proteins to be detected.
All this is similar to the previous paper I blogged about, but in this case they were interested in profiling these varying head groups: a benzenesulfonate, an alpha-chloroamide, a terminal enone, and two epoxides – one terminal on a linear chain, and the other a spiro off a cyclohexane. All these have the potential to react with various nucleophilic groups on a protein – cysteines, lysines, histidines, and so on. Which reactive groups would react with which sorts of protein residues, and on which parts of the proteins, was unknown.
There have been only a few general studies of this sort. The most closely related work is from Daniel Liebler at Vanderbilt, who's looking at this issue from a toxicology perspective (try here, here, and here). And an earlier look at different reactive groups from the Sames lab at Columbia is here, but that was much less extensive.
Cravatt's study reacted these probes first with a soluble protein mix from mouse liver – containing who knows how many different proteins – and followed that up with similar experiments with protein brews from heart and kidney, along with the insoluble membrane fraction from the liver. A brutally efficient proteolysis/mass spectroscopy technique, described by Cravatt in 2005, was used to simultaneously identify the labeled proteins and the sites at which they reacted. This is clearly the sort of experiment that would have been unthinkable not that many years ago, and it still gives me a turn to see only Cravatt, Weerapana, and a third co-author (Gabriel Simon) on this one instead of some lab-coated army.
Hundreds of proteins were found to react, as you might expect from such simple coupling partners. But this wasn’t just a blunderbuss scatter; some very interesting patterns showed up. For one thing, the two epoxides hardly reacted with anything, which is quite interesting considering that functional group’s reputation. I don’t think I’ve ever met a toxicologist who wouldn’t reject an epoxide-containing drug candidate outright, but these groups are clearly not as red-hot as they’re billed. The epoxide compounds were so unreactive, in fact, that they didn’t even make the cut after the initial mouse liver experiment. (Since Cravatt’s group has already shown that more elaborate and tighter-binding spiro-epoxides can react with an active-site lysine, I’m willing to bet that they were surprised by this result, too).
The next trend to emerge was that the chloroamide and the enone, while they labeled all sorts of proteins, almost invariably did so on their cysteine (SH) residues. Again, I think if you took a survey of organic chemists or enzymologists, you’d have found cysteines at the top of the expected list, but plenty of other things would have been predicted to react as well. The selectivity is quite striking. What’s even more interesting, and as yet unexplained, is that over half the cysteine residues that were hit only reacted with one of the two reagents, not the other. (Liebler has seen similar effects in his work).
Meanwhile, the sulfonate went for several different sorts of amino acid residues – it liked glutamates especially, but also aspartate, cysteine, tyrosine, and some histidines. One of the things I found striking about these results is how few lysines got in on the act with any of the electrophiles. Cravatt's finely tuned epoxide/lysine interaction that I linked to above turns out, apparently, to be a rather rare bird. I’ve always had lysine in my mind as a potentially reactive group, but I can see that I’m going to have to adjust my thinking.
Another trend that I found thought-provoking was that the labeled residues were disproportionately taken from the list of important ones, amino acids that are involved in the various active sites or in regulatory domains. The former may be intrinsically more reactive, in an environment that has been selected to increase their nucleophilicity. And as for the latter, I’d think that’s because they’re well exposed on the surfaces of the proteins, for one thing, although they may also be juiced up in reactivity compared to their run-of-the-mill counterparts.
Finally, there’s another result that reminded me of the model-system problems in Cravatt’s last paper. When they took these probes and reacted them with mixtures of amino acid derivatives in solution, the results were very different than what they saw in real protein samples. The chloroamide looked roughly the same, attacking mostly cysteines. But the sulfonate, for some reason, looked just like it, completely losing its real-world preference for carboxylate side chains. Meanwhile, the enone went after cysteine, lysine, and histidine in the model system, but largely ignored the last two in the real world. The reasons for these differences are, to say the least, unclear – but what’s clear, from this paper and the previous ones, is that there is (once again!) no substitute for the real world in chemical biology. (In fact, in that last paper, even cell lysates weren’t real enough. This one has a bit of whole-cell data, which looks similar to the lysate stuff this time, but I’d be interested to know if more experiments were done on living systems, and how close they were to the other data sets).
So there are a lot of lessons here - at least, if you really get into this chemical biology stuff, and I obviously do. But even if you don't, remember that last one: run the real system if you're doing anything complicated. And if you're in drug discovery, brother, you're doing something complicated.
Category: Biological News | Toxicology
May 21, 2008
I’ve been in this business for almost 19 years now. That means that the drugs that were discovered during my first few years of work are now either on the market or expected to be there soon. Fine, I spent my first eight years at Schering-Plough, so what do I see when I look back? There’s ezetimibe, discovered by sheer chance (but developed by sheer determination), and the thrombin receptor antagonist, squirrelly chemical matter from a failed Alzheimer’s program, a compound that a lot of medicinal chemists wouldn’t have even made in the first place. Well, now.
This is not a whack at Schering-Plough. Far from it. These are compounds that any organization would have been glad to find, but they weren’t exactly found by direct routes. This is a general phenomenon. You’d think, surveying the industry, that a lot of drugs are discovered, at least partly, by outright luck. And as far as I can tell, you’d be right. Realizing that tends to bring on several different reactions, depending on your world view:
That can’t be right. I’ve seen this one mostly from people outside the immediate realm of drug discovery, well-meaning people who just can’t believe that this is how it works. The harm comes when these well-meaning folks decide that the problem is that the industry is just behind the times, and that we wouldn’t have to do it this way if we’d just adopt some modern management techniques – ISO whatever-thousand, umpteen-sigma, Quality Assurance Tiger Team Circle Continuous Improvement Metrics, or what have you. Harm generally ensues.
That shouldn’t be right. Some of the people in this category are actually offended by the sight of luck calling so many of the shots, while others are just hoping for a more productive way of doing things. A lot of computational approaches have come from this attitude: “We wouldn’t have to run around stumbling over stuff if we’d just turn on this great new flashlight that’s just been invented.” Nothing’s quite illuminated the landscape in the way that people have hoped, although efforts continue, as they should.
OK, if we’re stumbling around, let’s stumble faster. This is the basic idea behind the improvements in high-throughput screening and combichem in the late 1980s and the 1990s. For a while, the more optimistic folks thought that this would be enough: just crank out millions of compounds, and the drugs would come – they’d have to. It didn’t work that way, partly because the space of usable chemical structures is much, much larger than we can usefully deal with. But that’s not to say that cranking out more compounds and screening them more quickly isn’t a good idea – it’s just not the good idea.
Well, stumble more purposefully, then. I think that this is where most drug discovery organizations are (or should be). You admit that luck has a big role to play, but you go for the “Fortune favors the prepared mind” approach. Don’t rely just on random runs of odd structures to fill your screening banks – but be sure to put some in, because you never know. Turn over every rock – but recognize that you can’t turn over every rock everywhere, so try to pick the most likely place to start.
The problem with this approach is that it doesn’t promise much, at least compared to the various You’re Doing It Wrong approaches, and it doesn’t make a very compelling PowerPoint slide. But although it’s the blood-toil-tears-and-sweat option, I think that for now it’s the right one. Until something better comes along, that is, and the fascinating problem is that something better is always coming along. Given this state of affairs, why shouldn’t it?
I have no room to talk, of course. I can be as much of a sucker as the next medicinal chemist for some new approach that’s going to change everything – mainly because I look around and realize that a lot of what we do would be better off changing. All the wasted effort. . .you can get downright melancholy if you look at the business from the saddest angles. For all my self-proclaimed realism, I probably have more of that second response in me than I like to admit. The idea is to keep trying for something dramatically better, while realizing that even a smaller improvement would still be worth a lot. . .
Category: Drug Development | Drug Industry History
May 20, 2008
For those who were wondering, my copper reactions the other day worked out just fine. They started out a beautiful blue (copper iodide and an amino acid in straight DMSO – if that’s not blue it’s maybe going to be green, and if it’s not either one you’ve done something wrong). Of course, the color doesn’t stay. The copper ends up as part of a purple-brown sludge that has to be filtered out of the mix, which is the main downside of those Ullmann reactions, no matter how people try to scrub them up for polite company.
And DMSO is the other downside, because you have to wash that stuff out with a lot of water. That’s one of the lab solvents that everyone has heard of, even if they slept through high school chemistry. But it’s not one that we use for reactions very much, because it’s something of a pain. It dissolves most everything, which is a good quality, but along with it comes the ability to contaminate most everything. If your product is pretty greasy and nonpolar, you can partition the reaction between water and some organic solvent (ether’s what I used this time), and wash it around a lot. But if your product is really polar, you could be in for a long afternoon.
That mighty solvation is something you need to look out for if you spill the stuff on yourself, of course. DMSO is famous for skin penetration (no, I have no idea if it does anything for arthritis). And while many of my compounds are not very physiologically active, I’d rather not dose myself with them to check those numbers. At the extreme end of the scale, a solution of cyanide in DMSO is potentially very dangerous stuff indeed. I’ve done cyanide reactions like that, many times, but always while paying attention to the task at hand.
Where DMSO really gets used is in the compound repository. That dissolves-everything property is handy when you have a few hundred thousand compounds to handle. The standard method for some years has been to keep compounds in the freezer at some defined concentration in DMSO – the solvent freezes easily, down around where water does. (Not so! Actually, I've seen it freeze in a chilly lab a couple of times, now that I'm reminded of that in the comments to this post. Pure DMSO solidifies around 17 to 19 C, which is about 64 F – a bit lower with those screening compounds dissolved in it, though.)
But there are problems. For one thing, DMSO isn’t inert. That’s another reason it doesn’t get as much use as a lab solvent; there are many reaction conditions under which it wouldn’t be able to resist joining the party. You can oxidize things by leaving them in DMSO open to air, which isn’t what you want to do to the compound screening collection, so the folks there do as much handling under nitrogen as they can. Compounds sitting carelessly in DMSO tend to turn yellow, which is on the way to red, which is on the way to brown, and there are no pure brown wonder drugs.
Another difficulty is that love for water. Open DMSO containers will pull water in right out of the air, and a few careless freeze/thaw cycles with a screening plate will not only blow your carefully worked out concentrations, it may well also start crashing your compounds out of solution. The less polar ones will start deciding that pure DMSO is one thing, but 50/50 DMSO/water is quite another. So not only do you want to work under nitrogen, if you can, but dry nitrogen, and you want to make sure that those plates are sealed up well while they’re in the freezer. (As an alternative, you can go ahead and put water in from the start, taking the consequences). All of these concerns begin to wear down the advantages of DMSO as a universal solvent, but not quite enough to keep people from using it.
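The "blow your concentrations" problem is, at bottom, just dilution arithmetic. Here's a minimal sketch of it (with a hypothetical helper function, and assuming ideal additive mixing, which DMSO/water isn't quite – the mixture actually contracts a bit):

```python
# Minimal sketch of how absorbed water dilutes a nominal DMSO stock.
# Assumes ideal, additive volumes - a simplification, since DMSO/water
# mixtures actually contract slightly on mixing.
def diluted_conc(nominal_mM: float, water_vol_fraction: float) -> float:
    """Concentration after absorbed water makes up the given
    fraction of the final volume."""
    return nominal_mM * (1.0 - water_vol_fraction)

# A nominal 10 mM stock that has drifted to 50/50 DMSO/water
print(diluted_conc(10.0, 0.5))  # -> 5.0 mM, half the assumed concentration
```

And that's before the compound starts crashing out, at which point the true solution concentration drops even further than the arithmetic suggests.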
And what about the compounds that don’t dissolve in the stuff? Well, it’s a pretty safe bet that a small molecule that can’t go into DMSO is going to have a mighty hard time becoming a drug, and it’s a very unattractive lead to start from, too. That’s the sort of molecule that would tend to just go right through the digestive tract without even noticing that there are things trying to get it into solution. And as for something given i.v., well, if you can’t get it to go into straight DMSO, what are the chances you’re going to get it into some kind of saline injection solution? Or the chances that it won’t crash out in the vein for an instant embolism? No, the zone of non-DMSO-soluble small organics is not a good place to hunt. We’ll leave proteins out of it, but if anyone knows of a small molecule drug that can’t go into DMSO, I’d like to hear about it. Taxol, maybe?
Category: Drug Assays | Life in the Drug Labs
May 19, 2008
OK, drugs generally bind to some sort of cavity in a protein. So what’s in that cavity when the drug isn’t there? Well, sometimes it’s the substance that the drug is trying to mimic or block, the body’s own ligand doing what it’s supposed to be doing. But what about when that isn’t occupying the space – what is?
A moment’s thought, and most chemists and biologists will say “water”. That’s mostly true, although it can give a false impression. When you get X-ray crystal structures of enzymes, there’s always water hanging around the protein. But at this scale, any thoughts of bulk water as we know it are extremely misleading. Those are individual water molecules down there, a very different thing.
There seem to be several different sorts of them, for one thing. Some of those waters are essential to the structure of the protein itself – they form hydrogen bonds between key residues of its backbone, and you mess with them at your peril. Others are adventitious, showing up in your X-ray structure in the same way that pedestrians show up in a snapshot of a building’s lobby. (That’s a good metaphor, if I do say so myself, but to work that first set of water molecules into it, you’d have to imagine people stuck against the walls with their arms spread, helping to hold up the building).
And in between those two categories are waters that can interact with both the protein and your drug candidate. They can form bridges between them, or they can be kicked out so that your drug interacts directly. Which is better? Unfortunately, it’s hard to generalize. There are potent compounds that sit in a web of water molecules, and there are others that cozy right up to the protein at every turn.
But there's one oddity that just came out in the literature. This one's weird enough to deserve its own paper: the protein beta-lactoglobulin appears to have a large binding site that's completely empty of water molecules. It's a site for large lipids to bind, so it makes sense that it would be a greasy environment that wouldn't be friendly to a lot of water, but completely empty? That's a first, as far as I know. When you think about it, that's quite weird: inside that protein is a small zone that's a harder vacuum than anything ever seen in the lab: there's nothing there at all. It's a small bit of interstellar space, sitting inside a protein from cow blood. Nature abhors a vacuum, but apparently not this one.
Category: Biological News
May 16, 2008
A good rule to follow: hold onto your wallet when two exciting, complicated fields of research are combined. Nature reported earlier this spring on a good example of this, the announcement by a small biotech called PrimeGen that they'd used carbon nanotubes to reprogram stem cells. (Here's a good article from VentureBeat on the same announcement, and there's an excellent piece on the announcement and the company in Forbes).
Stem cells and nanostructures are two undeniably hot areas of research. And also undeniable is the fact that they're both in their very early days - the amount of important information we don't know about both of these topics must be really impressive, which is why so many people are beavering away at them. So what are the odds of getting them to work together? Not as good as the odds that someone thought the combination would make a good press release, I'm afraid.
The PrimeGen web site, though a bit better than that VentureBeat article describes it, still has some odd notes to it. I particularly like this phrase: "PrimeGen’s broad intellectual property portfolio is founded on groundbreaking platform technologies invented by our team of dedicated and visionary scientists." Yep, we talk that way all the time in this business. You also have to raise an eyebrow at this part: "Disease and injury applications of PrimeCell™ include Alzheimer’s Disease, Cardiac Disease, Diabetes, Lupus, Multiple Sclerosis, Leukemia, Muscular Dystrophy, Parkinson’s Disease, Rheumatoid Arthritis, Spinal Cord Injury, Autoimmune Disease, Stroke, Skin Regeneration and Wound Healing." It'll mow your yard, too, if you're willing to participate in the next funding round.
The next sentence is the key one: "The extent to which stem cells can be used to treat injury and illness has yet to be fully evaluated. . ." You can say that again! In fact, I wouldn't mind seeing that in 36-point bold across the top of every stem cell company web page and press release. But what are the chances of that? As good as the chance that nanotechnology will suddenly provide us with a way to make the stem cells do what we want, I'm afraid. . .
Category: Biological News | Press Coverage
May 15, 2008
I was running a copper-catalyzed coupling reaction the other day when my summer intern asked me how it worked. I showed her the mechanism that the authors of the paper had proposed, but pointed out that it was mostly hand-waving. The general features are probably more or less right: the copper iodide presumably does form some kind of soluble complex with the amino acid that’s needed in the reaction mix, and that may well form some sort of complex with the aryl halide, which opens up the ring to nucleophilic substitution, etc. If this were an exam, I’d give full points for that one.
But a lot of these couplings are, as I pointed out to her, very hazily worked out. The Ullmann reaction, in various forms, has been with us for many decades, and there are more variations on it than you can count. If it always worked reasonably well, or if people had any strong ideas about how it did so, the literature on it wouldn’t be in the shaggy shape it is. Copper chemistry in particular has been (simultaneously) a very useful area for people to discover new reactions, and a horrible trackless swamp for people trying to explain how they work.
All you have to do is look at the vicious exchanges between Bruce Lipshutz and Steve Bertz during the 1990s about whether such a thing as a “higher-order cuprate” exists. I have absolutely no intention of reconstructing this argument; I would have to be paid at a spectacular hourly rate to even attempt it. It's enough to say that the arguments raged, in an increasingly personal manner, about what state the copper metal was in, what ligands coordinated to it, and what the active form of these reagents might be (as opposed to what the bulk of the mixture was at any given time). It culminated in what must be one of the most direct titles for a scientific paper I've ever seen: It's on lithium! An answer to the recent communication which asked the question: 'if the cyano ligand is not on copper, then where is it?'. That's in Chemical Communications 7, 815 (1996), if you're interested (here's the PDF for subscribers). Bertz continued to shell Lipshutz's position past the time when any fire was being returned, as far as I can tell, and continues to work in the area. Lipshutz, for his part, hasn't published on the higher-order cuprates in some time (being no doubt heartily sick of the whole topic), but has kept up a steady stream of work on new reactions involving copper, nickel, and other metals.
So if well-qualified researchers, brimming with grad students, postdocs, and grant money, can argue for years about copper mechanisms, I'm going to stay out of it. As time goes on, I'm increasingly indifferent to reaction mechanisms, anyway. I want to get product out the other end of the reaction. And while there are times when knowing the mechanism can help reach that goal, those times do not occur as frequently as you might hope.
Category: Chemical News | Inorganic Chemistry | Life in the Drug Labs
May 14, 2008
I have a summer intern this year, and she has (so far) not caused anything to burst into flames. That’s the first thing you ask of a summer student, and the fact that she’s gotten several reactions to work is just a welcome extra. A summer with no laboratory bonfires will be a successful summer, as far as I’m concerned.
That’s because I’ve experienced the alternative, as I’ve detailed here before. If most of the lab fire stories you hear start out with the phrase “We had this solvent still. . .”, the rest of them all seem to begin with “We had this summer undergrad student. . .” (You can imagine the flame-filled end to any story that starts out with a summer student distilling some solvent – that Venn diagram leaves you with no way out at all).
No, after watching an undergrad next door to me kick a four-liter jug of pyridine all over the floor, causing a shimmering wave of unspeakable pyridine vapors to almost knock me off my feet. . .and after watching another one walk away for two hours after setting up a reduced-pressure DMSO still, which inadvertently turned into a high-pressure apparatus and blew DMSO and calcium hydride all over the inside of a hood. . .and after watching them charcoal reactions by plugging heating apparatus straight into the wall outlet instead of into the Variac. . .and, well, you get the idea.
I should add that I was no great shakes as a summer undergrad myself. I did a summer after my sophomore year with Tom Goodwin, but didn't get a great deal accomplished (through no fault of his!). Then after my junior year, I worked with Dale Boger, back when he was at the University of Kansas, but I mostly (and rather slowly) found a list of conditions that don't work for inverse electron demand Diels-Alder reactions. But although I spilled some generous amounts of solvent, I didn't set anything on fire.
No, we're going to have a calmer and more productive summer around here. I have my student working on a problem I've had a longstanding interest in, one that needs some variables chased down and figured out. With any luck, enough data will be generated to make for an interesting publication late in the year, and everyone will come out ahead.
Category: Life in the Drug Labs
May 13, 2008
Schering-Plough has had its share of troubles over the years, but the company has also seen itself saved by some pretty unlikely compounds. Vytorin (ezetimibe/simvastatin) is the example I’ve spoken about here, and if the drug doesn’t seem like a savior at the moment, well, you have to keep in mind that it was the biggest thing for them since Claritin went off-patent ten years ago.
Now there’s another one potentially coming up. Expectations are building for a thrombin receptor antagonist compound, SCH 530348. And I have a history with this one, too: while the labs down one hallway from me were discovering ezetimibe, down the other hallway they were laying the foundation for this one. There’s a big difference, though, in the way I saw the two.
This thrombin antagonist is an unlikely drug for several reasons. For one thing, its structure is not the sort of thing most medicinal chemists would go out of their way to make. But there’s a good reason for that: to a first approximation, it wasn’t made with medicinal chemistry in mind. 530348 is based on a natural product called himbacine, whose fame, such as it is, rests on its properties as a semi-selective muscarinic antagonist. And that’s how Schering-Plough got interested in this class of compounds; thrombin had nothing to do with it.
At the time (early to mid 1990s) the company had a team working on Alzheimer’s disease, and I’ll go ahead and mention again that I was one of the people involved. (Five minutes on SciFinder would tell you that, anyway). We were quite interested in selective muscarinic antagonists, particularly for the m2 subtype, and himbacine was at the time one of the more selective compounds with that profile. So one of the group leaders at the company, Sam Chackalamannil, decided to synthesize it and do some SAR around the structure.
That was no small undertaking. Himbacine’s not one of the most complex natural products by any means, but it’s no stroll to the beach, either, especially when compared to the usual sorts of drug structures. It took a lot of time, a lot of ingenuity, and (most importantly) a lot of effort to do it. And I. . .well, I thought this was a terrible idea.
I really did. By the time himbacine itself got made, the project team had muscarinic compounds that were more selective and more potent (and a lot easier to make, to boot). I would listen to Chackalamannil’s people presenting their long, difficult routes during meetings, and I’d sit there imagining the company going slowly bankrupt if everyone adopted this approach, the revenue slowly sinking as the number of JACS communications rose. I couldn’t see the point, and although I don’t think I ever quite had the nerve to say so to Chackalamannil himself (hi, Sam!), I said it to plenty of other people.
So, is it time for me to eat crow? Well, one plateful, at least. Some of the himbacine analogs hit in the high-throughput screen for thrombin activity, to everyone’s surprise, and some further compounds (now shed of their muscarinic activity) were even better. The drug discovery effort culminated in 530348, which now might be about to benefit a huge number of people and make the company a ton of money, if everything goes well.
Of course, if these things hadn’t hit in the thrombin assay, I could have remained secure in my opinion. After all, they were never worth very much as muscarinics, as far as I know. (Of course, our muscarinic compounds, in the end, never were worth very much as Alzheimer’s drugs, which is something to keep in mind). So that’s the question: how likely is it for molecules like this to work? It’s very hard to answer that, but given this data point, I guess the answer is “at least a little more likely than I thought”. The very fact that they didn’t look like most other things in the screening deck was probably in their favor. I still think that these compounds were a long shot, but this is a business that lives on long shots. This one came through, and congratulations to everyone involved.
Category: Alzheimer's Disease | Cardiovascular Disease | Drug Development
May 12, 2008
One of the reasons I started this blog was that many people I met were interested in my job. Very few of them had ever talked to someone who discovered new medicines for a living, and a surprising number of them (well, surprising to me) had no idea of where medicines came from in the first place.
Talking to such folks (interested, but with no particular training in science) gave me some good practice in explaining the work. It helps that the kind of work I do is actually fairly easy to explain. There are a lot of details – as with any branch of science, the closer you look, the more you see – but I haven’t run across any key concepts that can’t be communicated in plain language. (It also helps that medicinal chemistry, as it’s actually practiced, uses an embarrassingly small amount of actual mathematics).
The toughest things to deal with are the parts of the field that actually touch on physics and math. My vote for the hardest everyday phenomenon to explain at anything past a superficial level is magnetism. So that means that explaining how an NMR machine works is not trivial. At least, explaining it in a way that a listener has a chance of understanding you isn’t – a while ago, I took up the challenge to try to explain it here in lay terms, and I haven’t done it yet, for good reason.
Explaining statistical significance is doable, but going much past that (principal components, the difference between Bayesian and frequentist approaches) takes some real care. And, of course, when you open the hood on chemical reactivity, the mechanisms of bond-forming and bond-breaking, you quickly find yourself in physics up to your armpits. It’s easier to stipulate, openly or by assumption, that there are such things as chemical bonds, and that some of them are stronger than others. You don’t want to start answering a question about why one group falls off your drug molecule easier than another one does, only to find yourself fifteen minutes later trying to explain the Pauli exclusion principle. Counterproductive.
But the basics of medicinal chemistry can be sketched out pretty quickly, which makes some of the more curious listeners wonder, after a while, why we aren’t better at it. The best example I can give them is to advance a quick, hand-waving explanation of, for example, how compounds get into cells. Then I point out that that explanation is unnervingly close to the best understanding we have of how compounds get into cells. The same holds for a number of other important processes, way too many of them.
And that's why drug discovery is simultaneously frustrating and fascinating. We know huge numbers of things, great masses of detail that can take years to piece together. And it's not enough. Some of the most important puzzle pieces are still weirdly ill-defined, and there are probably others whose existence we haven't even realized yet. I'd be willing to bet that if you scanned the whole history of pharmaceutical discovery, you'd find people at every point thinking "You know, in another thirty years they should have all this figured out". But the years go by, and they - we - don't. Give it another thirty years, you think?
Category: Blog Housekeeping | Life in the Drug Labs
May 8, 2008
Every few years, you hear talk of a renaissance in natural products-based drug discovery. Well, this news should postpone the next round of optimism for a bit longer: Merck is cutting their natural products program entirely. They've had a long history in that area, but no more. That C&E News item includes an interesting detail:
"The company disclosed that it would also be closing its 50-year-old natural products drug discovery operation based in Madrid after a Merck executive inadvertently included the plan in a PowerPoint presentation to an audience that included Merck employees."
Smooth move. I'm sure some interesting e-mails were exchanged around Rahway and Madrid after that one. When, when will we get the powerful regulatory oversight of PowerPoint technology that the masses have cried out for these many years?
The main thing I remember about Merck's operation in Madrid was when they made a big splash about ten years ago with a weird looking indole/quinone thing that directly activated the insulin receptor. It made the cover of Science and all sorts of press releases, and my biology colleagues started pestering me immediately. "Hey, you chemists keep saying that there's no point in running a small-molecule screen against the insulin receptor!"
Well, as it turned out, we were right. I assured my co-workers on the next floor that the Merck compound was one of the least likely drug candidate structures I'd ever seen, and that I'd be intensely surprised if it went anywhere. In fact, I told them, seeing it on the cover of Science actually decreased the likelihood that it was anything useful. If Merck really had a small-molecule insulin mimetic, I reasoned, the program would be a real stealth bomber, for fear of sending all sorts of other companies into the same chemical space too quickly. This one had all the signs of the people involved saying "You know, the only thing this stuff is good for is getting on the cover of Science."
So it proved, eventually. The compounds never went anywhere. It looks like the most recent natural product-derived compound that Merck got onto the market was Cancidas (caspofungin), and that was seven years ago. Mevacor (lovastatin) will stand as the modern high-water mark of Merck's natural product work - presumably from now on.
Category: Diabetes and Obesity | Drug Industry History
May 7, 2008
Update: here's the map that I was imagining, thanks to Andy in the comments section. It's on the Worldmapper site linked to below, but I missed it while putting the post together. Most of my speculations turned out to be reasonable, although Venezuela (for one) looks a bit better than I thought it would, and Iran looks a bit worse. Africa and the Islamic world are, as hypothesized, almost invisible.
I’d like to see a map of the world with country size dependent on the number of scientific publications and patents – perhaps you’d want to use publications per capita, or per educated capita. That's a cartogram, and although there are plenty of interesting ones on the web, I haven't found that one yet. The US would loom large, that’s for sure. Japan might be the most oversized compared to its geography, although Singapore would also be a lot easier to pick out. Western Europe would expand to fill up a lot of space, with Germany, England, and France (among others) taking up proportionally more room inside the region and (perhaps) Spain and Portugal taking up somewhat less. Switzerland would swell dramatically.
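The arithmetic behind a cartogram like that is simple enough to sketch: each country gets drawn with an area proportional to its share of the chosen metric rather than its geographic size. The country names and figures below are illustrative placeholders, not real publication data:

```python
# Sketch of cartogram scaling: area proportional to metric share.
# The per-capita publication figures here are made-up placeholders.
pubs_per_capita = {"CountryA": 2.0, "CountryB": 0.5, "CountryC": 0.1}

total = sum(pubs_per_capita.values())
area_share = {c: v / total for c, v in pubs_per_capita.items()}

# The country with 2.0 publications per capita would claim roughly
# three-quarters of the map, however small it is geographically.
for country, share in sorted(area_share.items(), key=lambda kv: -kv[1]):
    print(f"{country}: {share:.0%} of the map's area")
```

The per-capita choice matters a great deal, as the post notes: raw counts inflate populous countries, while per-capita figures let a Switzerland or a Singapore swell dramatically.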
South America would be dominated, I think, by Brazil, even more than it is on the map. You’d be able to find Argentina and Chile, but I think some other countries (like Venezuela) would dwindle in comparison. Africa, as it does so often in maps of this kind, would appear to have been terribly shrunk in all directions, with a few countries – Egypt, South Africa – partially resisting the effects. Moving on to Asia, India would appear even larger than it is, unless you went for the per-capita measurement to cut it back down a bit, and China would be a lot more noticeable than it was ten (or especially twenty) years ago.
Another region that would basically disappear would be the Middle East and most of the rest of the Islamic world. Iran would hang in there, smaller but recognizable, and you’d be able to find Pakistan, too. But the Arab countries (with the minor exception of Egypt) would nearly vanish. The figures from the Organization of the Islamic Conference (the multinational group involved) show that from 1995 to 2005, the Islamic countries contributed 2.5% of all the peer-reviewed scientific papers. That’s all the more interesting when you consider the amount of potential funding that washes around that part of the world.
This disconnect has been noticed by the region’s scientists, as well it might. The OIC has designated a committee of science ministers to help with a multiyear plan for modernizing things, but no one’s sure if any real money will be forthcoming. According to this Nature article (headlined "Broken Promises"), the OIC countries allocate less than 0.5% of their GDP to research and development. Most of the money promised just to fund that science committee never showed up. Lip service is, of course, a feature of politics (and politicians) everywhere, but I don't think I'm out of line if I suggest that it's very close to an art form in that part of the world.
And that's a very short-sighted approach. Many of these countries are sitting on huge amounts of money at the moment, which should be invested against the day that their oil runs out (or against the day that the world decides that it's not as desperate for oil as it once was). That latter day will, presumably, be hastened along by the countries who spend more on research. . .
Category: General Scientific News
May 6, 2008
Several recent papers in Neurology offer some interesting ideas on Alzheimer's disease. The one that's getting some headlines today suggests that long-term use of ibuprofen has a protective effect against the disease. Actually, the authors looked at all sorts of non-steroidal antiinflammatory drugs, but the correlation was strongest for ibuprofen. (That may be just because it's used so much, however, and not some intrinsic property of that specific drug). Interestingly, although some NSAIDs have been shown to inhibit formation of beta-amyloid (the protein fragment implicated for many years in Alzheimer's), no particular effect was seen for that class of drugs versus the other NSAIDs.
There's long been a suspicion that a lot of Alzheimer's pathology is driven by inflammation cascades, and although evidence has been mixed to date, this would seem to be good evidence for that idea. (More on this in another post). This wasn't a prospective study - they didn't enroll people just to test this idea - but a huge number of VA patients were studied retrospectively, and the authors appear to have done as much as possible to control for other variables. Of course, in an observational study like this one, you can't control for the biggest possible confounding factor: what if there's something about patients who end up taking NSAIDs more often that also keeps them from developing Alzheimer's? That certainly can't be ruled out, but I don't think there's room for that in most of the headlines. It's going to be tempting for worried patients to start taking ibuprofen to prevent dementia - and that just might work, still - but we really can't be sure without plenty of prospective trial data.
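That confounding worry is easy to make concrete with a toy simulation. Nothing here has anything to do with the actual VA data – the "health consciousness" variable and all the numbers are invented for illustration – but it shows how a drug with zero causal effect can still look protective in an observational comparison:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical unmeasured confounder: general "health consciousness"
health = rng.normal(size=n)

# Health-conscious patients are (in this toy model) more likely to be
# regular NSAID users...
nsaid_use = (health + rng.normal(size=n)) > 0.5

# ...and less likely to develop dementia, even though NSAIDs are given
# no causal effect whatsoever in this simulation
dementia = (-health + rng.normal(size=n)) > 1.5

# Users still show a lower dementia rate - a purely spurious "benefit"
rate_users = dementia[nsaid_use].mean()
rate_nonusers = dementia[~nsaid_use].mean()
print(rate_users < rate_nonusers)  # True
```

A prospective, randomized trial breaks this trap precisely because randomization severs the link between the confounder and who ends up taking the drug.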
Of course, not everything is good for preventing Alzheimer's. You can apparently add statins to that list. An examination of aging Catholic clergy (mostly nuns) showed no correlation at all between statin use and the development of the disease. This is one of those long-running studies that ends with death and subsequent brain histopathology, too, so it's pretty hard to argue with. Intellectually demanding work, though, does perhaps show a protective effect. Interestingly, this effect was even stronger in the cohort of patients that scored lower in assessment of overall intelligence, which makes sense in a way. (Cue the arguments about whether general intelligence exists, whether it can be measured, and if so, whether it's being measured in the correct way).
On the ever-profitable herbal front, you see all sorts of claims made for Ginkgo biloba extract and cognitive function, and there are a lot of contradictory studies (many of which, unfortunately, aren't worth much). This latest one won't help much - in the intent-to-treat analysis, no effect was seen. When they controlled for how well patients stuck to the treatment, then some correlations emerged between taking the extract and slower rates of memory loss. Unfortunately, a correlation (at the same level of significance) emerged with stroke and associated TIAs. My prediction: the ginkgo biloba sellers will trumpet the first set of statistics, assuming they need recourse to any data at all, and ignore the second one completely.
Such is the current state of Alzheimer's. To be honest, none of these studies (or most of the others in the same issue) would have been out of place back when I was working in the field in the early 1990s. The field awaits its breakthrough, and has been waiting for a long time. . .
Category: Alzheimer's Disease
May 5, 2008
We order chemicals from all sorts of suppliers – big, reputable outfits like Sigma-Aldrich-Fluka all the way down to places that none of us have even heard of before. In those latter cases, the primary question is always whether or not the reagent will actually show up, and the secondary one is how long it’ll take. There are some of those small suppliers who pad their catalogs with things that aren’t exactly available, not yet – but hey, they will be if someone orders them. They’ll just tell you it’s back-ordered, and tell someone in the lab to get cracking.
And when your compounds come in, they arrive in various forms. Glass or plastic bottles are the norm, naturally, with the occasional irritating (but presumably necessary) sealed-glass ampoule. But after some time in the lab, you can tell some of the suppliers from across the room. For example, the Japanese company TCI sends a lot of its compounds in normal-looking glass bottles, but these are first put inside capped plastic containers, like larger translucent versions of the ones that 35mm film probably still comes in. And once you’ve taken them out, their glass bottles have these odd plastic labels on them, which come up around the screw cap and are perforated around the cap’s border. The labels also use that same thin, fussy, serif font that the Japanese have been using for Roman-style letters for decades (since the war?) and which is only in recent years disappearing from their world.
Maybridge, a British vendor of all kinds of odd stuff, often sends its compounds in these weird little squat brown-glass bottles with small black caps on them. They must have the world supply of that particular bottle shape tied up, since I’ve never seen one anywhere else. It most resembles the small bottles that solutions for injection are packaged in. So many of the company’s catalog items are in such bottles (or even smaller ones) that it seems wrong somehow when you come across a huge (huge for Maybridge) hundred-gram bottle with their label on it.
Most of the suppliers have neutral-sounding names like those above. They could be chemical companies, vendors of kitchen cabinets, real estate trusts, who knows: Maybridge, Oakwood, Lancaster (now gone, and their blue labels with them). And some of them are unmistakably in the chemical supply business, but rather blandly named (Pharmacore, for example, or Chembridge). Some names are, perhaps, mistakes: the namers of Asinex, for example, seem to have been unaware that the closest English word is “asinine”, which means that they have to hope for people to pronounce that “s” as if it were a “z”. (I should mention that both Asinex and Chembridge indulge in one widely hated practice: putting no useful information on their tiny vials other than a catalog number or bar code – Bionet (Key) is a similar offender).
In this dull company, I’m always glad to see the weirdos. I miss the now-purchased-away British supplier called Avocado – green labels, naturally – and always wondered who named them and why. Tyger Scientific makes me wonder if there was an English major somewhere at their founding, fond of William Blake. And there’s one company that came into the industry under the glorious name of, I am not making this up, “Butt Park”, and many are the chemists they’ve made stand puzzled in front of the supply cabinet. (I'd provide a link, but I can't find a direct one, and Googling it can be a real minefield).
I refuse to consider that name a mistake. That's a feature, not a bug, and I wish that there were more competition in the category. I would proudly and purposely send business to, say, Batshit Chemical Supply, Inc., even if they back-ordered me every single time.
Category: Life in the Drug Labs
May 2, 2008
One recent drug industry setback I haven't noted around here - well, OK, to be more specific, it's a Merck setback, and boy must they be getting sick of those - is the FDA's "not approvable" letter for the Singulair/Claritin combination pill.
As the folks at the InVivoBlog note, it sure was hard, from one perspective, to see that one coming. After all, Claritin (loratadine) has an exemplary safety record and has been on the market for many years now, and Singulair (montelukast) has been selling in the billions of dollars as a stand-alone drug. No doubt many people have taken, and are taking, the two as separate pills. So you combine them and get a "not approvable": right.
The In Vivo people speculated that this might be a safety problem, since the agency has been mighty jumpy about that area recently, but Merck has now told them that safety and tolerability weren't raised in the FDA letter.
Well, what does that leave? Manufacturing? Hardly possible, given the way that these two drug substances are already being cranked out. That, as far as I can see, leaves good old efficacy. You could always argue that putting the two compounds into one pill improves patient compliance, etc., if the combination itself is useful in the first place. But in this case, I'd guess that the problem is that the combo has turned out to offer no benefit over either drug taken alone. Hard to make a case under those circumstances, it is.
And if you look into the history of the Singulair/Claritin idea, that appears to be just the problem. As the Wall Street Journal's Health Blog notes, the companies had already found no benefit for seasonal allergies, compared to either drug standing alone. Supposedly they were able to come up with some sort of nasal congestion data (what a joy that must be) that showed an edge this time, but yikes - how desperate do you have to be to take things to that point, after you've already seen no benefit in the main endpoints?
So why are Merck (and Schering-Plough) spending money on this kind of last-gasp line extension? Surely there are better places to burn cash. I've never been sympathetic to the argument that money spent on promotion is somehow stolen from R&D, but this sort of thing is another matter. Stupid R&D most definitely steals money from smarter R&D, and here's some of it that's made off with the swag.
Category: Drug Development
May 1, 2008
Drug Discovery Today has the first part of an article on the history of the molecular modeling field, this one covering about 1960 to 1990. It’s a for-the-record document, since as time goes on it’ll be increasingly hard to unscramble all the early approaches and players. I think this is true for almost any technology; the early years are tangled indeed.
As you would imagine, the work from the 1960s and 1970s has an otherworldly feel to it, considering the hardware that was available. And that brings up another thing common to the early years of new technologies: when you look back on them from their later years, you wonder how these people could possibly have even tried to do these things.
I mean, you read about, say, Richard Cramer establishing the computer-aided drug design program at Smith, Kline and French in nineteen-flipping-seventy-one, and on one level you feel like congratulating his group for their farsightedness. But mainly you just feel like saying “Oh, you poor people. I am so sorry.” Because from today's perspective, there is just no way that anyone could have done any meaningful molecular modeling for drug design in 1971. I mean, we have enough trouble doing it for a lot of projects in 2008.
Think about it: big ol’ IBM mainframe, with those tape drives that for many years were visual shorthand for Computer System but now look closer to steam engines and water wheels. Punch cards: riffling stacks of them, and whole mechanical devices with arrays of rods to make and troubleshoot stiff pieces of paper with holes in them. And the software – written in what, FORTRAN? If they were lucky. And written in a time when people were just starting to say, well, yes, I suppose that you could, in fact, represent attractive and repulsive molecular forces in terms that could be used by a computer program. . .hmm, let’s see about hydrogen bonds, then. . .
It gives a person the shudders. But that must be inevitable – you get the same feeling when you see an early TV set and wonder how anyone could have derived entertainment from a fuzzy four-inch-wide grey screen. Or see the earliest automobiles, which look to have been quite a bit more trouble than a horse. How do people persevere?
Well, for one thing, by knowing that they’re the first. Even if technology isn’t what you might dream of it being some day, you’re still the one out on the cutting edge, with what could be the best in the world as it is. They also do it by not being able to know just what the limits to their capabilities are, not having the benefit of decades of hindsight. The molecular modelers of the early 1970s did not, I’m sure, see themselves as tentatively exploring something that would probably be of no use for years to come. They must have thought that there was something good just waiting right there to be done with the technology they had (which was, as just mentioned, the best ever seen). They may well have been wrong about that, but who was to know until it was tried?
And all of this – the realizations that there’s something new in the world, that there are new things that can be done with it, and (later) that there’s more to it (both its possibilities and difficulties) than was first apparent – all of this comes on gradually. If it were to hit you all at once, you’d be paralyzed with indecision. But the gap in the trees turns into a trail, and then into a dirt path before you feel the gravel under your feet, speeding up before you realize that you’re driving down a huge highway that branches off to destinations you didn’t even know existed.
People are seeing their way through to some of those narrow footpaths right now, no doubt. With any luck, in another thirty years people will look back and pity them for what they didn’t and couldn’t know. But the people doing it today don’t feel worthy of pity at all – some of them probably feel as if they’re the luckiest people alive. . .
Category: Drug Industry History | In Silico | Who Discovers and Why