Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: email@example.com
There’s a rather embarrassing note leading off the latest issue of Angewandte Chemie. Two recent papers (2007, 2006) had reported the synthesis of some rather weird 12-membered rings, the diazaannulenes shown here. They made them from dinitrophenylpyridinium salts and primary amines, with the pyridine ring unraveling oddly along the way. Too oddly.
Professor Manfred Christl of Würzburg, who apparently knows his pyridinium chemistry pretty well, recognized this as an old way to make further pyridinium salts, not funky twelve-membered rings. He recounts how over the last couple of months he exchanged awkward e-mails with the two sets of authors, pointing out that they seem to have rediscovered a 100-year-old reaction, and have they really looked at their spectral data closely, eh? Both groups have admitted their mistake – the data match up wonderfully with the known pyridinium compounds, unfortunately, so there’s really no other way out – and retractions are appearing.
He raises some broader points, though: first, there’s the obvious problem that this whole thing should have been caught by better literature searching and analytical chemistry. These arresting structures deserved more than a quick NMR and LC/MS, and they deserved more than what appears to have been a not-very-thorough look through the prior art. There’s a bigger problem, though, which fans of the LaClair imbroglio will enjoy. Note the exasperated tone of the following, which comes across in a very German fashion:
“A further question refers to the reviewing of the above papers. Presumably, at least four referees were entrusted with this duty, two of Angewandte Chemie and two of Organic Letters. They have provided conclusive evidence for their lack of knowledge of heterocyclic chemistry. However, the referees are probably chosen by the editorial offices according to the specialization of the corresponding authors and, thus, have the same gaps in the knowledge as the authors. In consequence, if the authors present results remote of their main projects, extreme misjudgments are inevitable. . .”
So, once again, Angewandte Chemie's reputation is upended by sloppy refereeing and editing. This time, though, they run an article berating themselves. Progress, I'd say. . .
(Note: update on this story here, and why it might have happened, here).
Novartis must wonder what they did to deserve this one. A few years ago, it looked as if they ruled the potentially lucrative world of dipeptidyl peptidase-IV (DPP-IV) inhibitors for diabetes. (Note - name of enzyme corrected after brain hiccup - DBL). Novartis seemed to be the first big company to come up with good chemical matter in the area, and they published a whole string of papers while their lead compound went through the clinic.
Then came trouble. Merck turned out to have a big program of their own in the area, which in Merckian fashion they’d kept very quiet about, and they actually beat Novartis to the FDA. And then they beat them to market, because the agency had some questions about the Novartis compound. Those questions have done nothing but multiply. Now the problem appears to be liver tox, one of the last things the diabetic population needs. It’s looking very likely that Novartis’s compound may never get to the market in the US at all.
So here’s a question: if both compounds had made it to market, wouldn’t the people who tally up lists of “me-too” drugs have considered the first compound (from Merck) to be the original, and the Novartis one to be the copycat? After all, they target the same enzyme for the same disease in the same way. (I should mention that a DPP-IV inhibitor itself is just the sort of thing the industry is supposed to be turning out, a completely new way to treat a major and growing public health problem, but we'll pass over that for now).
But these compounds were developed more or less simultaneously, with the two companies racing each other to the market. It’s not like either company sat back and watched the big profits roll in, and said “I need to latch on to some of that – let’s make one of those, too.” The whole thing was done on a risk basis, because while the biochemical rationale behind DPP-IV inhibition makes sense, a lot of things make sense and still go nowhere. No one really knew how the drugs would perform, either in the clinic or in the marketplace.
And take a look at the problems that the Novartis compound has. Like so many other toxicology hits, these came out of the cloudless sky. Well, actually, it’s more accurate to say that the sky over the toxicologists is never cloudless, because you never know what’s going to happen. In this case, Novartis has taken an especially painful and expensive beating, since the drug had advanced so far before the problems began to make themselves clear.
I’d like to ask some of the critics of the industry what they think about this situation. Me-too drugs are a particular arguing point with many of these people, so here we go: does that term apply in this case? If not, then why not? Should companies go after the same target in the same way at the same time? If not, then why not? How do we deal with the fact that any compound can fail at any time, other than turning companies loose to compete with each other and take as many shots at a target as possible? Do you have a better solution – and if not, well, then, why not?
There’s an article in the latest Drug Discovery Today which takes off after the “Rule of Five” and its application to drug discovery. The author’s not saying anything that hasn’t been said before, though – first under the breath, then openly. But it bears repeating:
“The simplicity of these criteria to remove outlier molecules using software, made them very easy to implement. Thus, the Ro5 moved rapidly in the hierarchy of medicinal chemistry concepts from being a set of ‘alerting’ criteria in the minds of the medicinal chemists to a commandment engraved in the high altars of ‘do's’ and ‘don’ts’ of drug seekers. I am not a medical doctor nor am I a savvy drug-discoverer; I am just an apprentice. However, I suggest that ten years after the publication of the Ro5, it might be time for a collective reflection.
Currently, the Ro5 is used almost indiscriminately. I think that we should be very cautious about relying too heavily on these criteria, for two reasons. First, it is worth pointing out that there are examples of successful drugs (i.e. Lipitor™, Atorvastatin™) that are notable violators of the Ro5 and we and others should never underestimate the impact of the highly improbable event in our theories and preconceived notions. Second, it is well recognized in the drug discovery field that in spite of these magic rules, and the introduction of ingenious methods to discover new drugs, the number of new chemical entities reaching the market has remained constant or continued on a downward trend. One may ask: Where is the power of those magic rules? Are they helping us to focus on the right molecules? Or are they preventing us from discovering new opportunities? Do they represent something deep and profound about drug discovery? Or are they preventing us from a deeper understanding of the drug discovery variables?”
The problem is, this sort of article is coming along several years too late. I disagree with the word “indiscriminately”, for one thing. It’s actually my impression that Rule-of-Five dogmatism has been on the wane for a while now. I’d put the peak at about five to eight years ago, myself (anyone out there have the same experience?). Perhaps it’s the lack of any strongly noticeable increase in our success rates that’s calmed things down. Projects are still wiping out due to odd and unexpected pharmacokinetic problems, for example, where the more naïve (or hopeful) devotees of the rules might have looked for an improvement. (This would be a good place to note that Chris Lipinski himself never was as hard-core about his criteria as some of his followers, a pattern which is far from unknown).
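For readers who've never had the rules quoted at them, the criteria themselves are simple enough to sketch in a few lines. The thresholds below are the standard Lipinski ones; the function name and the atorvastatin property values are my own rough approximations, not anything official:

```python
def rule_of_five_violations(mol_weight, clogp, h_donors, h_acceptors):
    """Count how many of Lipinski's four criteria a compound violates."""
    violations = 0
    if mol_weight > 500:    # molecular weight over 500 Da
        violations += 1
    if clogp > 5:           # calculated logP over 5
        violations += 1
    if h_donors > 5:        # more than 5 hydrogen-bond donors (OH + NH)
        violations += 1
    if h_acceptors > 10:    # more than 10 hydrogen-bond acceptors (N + O)
        violations += 1
    return violations

# Atorvastatin, approximately: MW 559, clogP around 5, 4 donors, 7 acceptors
print(rule_of_five_violations(558.6, 5.0, 4, 7))  # prints 1 (the MW criterion)
```

That's the whole thing, which is both its appeal and its problem: four property cutoffs, trivially automated, and therefore trivially turned into a hard filter.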
So it’s clear that success can’t be ensured by just matching a few basic properties of drugs that have been successful in the past, not that this should be a surprise. People are always looking for the easy fix (who can blame them?). The Lipinski rules were a favorite among middle management, more than for the people at the bench, since they used measurable criteria to produce something else that could itself be measured. Nothing is dearer to a manager’s heart, and it’s too bad that the results haven’t been more exciting.
I liked better an analogy made later in the paper:
“I see the historical successes of our illustrious predecessors more like the discoveries of early sky watchers. They discovered the early stars and planets and through careful observations were able to trace their passages through the sky. Like them, we have discovered certain patterns in the firmament of drug discovery as they relate to various chemical entities with therapeutic properties, and characterized the molecules in the biological universe to which they relate. However, I would not go any further than that. In trying to understand the universe of drug discovery, I am not even ready to affirm whether we know with certainty if the system is geocentric (ligand at the center, as it would be suggested by medicinal chemists) or heliocentric (target in the center as proposed by biologist, macromolecular crystallographers or geneticists). Moreover, although we have a sense of what the forces that bring the two together are, robust calculations that can accurately predict how one relates to the other still elude us. We know there is a key parameter (i.e. Ki, their relative affinity) that connects this crucial pair but we cannot calculate it accurately. Consequently, the number of experimental observations (in vitro and in vivo) relating the two dominant poles of the drug-discovery universe is extensive and continues to grow in the existing databases (public and proprietary) at an exponential rate. All these measurements remind me of the careful observations made by Tycho Brahe (circa 1600) that were crucial for Kepler's insights.”
He’s right that in medicinal chemistry we’re still fundamentally an observational science. (That should have been obvious given how little math any of us need to know). We have broad theories, trends, rules of thumb – but none of it is enough to help us very much, and we’re constantly surprised by our data. That can be enjoyable, if you have the right personality type, but it sure isn’t restful, and a lot of the time it isn’t very profitable, either.
And as an amateur astronomer, I like the analogy, although it worries me a bit. Kepler (and Newton) did indeed break the impasse over the motion of the planets by explaining the available data through relatively simple (but still unexpected and non-obvious) mathematical theories. We’re not going to be so lucky, since the systems we’re studying are so much messier and subject to so many more influences. But there is room for some sense to be made out of what we’ve observed, more sense than we’ve made of it thus far, at any rate.
Understanding is not going to come down on us like a descent of holy fire, which must have been what the laws of gravity and planetary motion were like, but it won’t have to. I’m not expecting an airtight theoretical approach to predicting human blood levels or toxicity, not anytime soon. But considering that we lose amazing amounts of money because we can't predict that stuff at all, I think we're actually going to be pretty easy to impress.
I did something today that I haven’t done in several years: a vacuum distillation. That used to be a larger part of every chemist’s life, but advances in chromatography have eaten into a lot of the older techniques for purifying compounds. Recrystallization is another obvious example of a lost art, one that I’ve steadily heard characterized as such for the last twenty years. Well back before my time, people purified their liquids through distillation and their solids by recrystallizing them, and that was that.
Both of those can still be the best way to go, depending on your compounds. When you come across these methods in the older literature, you always have to ask yourself if you should stick with them, or if a chromatography would do the job more easily. Today, though, it was a modern procedure I was following, so distillation it was.
For the non-chemists in the audience, here's how you do it. You rig a glass apparatus onto the top of your round-bottom flask of gunk - there's one at the left. This "still head" has a short neck coming up, a bend that accommodates a thermometer, then a cold-water circulating condenser built in right before a tube to deliver the drops of distilled product. Along that region there's another fitting to hook the vacuum pump up.
Pulling a vacuum on the system lowers the boiling point of the liquids inside it - one of the reasons you have to adjust recipes at high altitude, actually. (If you lower the pressure enough, you can get water to boil at room temperature). Without that lowering, many compounds would have to be heated up so much to distill them that they'd start to decompose. Heating things to that point isn't much fun, in any case. Far better to pump things down and take them over at a more reasonable temperature.
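The size of the effect can be estimated from the Clausius-Clapeyron relation. Here's a rough sketch for water, assuming (as the textbook derivation does) that the heat of vaporization stays constant over the range, which is only approximately true:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def boiling_point_at(p_new, p_ref=101325.0, t_ref=373.15, dh_vap=40700.0):
    """Estimate the boiling point (K) at pressure p_new (Pa), given a
    reference boiling point t_ref at p_ref and a constant heat of
    vaporization dh_vap (J/mol). Defaults are for water."""
    inv_t = 1.0 / t_ref - (R / dh_vap) * math.log(p_new / p_ref)
    return 1.0 / inv_t

# Water under a 20 torr (~2666 Pa) vacuum boils near room temperature:
print(round(boiling_point_at(2666.4) - 273.15, 1))  # prints 19.0 (degrees C)
```

A decent rotary-vane pump will take you well below 20 torr, which is why even fairly high-boiling compounds can be coaxed over at bench-friendly pot temperatures.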
The usual technique is to pump things down first, just to get any bumping and bubbling out of the way as leftover low-boiling solvents and dissolved gases clear out. Then you gradually increase the temperature on the distillation pot until things start to boil. You can see the condensation form on the inside of the still head as things get going, then drops start to condense and drip off the end of the thermometer back into the pot. A bit more heating and things make it over to the condenser, roll down the collection tube, and into the receiver flask.
Of course, you may have more than one thing in that pot. The stuff that's boiling out will eventually all come over, and as you heat things up some more the next higher-boiling component will then start to boil and the process repeats. That's why they make adapters that can fit several receiver flasks - these things will turn to accommodate different fractions, one after the other. The common lab name for these is a "cow" (Germans call them "spiders").
When you're finished, you generally have one or more flasks full of clear liquid on the far end of things, and the distillation pot generally looks just awful. All the high-boiling impurities have concentrated, and the resulting mix has been thoroughly cooked. It's a dramatic illustration of what you've accomplished - dark brown sludge separated out from pure product. Distillation makes you feel as if you've earned your lunch break.
How is 'what's made' influenced by the synthetic knowledge of the individual med chemist? I would guess that with all the pressure on targets that you've written about, there must be some level of sub-conscious selection based on ease of synthesis, so the difficult structures either never get made or get made later. . .(but) difficulty is a subjective term. The better the chemist the more molecules fall into the easy category. . .
. . .One thing I've noticed is the explosion in bi-aryls since the Suzuki and related chemistry came along. Is this due to a sudden realisation that bi-aryls could be good molecules or is it due to the fact that Suzuki chemistry is easy?
I've wondered about this one myself, as have many other chemists I've known. It's true that as synthetic chemists we tend to go for the low-hanging fruit; I don't think that anyone could deny it. And that's largely due to pressure to produce results, although I wouldn't rule out laziness, either (never rule out laziness).
But you can often get pretty interesting things to happen by doing simple reactions and small changes. Think about the number of times you've seen activities totally altered by one methyl group, or the metabolic problems that have been fixed by adding a para-fluoro. We don't feel as much need to move into new territory as we might.
As for variation between individual chemists, that's why you want to hire a set of people with diverse backgrounds. (And no, I don't mean HR-style diversity, I mean chemical and scientific diversity). The literature is big enough and varied enough so that people can have a lot of experience and still not overlap with their colleagues much in their favorite reactions and structures. People will still go for the easy stuff, but with any luck there will be enough different definitions of "easy stuff" to keep people from piling up too much.
But I think that this factor isn't quite as big as it used to be, what with the advent of modern literature searching. People can pull out all sorts of reactions from the literature and give 'em a try - it's hard to remember that it used to be quite a bit harder to do that. So what do my industrial readers think - do we just make the easy stuff? If we do, is that a problem? How much is "easy" a function of who's doing the chemistry? And has that changed over time?
I've had several requests, so here we go. This is a slightly modified version of Craig Claiborne's recipe in the New York Times Cookbook. He was a Southerner himself, Claiborne, so he knew his pecan pie:
Melt 2 squares (2 oz.) baking chocolate with 3 tablespoons butter in a microwave or double boiler. Combine 1 cup corn syrup and 3/4 cup sugar in a saucepan and bring to a boil for 2 minutes, then mix the melted chocolate and butter into it. Meanwhile, in a large bowl, beat three eggs, then slowly add the chocolate mixture to them, stirring vigorously (you don't want to scramble 'em with the hot chocolate goop).
Add one teaspoon of vanilla, and mix in about 1 1/2 cups of broken-up pecans. You can push that to nearly two cups and still get the whole mixture into a deep-dish pie shell, and I recommend going heavy on the nuts, since the pecan/goop ratio is one thing that distinguishes a home-made pie. Bake for about 45 minutes at 375, and let cool completely before you attack it. Note that this thing has an extremely high energy density - it's not shock-sensitive or anything, but make the slices fairly small.
Well, the drug labs are emptying out today, like many other workplaces around the US. My American readers will be celebrating Thanksgiving tomorrow (and Friday as well, in most cases), and so will I. This will be our first since the new job and the move, and I have a turkey to thaw and a chocolate pecan pie to make. Tomorrow I'll be in charge of roasting the bird, since my wife has washed her hands of the oven in this house until we can get it replaced. I've had my incidents with it too, but I figure that if I can make 100 grams of alkynylaluminum reagent without setting anything on fire (a near thing, though, as I recall), then a sixteen-pound non-pyrophoric turkey should be no problem.
Last Thanksgiving I was doing roughly the same things, but in the knowledge that in two months I'd be out of a job. I prefer this year, hostile kitchen appliances and all. I hope that everyone reading this and celebrating the holiday has a good time at it, and for my readers outside the US, the best to all of you, too. You don't have to eat turkey to be glad for what you have (hey, in some countries serving a turkey would actually make that more difficult). I'll see everyone on Monday, at which point science will get up off the couch and start marching on again.
I had a hard drive failure the other day, which naturally got me to thinking about backing up data, and about the times I’ve been more paranoid about it. I wrote my PhD dissertation back in those far-off days (1988) when you could put Mac versions of Word and Chem-Draw on one 3.5-inch disk (yes, that was possible, and I still have the disk to prove it). But I went to the disk-swapping trouble of putting my dissertation-in-progress on a separate floppy.
So there I was, with a couple of weeks’ worth of dissertation draft on my floppy disk, when one fine day I insert the thing into the slot, and. . .it can’t be read. Hrm. I try other machines. I try them all. None of them can read the disk, under any conditions. It slowly dawns on me that my two weeks of work have evaporated, and a little later it dawns on me that things could have been much, much worse. I converted to the Backup Religion.
Grad students writing up tend to get a bit paranoid under the best conditions. Once I made my backup copy, I realized that I might run into a problem with the floppy drive – what if it subtly ruined my disk? Then one floppy would apparently be bad, so I’d feed the next one in, and the evil drive would chew that one up too. Hmm – better have three copies. I decided to keep one in my lab desk, one at home, and one in my car. But then I started thinking of the unlikely – but still possible! – combinations of drive failures, fires, accidents, etc. that could still wipe me out. In the end, I had, I think, five separate copies of the dissertation in progress: one back at my apartment, one in the car, one in the lab desk, one back in a drawer by my hood, and one in my coat. I never needed any of the backups at all.
But it was a comfort to know that they were there, and mentally I needed all the backup capacity I could get in those days. Late one night I was awakened by a host of fire trucks roaring down the street. I lived only a quarter-mile from the chemistry building, and I found myself wondering, there at three in the morning, if that’s where they were headed. Ah, but I had my latest dissertation disks. But. . .I also had all the hard copies of my NMRs there in my lab. Aargh. (I should note that digital backups of NMR data were quite rare back in that era, at least in much of academia). What if the building caught on fire?
Worse, what if I’d been the cause? Had I really turned off that heating mantle when I left at midnight? Or did I just think that I had? Wasn’t there a bottle of hexane in my hood? (I did mention that this was three in the morning, right? Why the brain gets into these loops at that hour is a mystery, because that kind of thinking is normally alien to me, as my wife, to whom it’s second nature, will tell you). So I sat there, wondering if my lab and my data were at that moment going up in flames, until I finally rolled out of bed and called the lab. Ring. Ring. Ring. “Hello?” I recognized the voice – it was Randy, down the hall – but I suddenly realized that I didn’t know what to say to him. “So the lab’s not on fire?” didn’t seem like a good conversation starter, so I just hung up, and went back to sleep.
The next day I made my late-morning entrance into the lab, and ran into Randy. “How late were you here last night?” I asked him. “Oh, really late”, he said, and looked at me. “How did you know?”, he asked, and I looked embarrassed. “Hold it,” he said, “that was you, wasn’t it? You must have heard all those fire trucks going past! Thought the lab was on fire, didn’t you?” All I could do was turn red, because he had me.
Back in 2005, I worried about Acomplia (rimonabant), a new drug heading to market with a completely new central nervous system mechanism. CNS makes me nervous. I used to work in the area, and I have a healthy respect for how little we know about it. So when you come in with something new, you have to be worried about what's going to happen, and whether your clinical trials are going to be enough to tell you about it.
And sure enough, the long, long delay at the FDA for the drug, which was (in theory) supposed to be approved in the first half of 2006, turned out to hinge on CNS side effects, among them "suicidal ideation". Now a meta-analysis has come out in The Lancet which suggests that patients taking the drug in Europe (one of the few places you can take it) have a much higher risk of depression.
You have to be careful with meta-analyses. But this one's noteworthy because, as the authors point out, depressed mood was an exclusionary factor for the studies concerned. Yet even after winnowing out those patients, the study patients seem to have been 2.5 times as likely to drop out of the trials due to depression as compared to the placebo groups. The studies totaled 2503 patients on the drug, and 1602 in the placebo groups. Depression showed up in 74 and 22 cases in those groups, respectively, which does seem to be a real effect, especially when you start by excluding anyone who seems depressed.
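The crude version of that arithmetic takes only a couple of lines. (A real meta-analysis weights the individual trials and puts confidence intervals on the result; this sketch, using the raw totals quoted above, does neither.)

```python
def risk_ratio(events_a, n_a, events_b, n_b):
    """Crude risk ratio: the event rate in group A divided by group B."""
    return (events_a / n_a) / (events_b / n_b)

# Depression cases: 74 of 2503 on rimonabant vs. 22 of 1602 on placebo
rr = risk_ratio(74, 2503, 22, 1602)
print(round(rr, 2))  # prints 2.15 - drug patients over twice as likely
```

So the roughly 2.5-fold figure for trial dropouts and the raw depression-case ratio of about 2.15 are telling the same story, and both come out of a population pre-screened to exclude the depressed.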
Compare that with the Avandia meta-analysis that has made so much news (and come close to sinking the drug completely). Out of 14,000 patients, that one had 86 cardiac events in the treatment groups and 72 in the controls, and this in a population with underlying cardiovascular trouble. Depression is not as serious an outcome as a heart attack, to be sure, but it's nothing you'd sign up for, either. Sanofi-Aventis should stop being upset that they haven't gotten the drug on the market here, and start being glad that the lawyers here didn't get a chance to strip a few billion dollars off of them.
Do you have what it takes to run a med-chem project? Take this simple test and find out:
1. You have a compound with a suspicious reading in a hERG assay, indicating possible cardiovascular trouble later on. Do you:
A) Brace yourself to scale up compound for dog cardiovascular tox (and brace the budget for paying for it), wondering if the animal group has gone through with that threat to switch to 60-kilo Irish wolfhounds.
B) Brace yourself to start your SAR over, most of the way back from scratch, because your compound doesn’t fit anyone’s hERG model (what are the odds that you could miss them all?) and you have no idea of what to fix first, or
C) Make a pest of yourself by pointing out all the historical compounds, now on the market and not causing trouble, that would have been dumped by running this same assay and taking it this seriously.
2. Your lead compound has come back positive in an Ames test. A re-test was negative. Do you:
A) Brace yourself to fight for your compound’s right to live, even though it will always have the Mark of the Beast on it for having failed that first Ames.
B) Brace yourself to start your SAR over, most of the way back from scratch, because there’s no such thing as an Ames-positive structural model anyway, and you have no idea of what to fix first (and no conviction that anything needs to be fixed at all, except that pesky Mark of the Beast business), or
C) Make a pest of yourself by pointing out that a good percentage of the things on sale at the supermarket wouldn’t pass an Ames test either, especially at your tox doses.
3. You have a compound that you need intravenous blood levels on, but it doesn’t want to dissolve in any of those namby-pamby iv vehicles. Do you:
A) Brace yourself for running the thing in the closest thing that looks like it might work, at the lowest concentration, even though it might not give you any data you can use (hey, at least you can say that you tried).
B) Brace yourself to start your SAR over, adding morpholines, methoxyethyls, all those solubilizing groups that make the structure say “I Used to be a Brick, And I Probably Still Am!”, even though you can’t think of a place to put them without killing your activity, or
C) Make a pest of yourself by arguing for some weirdo vehicle that you pulled out of the literature (Dr. Pepper, hair gel, balsamic vinegar, etc.), which your PK people have never heard of and would rather shave their heads than take the time to validate.
After extolling the joys of finding things out in the post directly below, I couldn't resist linking to this story for those who haven't seen it. Now, this guy is really out there on the edge, and I wish him well with his theory (available here on arXiv for the mathematically inclined). What I especially like is that he's ready to make some testable predictions.
You know, when Feynman met Dirac, the first thing he mentioned to him was how wonderful it must have been to discover the equation that bears his name. If Garrett Lisi's theory can predict particles out of thin air the way Dirac called the positron, he'll be remembered the same way. Good luck to him, and to those like him.
My lab and I have plans to start experimenting with several compound classes that we’ve never handled before. In fact, for some of these, no one’s handled them before. Some of these are not only novel as in patentable, for which fairly small changes can suffice, but novel as in what-the-heck-is-that. I couldn’t be happier.
Honestly, I have no idea of what I’d do with a job where I knew what was going to happen next. Years of science have ruined me for a lot of other occupations. I was putting some of these up on the board the other day, and mentioning what I’d like to try. “Do you know if you can do that?” someone asked, and I answered that no, I didn’t, and as far as I could tell, no one else did, either. I can draw out a bunch of reasonable-looking reactions, but the structures themselves may well have other ideas.
The first time I realized that I was in new territory, although to a much lesser degree, was back in my first year of graduate school. My first few reactions generated things that were already known in the group, naturally, and then I made some model systems that were already known in the literature. But pretty soon I remember making a compound that I realized just flat-out wasn’t in Chemical Abstracts, because no one had ever had the need to make it before. (As far as I know, no one’s had any need to make the stuff again, either – if someone has, I hope they got more use out of it than I did!) But there it was, in a flask: something that had never existed before.
My list of such compounds is now rather lengthy. In the drug industry, naturally, we spend just about all our time making compounds that haven’t existed before. (If they’ve been exemplified somewhere, you can forget about a patent on the chemical matter itself). Our livelihoods depend on cranking out thousands upon thousands of compounds that no one else has made. I haven’t seen the figures, but I’d guess that a large fraction of the new small organic molecules that get registered every year in Chemical Abstracts are from pharma. Those patents with the three-hundred-page experimental sections do start to add up.
This latest stuff, though, goes a few steps beyond that, to whole compound classes that no one’s touched yet. I may well find that there’s a whole set of very solid reasons why these things haven’t appeared in the literature – perhaps these reasonable reactions of mine have been tried in recent years, but found only to produce more of that gooey dark stuff in the bottom of the flask. We shall see. I’ve certainly made my share of that material.
But I doubt that all of them are in that category. So with any luck, soon I’ll be making something no one’s ever made, and finding things out about it that no one’s ever discovered. And as I said, I couldn’t be happier about that.
At many companies, this is performance review season. As I’ve written about before, this is a particularly hard thing to do right in a research organization. It’s so hard, actually, that never once have I heard of one where the scientists were satisfied with how people were being rated. I think it’s probably impossible for any organization, if you want to know the truth. It’s like trying to design a perfect voting system. No matter what happens, some people are going to feel, perhaps even with justification, as if they’ve been had.
But evaluating scientists is especially thankless. If you have a lot of really good ones, it’s a little like filling out yearly reports on poets. Hmmm. . . Mr. Larkin. I see you haven’t published anything so far this year, and still no collection since The Whitsun Weddings. . .wasn’t that on your goals statement for this period? I don’t really see how we can give you an “exceeds” rating given all that. And Mr. Lowell, it’s true that you produced a great number of sonnets during this review period, but I can’t help but believe that these were less of an effort than some of the work you’ve done for us before, and they certainly had less of an impact on our operations. No, I think that “meets expectations” is probably the correct category this year. . .And as for you, Mr. Housman, we need to ask ourselves just how long it has been since A Shropshire Lad. . .
Rating research productivity sends you into the same thickets. If someone hammered out a long list of analogs, but used pretty much the same chemistry to make each of them, how do you rate that compared to someone who had to hand-forge everything (and produced a correspondingly smaller pile)? How much should number of compounds count for, anyway – how about impact? What if the big bunch of compounds didn’t do much for the project, but one of the tough ones opened up a whole new area? (Or what if it was the reverse?) But isn’t that partly luck – what if the one that hit was totally unexpected, even by the person who made it? What if it became a great compound for reasons totally out of their hands?
And then you get to the people who aren’t necessarily cranking out analogs, the lab heads and such. They’re supposed to be leading projects, managing direct reports, coming up with ideas. How’d they do? How can you tell? Can you reliably distinguish a project that got lucky, or had a better starting point, from a well-managed one that has nonetheless been wandering around in the wilderness? Put your best people on, say, a protein-DNA interaction target, and pretty soon they won’t look so good, either.
No, even with the best rating system in the world, it would be hard to fill out the reports on drug discovery projects. And you can take it as given that no one is using the best rating system in the world. (Some may in fact be experimenting with the worst). The yearly frequency of ratings is one problem – anything tied to the calendar is a potential problem, since the compounds, the cells, and the rats never know what month it is. This has been a problem for a long, long time. I once quoted from Rayleigh’s biography of physicist J. J. Thomson. You wouldn’t want to run a whole department on the following system, but you don’t want to ignore the man’s point, either:
"If you pay a man a salary for doing research, he and you will want to have something to point to at the end of the year to show that the money has not been wasted. In promising work of the highest class, however, results do not come in this regular fashion., in fact years may pass without any tangible results being obtained, and the position of the paid worker would be very embarrassing and he would naturally take to work on a lower, or at any rate, different plane where he could be sure of getting year by year tangible results which would justify his salary. The position is this: you want this kind of research, but if you pay a man to do it, it will drive him to research of a different kind. The only thing to do is to pay him for doing something else and give him enough leisure to do research for the love of it."
And the insistence of many HR departments that the ratings fall on a normal distribution is another problem. Sure, if you hired a few thousand random people and turned them loose on the work, you could expect some sort of bell curve, assuming that you’ve solved that problem of fairly evaluating them. But you didn’t hire your people at random, did you? Everyone’s supposed to be at some level of competence right from the start. Some of those performance distribution curves are reflecting the randomness of research or the defects in rating it, rather than any underlying truths about performance.
There’s a saying that you see attributed to all sorts of old humorists, which goes something like “It’s not the things you don’t know that get you, it’s the things you know that just aren’t so”. (I always put it down to Kin Hubbard, but the best case can probably be made for Josh Billings). What you can’t argue about is the truth of the thing, and that truth gets demonstrated at all phases of a drug discovery project.
You see it all the time in the med-chem labs, that’s for sure. After a project has been going a while, a lot of people have had a crack at the SAR, and have made a lot of different compounds. Everyone has put their own favorite groups on, and things have been tried out on all the reasonably accessible parts of the structure. That’s when the myth-making starts – I’ve never been on a project where it didn’t.
“Trifluoromethyl in the 4-position’s a killer – I wouldn’t put anything electron-withdrawing there if I were you”. “You need the R stereochemistry at the benzylic site; those always work better than the S”. “Somebody tried to make the meta-substituted compound – it never worked.” “All the methyl compounds get cleared faster than the fluoros”. This sort of thing will sound very familiar indeed to my drug-discovery readers. Anyone who joins a project that’s been going for a few months or more will get all the folk wisdom of this sort that they can stand.
But how much of it is real? In my experience, about half, and sometimes less. Many of these rules of thumb are born from only one or two examples, often as not from the earlier days of the project when other parts of the structure were different. It’s a rare project where you can mix and match with impunity, which means that these rules often outlive their validity. You really have to go back and check up on these things. And sometimes, disturbingly, there’s no foundation at all. This is a real danger in a long-running project with a lot of manpower changes and a long list of compounds. Once in a while you see everyone convinced of something that has no empirical support at all – it’s just something that “everyone knows”. Making compounds to put such superstitions to the test should be actively encouraged.
But depending on the culture of your company, or just your project team, that’s not always easy. Some project leaders ask for (or at least tolerate) a certain percentage of let’s-find-out compounds, which I think is healthy. But in other shops you have to brave well-meant ridicule or outright hostility when you send in analogs that challenge the accepted wisdom. As usual, it’s a question of the odds. If you make nothing but contrarian compounds, you’ll have a lower hit rate than the folks who are following up on the current leads. But if all you do is follow up on the current leads, never looking back or to either side, you’ll miss out on a lot of potentially useful things. Moderation in all things, the man said.
As you root through genomic sequences - and there are more and more of them to root through these days - you come across some stretches of DNA that hardly seem to vary at all. The hard-core "ultraconserved" parts, first identified in 2004, are absolutely identical between mice, rats, and humans. Our last common ancestor was rather a long time ago (I know, I know - everyone works with some people who seem to be exceptions, but bear with me), so these things are rather well-preserved.
Even important enzyme sequences vary a bit among the three species, so what could these pristine stretches (some of which are hundreds of base pairs long) be used for? The assumption, naturally, has been that whatever it is, it must be mighty important, but if we're going to be scientists, we can't just go around assuming that what we think must be right. A team at Lawrence Berkeley and the DOE put things to the test recently by identifying four of the ultraconserved elements that all seem to be located next to critical genes - and deleting them.
The knockout mice turned out to do something very surprising indeed. They were born normally, but then they grew up normally. When they reached adulthood, though, they were completely normal. Exhaustive biochemical and behavioral tests finally uncovered the truth: they're basically indistinguishable from the wild type. Hey, I told you it was surprising. This must have been the last thing that the researchers expected.
Reaction to these results has been a series of raised eyebrows and furrowed foreheads. Deleting any of the known genes near the ultraconserved sequences confirms that they, anyway, are as important as they're billed to be. And these genes show the usual level of difference that you see among the three species. So what's this unchanged, untouchable, but apparently disposable stuff in there with them?
No one knows. And it's a real puzzle, the answer to which is going to be tangled up with a lot of our basic ideas about genes and evolution. To a good first approximation, it's hard to see how (or why) something like this should be going on. So what, exactly, are we missing? Something important? And if so, what else have we missed, too?
I was reminded yesterday that today is the one-year anniversary of the day that we found out that the Wonder Drug Factory was being closed down. I remember that presentation rather well. I was one of the more optimistic ones, thinking until the last that we had about a 50/50 chance of the ax, but by the time the meeting began everyone had heard what was really coming.
Unpleasant, that was, and it did extend a cloud over the following holiday season. The job-searching period that followed wasn't anything I'm looking to relive, either, although my severance pay kept it from being anywhere near as bad as it could have been. And in the end, things worked out well. I thought they would, but as my wife pointed out to me at the time, I generally think that things will work out well, so that isn't as good an indicator as it might otherwise be.
But the whole thing was a useful reminder: no one's sitting back in a comfortable chair in this industry. You're riding a wild animal, instead. Working at a smaller company makes it easier to remember that, as many people here around the Boston/Cambridge area know, but there's no drug company so large or so profitable that it can make any guarantees to anyone. Patents expire, companies get taken over, drugs drop out of clinical trials or get pulled off the market.
But on the flip side, discoveries get made. Things make it through trials even though no one thought they might. New ideas get tried out, and given how little we know, just about anything has a chance of improving our lot in research. That's the thing about science: we don't have to be stuck where we are; we can invent doors and walk out of them into something new.
I just came across this article, provocatively titled "Dumber in English". What the author, Stefan Klein, really means is "Dumber In Your Second Language", and he's almost certainly right about that.
I know that when I was doing my post-doc in Germany, I was significantly less nimble in German. I didn't have much practice in the language, and that meant a lot of mental overhead while using it. I never became truly comfortable with it, although I did get better. The throbbing headaches stopped after a few weeks, for example, which was certainly a visible and welcome sign of improvement. After a while, I started to dream in the language (and once in a great while I still do, with progressively less impressive fluency). I knew that I was really learning the stuff when I dropped a piece of Apfelkuchen into a mud puddle, and reflexively swore in German. (Not to fear, the cake was in a paper bag, and was recoverable with quick action).
Scientifically, I was working under a handicap, and I knew it. My secret weapon, though, was the way the chemical literature was (and is) largely written in English. But this is a particularly painful thing for Germans, since their language was once on top of the heap in chemistry, physics, and several other sciences as well. Reading Klein's description of a recent conference in his native country, you can feel it:
". . .All the speakers – six Germans, plus three from the United States and one from Great Britain – were outstanding. And they all spoke either English or, in the case of a German speaker, now and then something similar. Unusual word-choices and serpentine sentences can make a speech seem more brilliant than it actually is.
But who in the audience spoke English? No one. And even the four foreign guest speakers could easily have understood a lecture in German, because simultaneous translation was available over headsets that were readily on hand. As someone from the sponsoring foundation told me, of course it would be better if the local guests would simply speak German. This would increase the public resonance. But the professors had another idea. Their argument: People only take a conference seriously when English is the official language. . ."
He brings up the historical practice of scholarly Latin, and how this dissolved in the 16th and 17th centuries as thinkers began to write in the vernacular. (This, though, actually hindered the flow of information, as far as I can see - a lingua franca isn't such a bad thing). He also worries that science will come to be even more separated from the general run of the population in non-English speaking countries, but I'm not so sure. Most native English speakers don't have much of a connection with the subject, despite every linguistic advantage. There's also the problem of whether some languages will cease to develop their scientific vocabularies, preferring the English terms out of convenience. As far as I can tell, this is already happening - mind you, English borrows terms from the other languages as well, although not to the same extent.
Klein also brings out some examples of concepts that he feels come across better in their original German than in translation, but here I'm not so convinced. Einstein's complaint about "spukhafte Fernwirkung", to pick one, is generally rendered into English as "spooky action at a distance", which seems to me to get the concept across very well, as opposed to Klein's clunky "long distance ghostly effect". There are definitely things that don't translate well from German to English (and across any other language pair you can name), but this isn't one of them.
I don't see anything stopping the rise and dominance of English in the sciences (and to be sure, neither does Klein). I realize that I write from the perspective of a native English speaker, and having had to live in another tongue, I can sympathize with those who have to come to grips with the language. (Especially our ridiculous spelling, although I'll vote for that over German grammar any day of the week). To my mind, the advantages of being able to speak the same language, however roughly, outweigh the problems of a scientific tower of Babel.
OK, now that we’ve thought over the Hollywood analogy to drug discovery, what about other industries? And if none of them fit, what is it about the pharmaceutical world that makes us so different?
Wildcatting for oil has come up in the comments, and that’s a pretty good one. The ratio of dry holes to gushers is probably pretty similar, and using geology to figure out where to drill isn’t that much different than trying to figure out what screening hit to start a new drug program with. The lead time between discovering something and making money off of it (and the amount that has to be spent first) also lines up pretty closely.
One difference, though, is that all oil wells yield the same thing (oil!), while drug discovery comes up with all sorts of things. The variety of our products can make it hard to do good comparisons. We can find exactly what we’re looking for, sometimes, and still lose our shirts because no one turned out to want it (Exubera!) or because the competition got there first. By contrast, everyone wants oil. That also means that the competition is much more direct in the petroleum business than across pharma. Light sweet crude, once it’s on the tanker, might as well be from anywhere, and will trade wherever you can dock and pump.
Oil also goes for fluctuating prices, to be sure, which isn’t something that we worry about day to day over here. Our prices follow a more discontinuous model – as high as we can make them during the lifetime of the patent, and then down to a mere fraction once it expires. Patents are the very definition of wasting assets, and that’s another difference that makes many of these analogies break down. Not as many other industries have big ticking James-Bond-villain-style clocks stuck to the sides of their moneymaking products, counting down the days until they lose most of their value. (Fashion and food are two that I can think of, and cars to some extent).
Finally, we have the regulatory aspect, and that really sinks a lot of industry-to-industry analogies, as many people pointed out in the comments to the Andy Grove post. Intel does not have to submit its new designs and its test data to the Federal Chip Administration for approval, and its chips, if they behave in unexpected ways, are still unlikely to directly sicken or kill their users. The closest analogs I can think of are the aircraft and auto industries, particularly the former, since trouble with FAA certification has wiped out many new plane designs and sometimes the associated companies as well.
So, imagine drilling for oil. . .but instead of oil, you’re looking for something a bit different each time you drill, often something that no one’s ever looked for before. And if you manage to find it, you have to make sure, as much as you can, that it doesn’t harm or even kill your customers, because you never know, and satisfy a very hard-edged government agency of that before you can go to market. And after a set number of years, you don’t own it any more.
So, if we’re not going to learn from the chip-making industries, who should we be learning from? That question came up in the comments to the Andy Grove polemic, and it’s worth thinking about. I’ve wondered in the past about which industry is the closest to pharmaceuticals in its risks and payoffs, and I think I have a candidate. You might not like it, though: it’s Hollywood.
Think it through. The match isn’t perfect, but it’s a lot better fit than the semiconductor industry. The movie business, just like the drug industry, incurs most of its costs in the R&D and marketing areas - production costs are comparatively minimal. (Piracy, naturally, is a problem under these conditions). Sequels to past successes are a somewhat lower-risk way to make money, but those aren't sure things, either.
And for both groups of companies, figuring out what will be a hit is extremely hard, sometimes next to impossible (remember screenwriter William Goldman's maxim about Hollywood: "Nobody knows anything"). Companies try to live from blockbuster to blockbuster, banking enough money to find the next one.
The differences? Well, there are several, with the advantages mostly going to Hollywood. There's regulatory pressure, for one thing. The entry barrier to getting a movie distributed is a lot lower than getting a drug past the FDA. That reflects the relative differences between entertainment and medical care - the latter is clearly going to get a lot more serious scrutiny than the former. Another difference is that movies can continue making money for a much, much longer time than drugs can. Copyright just keeps on getting extended - roughly every time the early Disney characters come close to entering the public domain, by some odd coincidence - but no one's talking about similarly lengthening patent terms, are they? And movies continue on in other money-making forms after their theatrical run (DVDs and the like). For their part, drugs go generic, and while there's still plenty of money to be made, it's not as much as during their patent lifetimes, and not much of it is made by the original company.
On the other hand, the studios have probably managed to target just about every possible need of their audience at one time or another over the years, whereas we in the drug business have a lot of unmet medical needs waiting for us to do something about them. And our knowledge base (what to target, why, and how) is increasing with time, albeit slowly and jerkily, while the movie industry doesn't look to become a science any time soon.
The single biggest breakdown in the analogy is the salaries paid to the top stars, and their role in making a movie popular. I can't think of a clear correlate in the drug business. Even so, are there some lessons we might be able to learn from those guys? The way different studios have been set up, perhaps, or how they work out portfolios of releases or handle different sorts of production deals? Worth thinking about. . .
Update: In a clear great-minds-think-alike situation, this exact analogy was covered here earlier this year. And for a crack at the same analogy from 2005, check out The Stalwart here, who got the idea from James Surowiecki in the New Yorker.
So I see that Andy Grove, ex-Intel, is telling everyone that the drug industry could use some of that Moore's Law magic. I've noticed that people who spend a lot of time in the computer business often have an. . .interesting perspective on what constitutes progress in other fields, and we might as well appoint Grove the spokesman for their worldview:
Q: In what way does the semiconductor industry offer lessons to pharma?
A: I picked the semiconductor industry because it's the one I know; I spent 40 years in it, during which it became the foundation for all of electronics. It has done a bunch of unbelievable things, powering computers of increasing power and speed. But in the treatment of Parkinson's, we have gone from levodopa to levodopa. ALS [Lou Gehrig's disease] has no good treatment; Alzheimer's has none.
To me, the first sentence of that answer is the key one. As for the rest of it, hey, it's all true. Perhaps one explanation for the difference between the two fields is that they're driven by fundamentally different processes? Nah, that can't be right:
Q: Why is the speed of progress so different in semiconductor research and drug development?
A: The fundamental tenet that drives us all in the semiconductor industry is a deeply felt conviction that what matters is time to market, or time to money. But you never hear an executive from a pharmaceutical company say, "Before the end of the year I'm going to have xyz drug," the way Steve Jobs said the iPhone would be out on schedule. The heart of every high-tech executive has been, get the product into customers' hands and ramp up production. That drive is just not present in pharma; the drive to get sufficient understanding and go for it is missing.
Well. Where to begin? Let's start with a minor fact, and work our way up. I've been in this industry for eighteen years, and I cannot count the number of year-end goals I've had to deal with. Number of new targets identified, number of new projects started, number of compounds recommended for development, number of compounds progressed to Phase II, number taken to the FDA. It never ends. If Andy Grove hasn't heard a pharma executive talk about all the wonderful things that are going to be done by a given timeline, he needs to listen harder.
But here's the rough part: although drug company people talk like this, they're full of manure when they do. These year-end goals, in my experience, do very little good and in some cases do a fair amount of harm. I'll bet some of my readers have sat in a few meetings - I sure have - and looked up at the screen thinking "Why on earth are we recommending this drug to go on?", only to have the answer be "Because it's early November". More idiotic things may get done in the name of meeting year-end numerical goals than for any other reason in this industry, so thanks, but I'll ignore the recommendation to do them some more, good and hard this time.
Mr. Grove, here's the short form: medical research is different than semiconductor research. It's harder. Ever seen one of those huge blow-ups of a chip's architecture? It's awe-inspiring, the amount of detail that's crammed into such a small space. And guess what - it's nothing, it's the instructions on the back of a shampoo bottle compared to the complexity of a living system.
That's partly because we didn't build them. Making the things from the ground up is a real advantage when it comes to understanding them, but we started studying life after it had a few billion years head start. What's more, Intel chips are (presumably) actively designed to be comprehensible and efficient, whereas living systems - sorry, Intelligent Design people - have been glued together by relentless random tinkering. Mr. Grove, you can print out the technical specs for your chips. We don't have them for cells.
And believe me, there are a lot more different types of cells than there are chips. Think of the untold number of different bacteria, all mutating and evolving while you look at them. Move on to all the so-called simple organisms, your roundworms and fruit flies, which have occupied generations of scientists and still not given up their biggest and most important mysteries. Keep on until you hit the lower mammals, the rats and mice that we run our efficacy and tox models in. Notice how many different kinds there are, and reflect on how much we really know about how they differ from each other and from us. Now you're ready for human patients, in all their huge, insane variety. Genetically we're a mighty hodgepodge, and when you add environment to that it's a wonder that any drug works at all.
Andy Grove has had prostate cancer, and now suffers from Parkinson's, so it's no wonder that he's taken aback at how poorly we understand each of those diseases - not to mention all the rest of them. But his experience in the technology world has warped his worldview. We are not suffering from a lack of urgency over here - talk to anyone who's working for a small company shoveling its cash into the furnace quarter by quarter, or for a large one watching its most lucrative patents inexorably melt away. And we don't suffer from a lack of hard-charging modern management techniques, that's for sure.
What we suffer from is working on some of the hardest scientific problems in the history of the species. Mr. Grove, the rest of your recommendations don't betray much familiarity with the industry, either, so there may be only one way to make you really understand this. If you really, really believe in your ideas, please: start your own company. You've got the seed money; you can raise plenty more just by waving your hand. Start your own small pharma, your own biotech. Hire a bunch of bright no-nonsense researchers and show us all how it's done. Tell them that you're going to have a drug for Parkinson's by the end of the year, if that's what you think is lacking. Prove me and the rest of the industry wrong.
I see that this blog is getting creamed in the Weblog Awards voting, which is similar to what happened last year. Pharyngula and Bad Astronomy are once again fighting it out for supremacy, this year joined by the fans of Climate Audit.
That last one is not a blog I've read yet, since I regard most arguing about global warming as being as much religious as scientific. In my college years I largely lost my taste for arguing with people whose views were not susceptible to change, and too many people on both sides of that one fall into that category as far as I can see.
But the fierce arguing does lead to a lot of blog traffic, that's for sure - the same goes for a lot of the discussion on Pharyngula, as far as I can see. Disputes about sulfonamides and logP don't stir up the same passions, but if you're inclined, throw in a vote for this site to keep things from looking too disgraceful.
Sometimes I think that my chemical intuition is all haywire. Medicinal chemists, after they've seen several projects succeed and fail, accumulate a set of prejudices and opinions about what sorts of molecules are more likely to lead to good things (and what sorts are more likely to waste your time).
Many of these are uncontroversial: no one, for example, is going to tell you to load up your molecule with plenty of guanidines or acid chlorides. But there's a big middle ground where the arguing starts. Sulfonamides - like 'em or hate 'em? How about ureas? Tetrazoles as carboxylic acid isosteres? All of these groups are found in marketed drugs, but you can find experienced medicinal chemists whose noses wrinkle at them, because they feel that too many such compounds fail to make them worthwhile. Me, I don't like naphthalenes, and I never put one on a drug candidate molecule. The next multibillion-dollar drug will probably have one, just wait.
But the reason I think my intuition is off is the molecule shown to the right (and thanks to KinasePro for bringing it to my attention). Where do I start? That screwy thiopyran ring? With its screwy thioketal? The multiple methyl esters, when I wouldn't even want to have one? Man, is that one ugly structure. But, as KP points out, the company that developed this shimmering vision is trying to sell it. That's right, they actually have the nerve to ask for money for this beast. So what's wrong with them? Or is there something wrong with me, because I'd never have that kind of gall, not even if I practiced for years. . .
I was interested to see a recent paper in Organic Letters on a class of compounds I'd never seen before: 1,2-dihydro-1,2-azaborines. There's the structure, in case that doesn't immediately call something to mind.
These things, which are isoelectronic with benzene, were made by the Liu group at Oregon. Their method (ring-closing metathesis) for making them seems superior to the rather sparse techniques that have been available up until now, and they've prepared a number of useful and interesting intermediates. They're rather stable - even the B-H compound with an N-ethyl group, the simplest in the paper, can be run down a silica gel column. An X-ray structure shows that the ring is indeed flat, and it seems to be aromatic and delocalized.
So. . .what I'd like to know is, who's going to be the first person wild-eyed enough to put this in a drug candidate structure? Boron has a bad reputation ("boron for morons", as they say), but hey, Millennium is out there making money with Velcade, a boronic acid. I have absolutely no idea what the fate of this heterocycle is in vivo, what its toxicity might be or what it gets metabolized to (if anything). And neither do you, nor does anyone. Let's find out!