About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases.
To contact Derek email him directly: email@example.com
September 30, 2007
One of my readers, a PhD chemist, is thinking about a career change and is looking for some advice. Having been through a mass layoff earlier this year, I can sympathize. I did everything I could to avoid a career change of my own, and I'm very glad that I was able to. I like what I'm doing, and I hope I can keep doing it for a long time to come. But there are times that a change can’t be avoided, and there are times that it’s downright desirable. That said, the question is what works out well as an alternative career for a chemist? From watching colleagues of mine over the years, I can offer some of the traditional choices.
I’ve seen people move over into clinical development, for example. This is often best done inside your existing company, because changing companies and changing job responsibilities simultaneously isn’t easy. I haven’t felt much of a pull to the clinical side myself, but the attractions include getting to work further down the drug pipeline – that is, on compounds that have a much better chance of doing someone some good. And there’s still plenty of that what-happens-next research feeling, since development is just as much of a wild unknown as preclinical work is. Keep in mind, this is a job for someone with good organizational skills, because you’re going to have to pull a lot of stuff together and get it to work on time.
Another option is patent law. This one is going to require some recredentialing if you’re going to go all the way, of course, but I feel safe in saying that there is constant employment for good patent attorneys who know the technical end of their field. If you can be good at both the chemistry and legal ends of the job, you’ll do well. There’s a reason that not many people span that gap, though – the sort of temperaments that fit the respective fields are sometimes at odds. Chemists who have struggled their way through four-page generic claim structures often wonder how any sentient being can work full-time on such things, but there’s many a lawyer who feels the same way about basic research.
Scientific writing is another possibility. Sad to say, not all that many chemists can write well, so if you’re in the minority that can, your abilities could be worth leveraging. I should talk, since I’ve been rattling away on this blog for five years, and do some paid writing as a sideline. But I’ve never seriously thought about it as a full-time career. For one thing, I like doing the actual science too much. Another concern is that freelancing, your best chance at writing on the topics you feel like writing about, can take a while to get going, and can also be an uncertain existence at any time. There are a lot of science writers inside companies, though, who earn regular salaries. But that has its own compromises.
So there are a few common career changes that make use of chemistry experience. Any readers able to add more?
+ TrackBacks (0) | Category: How To Get a Pharma Job
September 27, 2007
Yet another study has shown no link between the former vaccine additive thimerosal and neurological problems in children. This one evaluated over a thousand seven-to-ten year olds for a long list of outcomes, and came up negative. No strong correlations were found, and the weak ones seemed to spread out evenly among positive and negative consequences.
This is just the kind of data that researchers are used to seeing. Most experiments don't work, and most attempts to find correlations come up empty. The leftovers are a pile of weak, unconvincing traces, all pointing in different directions while not reaching statistical significance. For a study like this one, though, this is a good answer. The question is "Does thimerosal exposure show any connection to any of these forty-two neurological symptoms?", and the answer is "No. Not as far as we can see, and we looked very hard indeed."
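The arithmetic behind that pile of weak traces is worth making explicit. A quick sketch (assuming independent tests at the conventional p < 0.05 cutoff, which is my assumption for illustration, not a detail taken from the study) shows that a few scattered, inconsistent near-hits across forty-two outcomes are exactly what chance alone would hand you:

```python
# Forty-two outcome measures, each tested at the usual 5% significance level.
# These thresholds are illustrative assumptions, not the study's actual methods.
alpha = 0.05
n_tests = 42

# Expected number of spurious "significant" results under the null hypothesis
expected_false_positives = n_tests * alpha

# Probability of at least one spurious hit somewhere among the 42 tests
p_at_least_one = 1 - (1 - alpha) ** n_tests

print(f"Expected chance hits: {expected_false_positives:.1f}")   # about 2
print(f"P(at least one chance hit): {p_at_least_one:.2f}")       # about 0.88
```

So a study this broad that turns up only weak, mutually contradictory signals is behaving exactly like one measuring nothing at all.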
And this isn't the first study to find the same sorts of results. The fact that reports of autism do not appear to decrease after thimerosal is removed from circulation should be enough on the face of it, but therein lies the problem. To the committed believers, those data are flawed. And these latest data are flawed. All the data that do not confirm that thimerosal is a cause of autism are flawed. Now, if this latest study had shown the ghost of statistical significance, well, that would no doubt be different. But it didn't, and that means that there's something wrong with it.
The director of SafeMinds, a group of true thimerosal believers if ever there was one, was actually on the consulting board of this latest study. But she withdrew her name from the final document. The CDC is conducting a large thimerosal-and-autism study whose results should come out next year. Here's a prediction for you: if that one fails to show a connection, and I have every expectation that it'll fail to show one, SafeMinds will not accept the results. Anyone care to bet against that?
As a scientist, I've had to take a lot of good, compelling ideas of mine and toss them into the trash when the data failed to support them. Not everything works, and not everything that looks as if it makes sense really does. It's getting to the point with the autism/thimerosal hypothesis (it has, in fact, gotten there quite some time ago) that the data have failed to support it. If you disagree, and I know from my e-mail that some readers will, then ask yourself: what data would suffice to make you abandon your belief? If you can't think of any, you have moved beyond medicine and beyond science, and I'll not follow you.
+ TrackBacks (1) | Category: Autism | The Central Nervous System | Toxicology
September 26, 2007
I had the opportunity the other day to take a look at the statistics for journal use from the library where I work. It’s the time of the year when they figure out which journals they need to subscribe to, as opposed to just paying per-document fees for individual papers.
That means that several factors go into the decision. The first is usage of the journal. If a lot of papers are downloaded from a given title, odds are that it’ll be cheaper to subscribe. Unless, of course, the subscription rate is completely exorbitant – but that’s certainly not unheard of in the academic publishing world, is it? So in those cases, you’d be better off paying per paper – unless the journal makes that so expensive that a subscription starts to look like a bargain. It’s a balancing act.
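That balancing act reduces to a simple break-even comparison. Here is a minimal sketch; the function and all the dollar figures are hypothetical, purely to illustrate the decision:

```python
def cheaper_option(annual_downloads, per_paper_fee, subscription_cost):
    """Return whichever acquisition route costs the library less per year."""
    pay_per_use_total = annual_downloads * per_paper_fee
    return "subscribe" if subscription_cost < pay_per_use_total else "pay per paper"

# Hypothetical figures: a heavily used title vs. a rarely requested one
print(cheaper_option(annual_downloads=300, per_paper_fee=35, subscription_cost=8000))  # subscribe
print(cheaper_option(annual_downloads=20, per_paper_fee=35, subscription_cost=8000))   # pay per paper
```

The exorbitant-subscription case is just the first branch failing even at heavy usage, and a journal that prices individual papers punitively flips the comparison back the other way.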
Several trends were apparent. The big-name big-impact journals are impossible to ignore, and if you’re a serious research site, they’re impossible not to subscribe to. You can’t have pretenses to keeping up with the latest results if you don’t have Nature, Science, Cell, and the like coming in. And you can’t ignore titles like the Journal of Biological Chemistry, either – sure, they publish eight zillion papers per year, but they get an awful lot of things that didn’t make it into those top-of-the-heap titles, and a lot of good stuff appears there.
In my particular field, the American Chemical Society journals come out pretty well. The subscriptions aren’t cheap, but they aren’t in the white-knuckle range of some of the more commercial publishers. And they get a lot of use – well, the main titles do, anyway. As for the other chemistry journals, Angewandte Chemie isn’t too cheap itself, but it’s also in the “unignorable” category. For a drug research shop, you can say the same thing about Bioorganic and Medicinal Chemistry Letters. There’s a lot of junk in there, but there’s also a lot of intelligence about what your competitors are up to – or were up to a while ago, anyway.
Who comes out looking bad? Well, I don’t know about other research sites, but our figures didn’t look good for either the “Expert Opinion” publications or the Bentham journals (“Current Whatever Whatever”). The latter had an especially large disconnect between the number of paper requests and the corresponding cost of a full subscription, which fits with my own experience. And yours?
+ TrackBacks (0) | Category: The Scientific Literature
September 25, 2007
An alert reader sends along this story from The Economist, on the price of talent in China versus the West. Talking about the steep rise in the stock of WuXi PharmaTech on the Chinese stock market, which is insane even by the impressive standards of the Chinese stock market, they point out that:
“. . .as in so many other industries in China, labour is cheap. Starting salaries for a PhD are $23,000 a year, compared with $200,000 a year in America, according to UBS, an investment bank.”
Well, that explains it! If that’s a real salary figure, I’m at a loss to explain where it came from, let me tell you. I’ve been doing this for 18 years now, and all I can say is that I must be dragging the average down, if that’s supposed to be a starting salary. Something is seriously awry.
Real numbers are to be found, among other places, at the American Chemical Society. These are self-reported, of course, and surely have biases in them – but not all those biases point in the same direction, and if anything, they might lean a bit toward the high side. (People feel better answering surveys about their salary when it’s a number that they’re happy with). According to the most recent ACS numbers, entry-level PhD chemist salaries in industry were between $70,000 and $75,000 in 2003 and 2004. Unless something bizarre has happened since then, I think we can take that as a reasonable starting point.
So basically, the UBS figures are deranged, and if anyone there would like to tell me where they got them, I’d be obliged. But those ACS numbers still show a large cost difference between hiring a PhD in China and hiring one here, of course. And those numbers leave out a number of costs on the employer’s side, which just might make up a lot of the difference toward the UBS figure. I’m talking about benefits, retirement plan contributions, mandatory FICA and insurance payments, etc. I don’t know what the figures are for these costs in China, but I feel safe in assuming them to be much, much lower on both a currency-adjusted and percentage basis. (I realize that the UBS figure is billed as a salary, which isn’t supposed to include these costs – if this really is the explanation, then someone at the Economist was asleep at the keyboard).
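To see how employer-side costs could close part of the gap, here’s a rough sketch of a fully loaded cost calculation. The loading rates are my own illustrative assumptions (roughly typical US benefits plus the employer’s 7.65% FICA share), not figures from UBS, the ACS, or anyone else:

```python
def fully_loaded_cost(base_salary, benefits_rate, employer_tax_rate):
    """Employer's total annual cost for one employee (rates are assumptions)."""
    return base_salary * (1 + benefits_rate + employer_tax_rate)

# Midpoint of the ACS entry-level PhD range, with an assumed 25% benefits load
us_total = fully_loaded_cost(72_500, benefits_rate=0.25, employer_tax_rate=0.0765)
print(f"${us_total:,.0f}")  # about $96,000 of total employer cost on a $72,500 base
```

Even loaded this way, the US figure lands nowhere near $200,000, which is why the UBS number still looks deranged.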
The thing is, the costs in China are increasing. The increase no doubt looks gaudiest on a percentage basis, since it’s starting from a lower number, but the price of a PhD employee there has been heading nowhere but up over the last few years, if what I’ve been hearing is any guide. Supply and demand cannot be escaped merely by traveling to Shanghai. If the global research environment stays healthy, the trend will continue, which will lead to shifts into the less-globalized inland parts of China. (I already know of some good stuff from Chengdu, for example). And after that, it’ll lead to other countries entirely. Which is the whole idea.
+ TrackBacks (0) | Category: Business and Markets | How To Get a Pharma Job
September 24, 2007
It’s been a while since I talked about sirtuins, but the field has not been quiet. The latest development is a paper in Cell that set off a strong move in the stock of Sirtris, the company closely tied to the labs involved.
SIRT1 had already been the focus of a huge amount of attention in the aging/cancer field, but this paper seems to validate two other members of the family, SIRT3 and SIRT4. It’s mostly the story of NAD+, a very fundamental molecule indeed in cellular metabolism. There’s been some evidence (and a lot of speculation) that NAD+ levels are regulated quite differently in the mitochondria as opposed to the rest of the cell, but getting hard data on this pathway hasn’t been easy.
What is known is that apoptosis (programmed cell death) can depend on NAD+ levels. An enzyme called PARP-1 depletes NAD+ levels when it’s activated, and sets off a chain of events leading to apoptosis. Recently it was shown that there’s a PARP-1 fraction inside mitochondria, and given their central role in energy production, this gave room to wonder if an apoptosis signal could be set off from in there as well. On the flip side, NAD+ is synthesized (in mammals, anyway) through a pathway involving the enzyme Nampt. It’s also present in mitochondria, along with another NAD-pathway enzyme called Nmnat, so all the machinery is presumably there to up- and downregulate mitochondrial NAD+.
And so it does. The Cell paper looked at NAD+ levels inside mitochondria for the first time, and found that they change greatly in response to nutrient levels. Fasted animals (and cells) greatly increased their mitochondrial NAD+, which makes sense.
At first the authors were puzzled when they found that although Nampt protected cells from genotoxic stress, it didn’t seem to affect how low the NAD+ levels in the cells went. Overexpression, underexpression – the levels went down to the same low point either way. It was only when they looked inside the mitochondria that they found where the NAD+ was being maintained.
So mitochondria can hold normal NAD+ levels even after they’ve fallen in other cell compartments. As far as the authors can tell, this is because of local synthesis, although it’s possible that the mitochondria also import all that they can get their hands on under such conditions. But the fact that levels of mitochondrial Nampt also rise along with the NAD+ argues for biosynthesis.
And the protective effects of all this NAD+ work through SIRT3 and SIRT4. Their activity is limited by the amount of NAD+ around, so it makes sense that they get more active under stress, when NAD+ levels are up. siRNA knockdowns of all seven sirtuins showed that only the 3 and 4 subtypes – which are localized in the mitochondria – are the players.
All this makes Nampt look like the yeast gene called PNC-1, which is on the yeast and roundworm pathway to make NAD+. PNC-1 has been shown to be involved in extending lifespan in such creatures, so if the human homolog has been found, the immediate question is whether it has the same effects. Its changes in fasting rats suggest a link with the caloric restriction route to lifespan extension. Overall, you have to think that if we’re not onto the relevant pathways, we’re very close indeed.
Thus the spike in Sirtris stock. It came back down as various analysts made cautious noises today, but until the company gets some Phase II data, publications like this one will be what moves things around. If you’re interested in a wild and speculative ride, they’re worth a look. Don’t expect a dull time, though – there’s an awful lot about this stuff that we don’t know.
+ TrackBacks (0) | Category: Aging and Lifespan
Enough time has passed so that I can talk about one of the more puzzlingly boneheaded decisions I’ve seen in the drug industry. Some years ago, the company I worked for decided to try out a new salary-based incentive plan. Nothing particularly unusual about that – the existing system was pretty generic, with the usual performance ratings, salary bands, and so on. (Finding out what salaries were tied to what levels, and which raises were tied to which ratings, was difficult, but there’s nothing unusual about that, either).
But this new plan, rolled out first to the people in the Clinical division, was definitely something new. Here’s how it worked: your pay got cut 25%.
Well, that was an attention-getter, eh? But that’s how it worked. Everyone’s base salary was reduced, but (and here’s the good part), you could earn your way back to what you used to make by. . .meeting the goals that you’d outlined in your beginning-of-the-year Research Goals Statement! Hey, as the HR people pointed out excitedly in the rollout presentations, with this plan you could even earn more than your base salary if you exceeded the goals – what could be better?
You can probably guess the consternation with which this was greeted. It was immediately noticed, as in within the first few seconds, that such things as the profit-sharing plan payout were based on a per cent of base salary. That was going to get cut no matter what. But there was another (non-mathematical) problem with this brainstorm: the research goals statements that it all depended on were, as everyone knew, worthless.
How could they not be? How are you supposed to write down what you’re going to discover and what you’re going to do about it? These folks didn’t want broad general statements this time; they wanted specific, quantifiable goals that could be used to decide just how much you’d be paid. I remember arguing with someone from HR during an “informational meeting” about all this. I told her that if I knew what I was going to be doing in six months, it wouldn’t be research, would it? And I told her that no matter what the org chart said, my real bosses were a bunch of mice in cages and cells in a dish, and they didn’t know what the corporate goals were and they couldn’t be “Coached For Success”, the way that poster on the wall said.
This did no good. I got the impression that she thought I was either joking, misinformed, lying to her about all this, or just rather slow in the head. At any rate, it turned out not to be a problem for me, or for anyone in research. As I mentioned above, this plan was first applied, on a test basis, to the people over in the Clinical department, and within two or three weeks several of the best people over there had found new jobs and hit the road. As, of course, anyone should have been able to anticipate.
There’s an awful lot of job mobility in the drug business. Everywhere you go you work alongside people who’ve worked somewhere else, and every year there’s some migration in and out. This salary plan might have worked out had our company been located on a remote tropical island, but even then people would have been chopping down palm trees and building rafts. Located where we were, with plenty of other companies around, it had no chance.
A hard-copy memo came out early one morning to the Clinical people, and it spread rapidly throughout the site. It was poorly formatted and grammatically incoherent, and once decoded it stated that the proposed salary plan would not be implemented and that no further ideas of that sort were coming. Some people suspected a poorly executed hoax, but to me the memo had all the signs of being authentic (which it was). It read exactly like something a VP-level person might compose, without the aid of his secretary, while his immediate superior stood behind him with a raised golf club. And so things returned to what passed for normal.
+ TrackBacks (0) | Category: Business and Markets | Drug Industry History
September 20, 2007
When I was talking about Steve Ley of Cambridge the other week, one of his research areas that I mentioned liking was his work on flow chemistry. This is the benchtop application of a type of reaction that’s been done more often on large industrial scales.
Most of the work that medicinal chemists like me do is batch by batch. We weigh and syringe things into flasks, cool, heat, and stir them, and then pour the resulting stuff out of the flask and clean it up. There are all sorts of techniques that have come along to speed these steps up or to allow you to do more of them simultaneously, but all of them are still in “batch mode”.
Flow chemistry is a bit different. The starting materials flow through an apparatus that (one way or another) causes them to react, and then out the other side. The business section of the machine can be a part that heats up the solutions as they go by, or puts them under high pressure, or forces them over a solid support that contains some catalyst. That last category is especially useful, since the number of metal-catalyzed reactions is increasing with no end in sight.
If the reaction isn’t done, you can send the mixture back through for another pass. If the reaction’s complete, you can (ideally) take the resulting solution on to the next step without necessarily having to clean it up – after all, the catalyst is staying behind on the solid support. If you treat it right, the catalyst should be reusable for quite a while as well.
One of the more widely adopted flow reactors so far has been the “H-Cube”. Its makers chose a reaction (hydrogenation) which is very useful, but one that a lot of chemists don’t like to run. The opportunity to easily try out catalysts and conditions that aren’t normally run has been another selling point. Now the company has come out with their X-Cube, which is a more general flow reactor.
My question is: has anyone out there used this beast or its competition? I’ve had a little (generally positive) experience with the H-Cube, but none with any other flow reactor. There are a lot of homebrew setups out there, but the commercial space has been filling up recently, too.
Of course, as everyone knows, neat-looking equipment can end up gathering dust. For these flow gizmos to be useful, they’ll have to do things that aren’t easy to do in a flask, and do the flask reactions in a more convenient manner. The flow reactor people aren’t competing with each other as much as they’re competing with a drawer full of round-bottom flasks. I’d be interested to hear from anyone who’s put that comparison to a real-world test. . .
+ TrackBacks (0) | Category: Life in the Drug Labs
September 19, 2007
For a good long time now, a massive piece of patent legislation has been working its way through Congress. It's cleared the House and is on its way to the Senate, so the number of twists and turns it can take is still substantial. And there's no telling if the President will sign it, since the administration has expressed its worries about the bill as it stands. In its current form, this law would change things around quite a bit.
For one thing, it would finally make the US a first-to-file country, like basically everywhere else in the world. The first-to-invent regime is one of the reasons that chemists throughout the country are harangued to get their lab notebooks witnessed promptly, because if it came down to a notebook-and-calendar fight, the company with the earlier witness date would likely win. First-to-file eliminates that particular worry (while not obviating the need for witness signatures), but could replace it with others. You hear a lot about how this will benefit larger players at the expense of smaller ones, since it makes the trouble and expense of filing a patent the determining factor. I think that this is exaggerated a bit, though. The trouble and expense of proving that you were the first to invent is pretty significant, too. (Admittedly, some of the parts of this bill look to make filing a patent even more expensive than it is now).
But there are many other provisions in this bill, ones which have managed to split the high-tech part of the US economy into camps. Software companies are mostly lining up for the new legislation, while biotech and pharma are coming down against it. The arguing ground is a set of new rules about how easily patents can be obtained, and how easily they can be challenged after they’re granted.
In short, the computer sector feels victimized by people who get some useful step or technology patented and camp out on it, shaking everyone down for fees. (The real problem, as far as I can see, is that patent quality is just awful in this area, and all kinds of junk gets granted). At any rate, software and hardware companies would like to see fewer such things get patented, and are looking forward to some new tools to get them invalidated in a more timely fashion.
But over here in the drug industry, we’re jumpy about that sort of thing. Many companies in this area feel that their patents are being challenged enough already, thanks very much, and would rather not give the generic companies more new tools to tie things up in court. I think that the overall quality of patents is much higher in the pharma business, which helps to explain the difference of opinion. We generally have fewer, tougher patents protecting our important stuff over here, as compared to more (and weaker) ones in the software world.
It’s not so simple a breakdown, though. The flow can reverse in either industry. We do have some cases of smaller outfits getting some IP that they try to beat everyone up with – Ariad’s NF-kB patent, which I haven’t written about in a while, is a good example. And it’s not like the computer giants don’t ever get their patents challenged, either. Both industries are playing the percentages – this change won’t suddenly remake the whole landscape for either of them.
Overall, even though I’m a drug-industry guy, I think I come down on the side of making patent challenges a bit easier. After all, most of our patents in the industry stand up anyway – we shouldn’t have as much to fear. And I think that it’s easier to do harm by rent-seeking on a patent of dubious validity than it is to do harm by uselessly challenging an existing one. When you get down to it, the argument is about who you fear more: the patent office, for allowing junk to issue, or the courts, for making incorrect rulings when they’re challenged.
It’s a tough call, but I think the patent office is the bigger worry. Frankly, I think if so many bad patents weren’t being granted, we wouldn’t be having this discussion at all. But before I sound like I’m beating up on the PTO, I should note that their funding and staffing have not come close to keeping up with their duties over the years. There are a lot of decent examiners there working under ridiculous conditions, so it’s not surprising that we find ourselves in the shape we’re in.
For more on this bill, I refer you to excellent posts (for example, here and here) at PatentBaristas and at Patently Obvious – those guys are lawyers, while I’m merely a client.
+ TrackBacks (0) | Category: Patents and IP
Yesterday's post set off a discussion of the 1990s combichem boom in the comments. I joined the industry before that took off, and watched it with interest.
For those outside the field, combinatorial chemistry was (is, I guess) the semi-automated generation of large numbers of diverse organic compounds. The basic idea was that you'd start with, say, building block A, which would react with a big library of reactant partners B1. . .Bzillion. The resulting compounds would then be reacted with another big set of coupling partners, C1. . .Cmonstrous, and this might be designed to take place out at the end of the B part, or on another part of the A region, etc. There were many split-and-pool methods worked out to generate the maximum number of different compounds. Various strategies generated either individual compounds or mixtures of different ones, and all sorts of techniques were developed to make it all happen in a less labor-intensive fashion. These included bonding the starting materials (or the reagents) onto solid resin bead supports until the end of the synthesis, the better to move things around, along with ingenious schemes for tagging and identifying what was ending up in which vials.
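The multiplicative arithmetic is what made the whole approach so seductive: a modest shelf of building blocks enumerates into an enormous library. A toy enumeration (the block names are placeholders, not real reagents):

```python
from itertools import product

# Placeholder building blocks: one core A, 100 B partners, 100 C partners
a_blocks = ["A1"]
b_blocks = [f"B{i}" for i in range(1, 101)]
c_blocks = [f"C{i}" for i in range(1, 101)]

# Every (A, B, C) combination counts as a distinct library member
library = [f"{a}-{b}-{c}" for a, b, c in product(a_blocks, b_blocks, c_blocks)]
print(len(library))  # 1 x 100 x 100 = 10,000 compounds from 201 building blocks
```

Add a fourth diversity position with another hundred partners and the library jumps to a million, which is exactly the sort of number that fueled the boom.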
The idea was that we'd generate lots (lots) more compounds for random screening than we'd ever have before. And for a while, it looked like the companies that did this the first with the most were going to have the drop on everyone else. It stood to reason - many of our high-throughput screens didn't generate anything useful to start working on, so if technology now allowed you to brute-force your way into getting things to hit, well, you'd be crazy not to.
A frenzy ensued. People that no one had heard of were suddenly in demand as consultants and invited speakers at conferences. Whole companies were started to make and sell combinatorial libraries of compounds - a couple of them are even still in business, although the road has been pretty bumpy. Larger companies started in-house efforts, some of them rather lavish. Some people talked about traditional medicinal chemistry receding to a specialty as the mighty compound factories came on line (more than one person tried to sell me on this idea personally).
But as time went on, and the piles of combichem stuff made it into the screening collections, people began to note with unease that, well, not so many lead compounds were coming out. In fact, it eventually became clear that the hit rate for most combichem stuff was lower than for the general old-fashioned screening collections. That went double for the combi libraries from the first part of the boom, many of which are now regarded as basically worthless.
What happened? Well, the techniques that generated larger mixtures of compounds were trouble from the start, because it's hard enough to screen individual compounds well. But even single-compound collections had their problems. A larger difficulty was that the chemistry that could be used under the more highly automated combichem protocols was limited. Many useful reactions were bypassed because there was no good way to do them on solid supports with minimal purification afterwards. There sure were an awful lot of amides, ureas, and sulfonamides produced, I can tell you. Not that there's anything wrong with these groups, but when you start to have multiple instances of them in the same molecule, you can veer off into undesirable territory.
Overall, as has been realized, the chemical diversity offered by combichem's early years was largely spurious. People went out and did the stuff that was easiest to do, with what was on hand, and that translated to a much spottier coverage of chemical space than was first realized. Combichem itself survives, but compared to the mid-1990s it's a backwater.
But there's still a place for it. People have been steadily introducing a greater variety of chemistry into it, and everyone's now more aware of how hard it is to make truly diverse compound collections. Once the hot air hissed out of it, combichem was revealed as what it really had been all along: a tool. One of many, to be used as appropriate.
Update: Here's a take on the field from the inside, from Org Prep Daily.
+ TrackBacks (0) | Category: Drug Industry History
September 18, 2007
I also mentioned recently that I’d come across a good example of an academic compound with interesting activity but no chance of being a drug. Try this one out, from Organic Letters. Yes, there aren’t many other compounds that do what this one does (inhibit the production of TNF-alpha). And no, it’s not going to be a drug – well, at least the odds are very, very long against it.
Why so negative? Several reasons. For one thing, this molecule is extremely greasy. This is not a killer in and of itself, but it’s inviting trouble, for the reasons noted here. The second problem is that this thing looks like it’s going to have some trouble dissolving. That’s trouble from both the thermodynamic (eventual amount in solution) and kinetic (speed of dissolution) senses. That greasiness will be the problem with the former, since a lot of this molecule’s surface area gives water molecules no incentive to join in on anything. And all those aryl rings (along with the symmetric structure) are asking for trouble with the latter. Those features make the structure look like it’ll form a very good, very happy crystal, with its aromatic rings stacked onto each other like ornamental bricks. “Brick” is the very word that comes to mind, actually.
But solubility is only the beginning. The real problem is that catechol functionality in the center of the molecule, which is just waiting to turn into a quinone. In medicinal chemistry, no one wants quinones; no one likes them. They’re just too reactive. It would not surprise me for a minute to learn that this group, though, is the reason for the compound’s activity. It’s probably reacting with some functional group on the surface of the target protein and gumming up the works that way. It’ll do that to others, too, if it gets the chance. There are all sorts of weird little quinones in the literature that hit proteins that nothing else will touch, but none of them are going anywhere.
No, it’s safe to say that any experienced drug-company chemist would draw a red X through this one on sight. Plenty of reasonable-looking compounds turn up with unanticipated problems, so we don’t need to go looking for trouble. That’s not to say that it can’t be a research tool (although I’d be careful interpreting the data from complex systems – there’s no telling how many other things that quinone is going to react with).
But all this brings up another thing that we were talking about around here – how much do drug companies owe academia for working out fundamental biochemistry and molecular biology? What if someone uses this very compound, for example, as a research tool and discovers something about its target that could be used to develop an actual drug? What do we call that?
Well, we call that “science”, as far as I can see. Everything is built on top of something else. In a case like this, the discoverers of this current compound, even if they’ve patented it, do not have a claim on what discoveries might come from it later on. An even stronger case was decided in that direction – the University of Rochester’s discovery of the COX-2 enzyme, the patent for which led to their attempt to claim revenue from Celebrex. The judge ruled, absolutely correctly in my opinion, that the discovery of a drug target is not the discovery of a drug, and that the effort and inventiveness needed for that second step is more than enough for it to stand on its own.
There’s a “research exemption” for patents, giving legal room to use the disclosed inventions and compounds to make further inventions. I think that’s an extremely important concept. It lets academic labs study patented industrial compounds for their own purposes, and it even lets companies do that to each other. How would we compare our internal compounds to the competing ones if we couldn’t use them? (There’s more than one research exemption, though, and the traditional common-law one took a big hit a few years ago in Madey v. Duke, which worries me).
I strongly oppose broad patent claims for uses and pathways, because I think that these cut into legitimate research. Patents should cover things that are novel and useful. They should completely disclose the substance of their invention. And in return for the period of exclusive rights, anyone else who wants to should be able to get to work on what will replace them. A patent is not a license to kick back; it’s a reminder to keep moving.
Category: Academia (vs. Industry) | Drug Development | Patents and IP
September 17, 2007
As I was mentioning the other day, the latest issue of Nature Medicine has the details on a story that doesn’t, on the face of it, do the industry any credit. About twenty years ago, there were reports out of China that a solubilized form of arsenic was very effective in treating acute promyelocytic leukemia, a rare (and fatal) form of the disease. Arsenic had been used as a folk remedy for such conditions, as it has been for many others (often with much less justification!), but its most common compounds (like arsenic trioxide) are tremendously insoluble. The Chinese authors had found a way to make that one go into solution where it could be dosed, but didn’t disclose it in their publication.
That left the door open to someone else, namely a small company called PolaRx. They found a way to do the same thing with the oxide (as far as anyone can tell), and got a patent on its use in oncology. Over years, mergers, and reshuffles, the patent finally ended up in the hands of Cephalon, who now market the soluble arsenic trioxide. However, a course of treatment costs about $50,000, which means that for many patients around the world, the drug is totally out of reach.
Even across the entire world, there aren’t that many patients for this therapy, so the price would tend to be high no matter what. It’s worth remembering that production costs are not a major factor in the pricing of most drugs. We’re not indifferent in this business to how much it costs us to make something, far from it, but we try to keep that a small part of the price. So what does set the price? What sets the price is what sets most prices in this world: what the market will bear. A drug that only treats a small number of patients every year is going to cost a lot of money, no matter what it’s made out of. A company will not market a compound unless they can use its profits to help defray the costs of all the things that don’t make it to market at all.
Cephalon is charging what their market will bear, which is their right, but their market is the health insurance organizations of the industrialized world. That’s another thing to remember – drug companies aren’t selling direct to patients most of the time. They’re selling to insurance companies, and first-world health insurance will put up with a lot of things that no one else can or will. There’s a lot of room to talk (and to complain) about this (I think it distorts pricing signals something fierce), but all the complaints have to start with the realization that this is how things are now set up. Cephalon, for its part, says that it’s open to compassionate use of its drug – that is, providing it to people in need who absolutely cannot afford it. With any luck articles like the Nature Medicine one will help to get the word out about that, and we’ll see how well they follow through.
It’s tempting to blame the patent system for this whole situation – after all, the only reason the company can charge these prices is that they’re the only ones who can sell it, right? But perversely, this might actually show the need for more use of patents rather than less. As another piece in Nature has helpfully reminded people, patents not only grant a period of exclusivity. In return for that, you have to tell people how to replicate your invention.
The alternative, in countries that don’t follow this system, is usually secrecy, and I can’t help but think that this is why the original Chinese work didn’t disclose all the details. A strong patent system eliminates a lot of trade-secret grey areas: someone owns a discovery (for a predetermined period of time), no one owns it, or everyone owns it. There’s none of this “someone owns it until someone else finds out about it” stuff.
But my guess is that the Chinese lab, being used to a trade-secret (or government-secret) culture, reflexively held back their important details. If they wanted to make sure that no one could patent anything, they would have (or at least should have) put all the information out into the public domain, where it would have been prior art against anyone attempting to file on it. (But see below - would that have helped get it through clinical trials, or not?) It’s worth noting that if a patent had been filed back in the early 1990s, the drug would not only have come to the world’s markets faster, the patent would also be much closer to expiration by now, opening up its production. The US researcher who formed PolaRx and filed the patent, Raymond Warrell (now chairman of Genta), stands up for it in the Nature Medicine article, and like it or not, he has a point, too, saying that the patent stimulated interest in the compound: "Without the patent, it would have remained a curious Chinese drug, not available to anyone else." I should note that there may well be room to argue about the validity of the patent, from prior-art concerns, but no one (as far as I know) has seen fit to challenge it.
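For what it's worth, that "closer to expiration" point is simple arithmetic. Here's a minimal sketch, assuming the simplified post-1995 US rule of a 20-year term from the filing date (real terms involve extensions and the older 17-years-from-grant rule, so the years here are illustrative only):

```python
# Sketch of the expiration arithmetic, under the simplified assumption
# that a US patent runs 20 years from its filing date (post-1995 rule;
# ignores term extensions and the older 17-years-from-grant regime).
PATENT_TERM_YEARS = 20

def years_remaining(filing_year, as_of=2007):
    """Years of patent life left as of a given year, floored at zero."""
    return max(0, filing_year + PATENT_TERM_YEARS - as_of)

print(years_remaining(1992))  # an early-1990s filing: 5 years left in 2007
print(years_remaining(1998))  # a late-1990s filing: 11 years left
```

An early-1990s filing would be within a few years of opening up generic production by 2007, instead of having a decade or more still to run.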
But I can say for sure that without intellectual property protection in the US and Europe, no drug company would have touched the compound. Without industrial input, the drug would have either never reached the market at all (arsenic trials were a hard sell at the FDA), or would have likely come on more slowly. (That ticking patent clock does keep an organization moving, I can tell you). And now its success in the market has other companies working on improved versions of the therapy. This is how our world works, and (for better or worse) there's no requirement that it be aesthetically appealing.
Category: Cancer | Drug Development | Odd Elements in Drugs | Patents and IP | Why Everyone Loves Us
September 13, 2007
There are many mistakes you can make in medicinal chemistry. Hah, I got that sentence typed out with a straight face; I wasn’t sure if I could do it or not. Mistakes! We’re up to our clavicles in them. Successful R&D is the triumph of those who manage to bungle things the least, and that doesn’t go just for the drug industry. Talk to engineers, talk to software developers. You’ll get the same perspective, accompanied by much eye-rolling and waving of arms.
And getting used to this, as I’ve noted here and there, is a psychological adjustment that a working scientist has to make. Holding yourself to a no-false-starts, no-blind-alleys standard guarantees your failure, or at least ensures that you’ll be driven out of the field before you have time for any success. Every working chemist knows what it’s like to put a slide of reactions together for a presentation, only to realize that they’ve just summed up months of effort in what could (theoretically, ideally) have been a few days’ work.
In med-chem, I can think of many examples where I’ve worked on a project only to recommend a compound at the end that was embarrassingly close to the starting point. Twice in a row we ended up with a compound that had one methyl group added to it compared to one of the starting compounds – mind you, those methyl groups really pulled their weight. They made a big difference in the final properties of the molecule, but we’d spent a lot of time exploring bigger changes and other regions of the molecule, none of which worked out well.
Philip Larkin, a favorite poet of mine, said that he learned from Thomas Hardy's work not to be afraid of the obvious. Like a lot of good advice, though, that’s hard to take. Researchers with an optimistic bent will wander off to new parts of the lead molecule, looking for the greener grass that they’re sure is out there. And the pessimistic ones won’t do the stuff right in front of them, either, for fear of how it’ll look. Sometimes the simple stuff gets overlooked, for no other reason than that it's simple. Should that count against it?
Category: Drug Development | Life in the Drug Labs
September 12, 2007
The mention of tropical diseases here the other day turns out to be timely, since the latest Nature has several articles on various ways for industry and academia to partner on attacking these. Some adjustments are needed every time you try this sort of thing, naturally. I particularly enjoyed this article. Here’s a sample:
“. . .translational research requires skills and a culture that universities typically lack, says Victoria Hale, chief executive of the non-profit drug company the Institute for OneWorld Health in San Francisco, California, which is developing drugs for visceral leishmaniasis, malaria and Chagas' disease. Academic institutions are often naive about what it takes to develop a drug, she says, and much basic research is therefore unusable. That's because few universities are willing to support the medicinal chemistry research needed to verify from the outset that a compound will not be a dead end in terms of drug development.
Academics will currently publish, say, a chemical scaffold, which they bill as a potential new target for parasites. "But had a medicinal chemist looked at it, he might immediately see that it will never work as a drug, because it has an inappropriate solubility or toxicological profile," says Els Torreele, a product manager at the DNDi. "Having a chemical structure that kills your parasite is only one of many aspects of what makes a drug a drug."
Ted Bianco, director of technology transfer at the Wellcome Trust in London, agrees. "It's fine if a researcher is just using a compound as a ligand to probe a biological process," he says, "but don't kid yourself it's a drug unless you ask whether it has druggable properties." What's needed, says Hale, is a 'target product profile', which sets out the appropriate drug chemistry properties. "Getting a drug through regulatory processes is not just about how good your science is and how great your trials are; it is much more complex," says Hale. "And academics don't have the experience — they need to hire people from the drug industry."
This would make particularly interesting reading for the NIH-funding-discovers-all-the-new-drugs crowd. That idea seems pretty indestructible, although you’d think it would at least be dented by talking to the people who actually try to develop drugs (like me, or many readers of this blog), or to the people who are actually partnering with academia (see above).
I first came across this whole debate a few years ago, not having even realized that it was a debate at all. Even now, when I tell co-workers in the industry that there are people who believe that pretty much all drugs come right out of publicly funded research, the usual result is an incredulous stare and a burst of laughter. That’s often followed by a question like “So what is it that I’m doing all day, then?”
Unfortunately, there really are occasional examples of companies scooping things up and making a killing on them – an example will follow in a coming blog post. And on the flip side, I have a recent example coming up of an academic compound which may well do exciting things in a dish, but has as much chance of becoming a drug as I do of becoming an Olympic pole-vault champion. And it’s not that I’m not reasonably aerodynamic – it’s just that there’s more to the pole vault than that, and there’s more to making a drug than working in vitro.
Category: Academia (vs. Industry) | Drug Development
September 11, 2007
For my scientifically employed readers, here’s something my labs don’t have, and I'll bet yours don't either: windows that open. I’ve only been in a couple of chemistry labs that did.
My undergraduate chemistry building (since renovated) had had its windows concreted over in the 1960s. That was bearable most of the time, but the summer I did undergraduate work there, the air conditioning kacked out on us a few times. This was troublesome. You don’t want to be on the fourth floor of a building with no windows in Arkansas in the summertime. Ether in that era was still sold in the round metal cans with the soft alloy caps that you sliced off, and then put a plastic snap-cap over. I remember the poonk-poonk sound of those ether caps blowing off as the temperature rose, which we took as a good substitute for a quitting-time whistle.
My graduate work was windowless as well. It was done in a building where all the lab space was on the inside, so you had to leave the bench and head down the hallway if you wanted to find one of the narrow little window slits at all. It was easy to lose track of time in there, which was probably a design feature (just as in a casino’s gambling floor).
But when I went to Germany to do my post-doc, I had several adjustments to make, among them a lab whose windows not only opened, but had to be opened. Like many German buildings, this one wasn’t air-conditioned, so in the summertime you needed to get a breeze going. It was a real novelty to see the wind ruffling the pages of my lab notebook, that’s for sure. I always wondered about how this affected the air balance of the fume hoods, but since they didn’t work that well to start with, it may not have been a concern.
And since then, I’ve yet to see an industrial lab with operable windows, other than my very first one. And even those were almost never used. For one thing, the building had air conditioning, since New Jersey is definitely more tropical than Central Europe. But another reason was that our lab faced directly out onto a major highway, so the only thing you’d get by opening the windows would be exhaust fumes, traffic noise, and (in the summertime) the occasional curse and honk of a horn. I did see my labmate make use of his window at one point, though, after he’d spilled some ethanethiol on his shirt. He tried hanging it out the window to air out. This was unsuccessful, of course, but it says a lot about ethanethiol that it makes you consider hanging your laundry out over the Garden State Parkway to freshen it up.
Category: Life in the Drug Labs
September 10, 2007
After mentioning orphan drugs yesterday, I note that Kyle at The Chemblog has remarked on Genzyme's pricing for Cerezyme. It is indeed spectacularly expensive, which is a matter over which Genzyme and the various insurance companies involved have had, I'm sure, many a spirited discussion.
As it happens, Genzyme was in the news today (front page of the Boston Globe), with the kind of story that no one at a company likes to see. It's all about a shareholder lawsuit, just recently settled, which maintains that Genzyme bought back shares of a tracking stock (for their biosurgery division) at a deliberately depressed price. The existing shareholders were deeply unhappy about the deal they were offered, but basically had to roll over and take it (until they went to court, anyway).
There are some lurid figures in the story. Genzyme bought the shares back at $1.77. Meanwhile, a price of $75 per share had been mentioned just days before, which sounds like piracy, for sure. But when you read the story closely, you find that this figure was mooted by the head of the biosurgery division, who actually quoted a range of 12.75 to 75. Three things come to mind: (1) those sure are some error bars, (2) any estimate with that kind of spread to it is a worthless one, and (3) this figure is from a person who had an incentive to make his division (and his leadership of it) look as good as possible.
At any rate, the buyback cost $72 million at $1.77, and adding in the settlement makes the actual price per share about $3.34. If there were a remote chance that that $12.75 price could have been enforced in court, the lawyers involved wouldn't have gone for it, I think. (Much less that $75 figure).
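For anyone who wants to check the figures, the implied size of the settlement can be backed out from the two per-share numbers quoted above (a back-of-the-envelope sketch; the rounding in those reported figures means the result is approximate):

```python
# Back-of-the-envelope check on the buyback figures quoted above:
# a $72 million buyback at $1.77/share, with an effective price of
# about $3.34/share once the settlement is folded in.
buyback_total = 72e6
buyback_price = 1.77
effective_price = 3.34

shares = buyback_total / buyback_price                       # shares repurchased
implied_settlement = shares * (effective_price - buyback_price)

print(f"shares repurchased: {shares / 1e6:.1f} million")     # ~40.7 million
print(f"implied settlement: ${implied_settlement / 1e6:.1f} million")  # ~$64 million
```

In other words, the settlement roughly doubled what Genzyme ended up paying for those shares.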
That said, it was in Genzyme's interest to buy back the shares at the lowest possible price. If I'd been a shareholder of the tracking stock, I wouldn't have been happy - after all, 1.77 was below the market price at the time (about $2.50). But the company had the right to do what it did, and their actions could have been anticipated by someone who took the trouble to read the fine print. I also have no doubt that Genzyme tried, as much as possible, to keep that price low so they could get the best deal, but whether they broke the law while doing so I have no idea.
Personally, I wouldn't want to take the other side of any equity deal with Genzyme. Henri Termeer, their CEO, is a wily sort and generally doesn't offer bargains to anyone. This article only confirms that opinion.
Category: Business and Markets
September 9, 2007
When a drug company starts off a new project, a lot of things go into the decision. Most of them are scientific decisions, but a big one that isn't is the projected market size. It's a business, and if you keep developing things that don't earn out their costs (and plenty more), you won't be part of the business for long.
These market numbers aren't the most reliable in the world - Pfizer, for example, appears to have been surprised by how well Viagra did, and Bayer and Lilly were likewise surprised that their follow-ups didn't repeat. For a more recent example, try Pfizer's Exubera. Its potential as a big winner was already much eroded by the time it finally made it to market, but surely it's selling even below their worst projections.
But underserved markets give you something you can depend on. A safe, effective anti-obesity drug would clearly reap billions - not that I'm expecting to see one. An effective HDL-raising therapy would do the same in the cardiovascular market (but hold on tight if you're trying to develop one of those, too). And CNS is full of opportunities, like Alzheimer's. Mind you, those opportunities are there because people keep trying and failing to do much for the diseases, but there's definitely a fortune waiting for the first thing that does.
As you can see, the risk-reward curve is pretty similar to what you see in finance. If you want the big returns, you have to take the big risks. "Big risk" is a relative term around here, though, since even the plainest of vanilla rip-off me-toos can implode on you, taking all its costs with it. But in general, it's the same no-free-lunch graph as everywhere else in the world.
There are some exceptions, but the problem (as always) is that it's usually impossible to see them coming. Lipitor is the first example that comes to mind - Warner-Lambert just about killed it because it was going to be the umpteenth statin, and they didn't think its market share would justify the development costs. (I should have mentioned that one back in the first paragraph, when I was talking about shaky market projections!) It was only after the drug got well into the clinic that its potential began to show itself, just as Exubera was far along before its deficiencies became clear.
On a macro level, one of the big problems is the disconnect between underserved markets and underserved populations. Tropical diseases like malaria are an instant example. An effective antimalarial would be taken by huge numbers of people, but many of them still couldn't begin to afford the cheapest pharmaceuticals in the world, which is a real dilemma. (Of course, there's also the possibility that the sudden introduction of such a drug might help precipitate a Malthusian crisis in countries with traditionally high death rates, but better to deal with that than have the current situation, I'd say).
There are several methods that have been tried to bring things in line. The Orphan Drug Act is an example from inside the US (making diseases with smaller numbers of patients more financially attractive), and there's perennial talk of something similar for tropical diseases through prizes and other incentives. A different world would do things still differently, but we don't, to the best of my ability to see, live in one.
Category: Business and Markets | Drug Development | Drug Industry History | Drug Prices
September 6, 2007
I was talking about reactive compounds the other day, but I should note that some of the reactive ones can still linger around in a peculiar manner. Acid chlorides are a good example, from both carboxylic and sulfonic acids. They’re reactive, all right – just pitch one into a bunch of amine and find out. So you’d think that if you spilled some, their admittedly nasty aromas would be a problem that solves itself, right? They won’t last long outside the bottle; they’ll react with water and such in the air and stop stinking the place up – right?
Wrong. Some of these guys can hang around for abominable lengths of time if you don’t actively clean them up. The problem is, I think, that while they do react with water, it’s only a fast reaction under stirring conditions. In the bulk phase, the liquid acid chlorides tend to be rather thick and oily. My guess is that the outer layer does react with water (at its own pace), but that diffusion is slowly bringing more unchanged acid chloride to the surface. Where it reeks.
The sulfonyl chlorides tend to be solids, which makes the problem that much worse. The crystals don’t do the stainless-steel thing and form a reacted skin around them that seals up the inside. No, for all I can tell, tosyl chloride (the prototype sulfonyl chloride, found in organic labs around the world) will stink indefinitely. I’ve no idea of what its nose-wrinkling, headache-inducing half-life is, just that it’s very long indeed.
At least its hydrolysis product, toluenesulfonic acid, doesn’t smell. It won’t improve whatever it’s standing on, true, but at least you won’t know it’s there from across the room. But those oily liquid carboxylic acid chlorides stink horribly as their free acids, too, so over time, if you’re so inclined, you can note the changeover from the musty, acrid smell of the chloride to the rancid, goaty stench of the parent acid. The midpoint of the process is a treat.
So, you lazy chemists, break down and clean the stuff up. It’s not going to get any better unless you put some energy into the system (in the form of hands, elbows, and paper towels). All of our problems should clean up so well.
Category: Life in the Drug Labs
It’s useful to be reminded every so often of how much you don’t know. There’s a new paper in PNAS that’ll do that for a number of its readers. The authors report a new protein, one of the iron-sulfur binding ones. There are quite a few of these known already, so this wouldn’t be big news by itself. But this one is the first of its kind to be found in the outer mitochondrial membrane, which makes it a bit more interesting.
It also has a very odd structure – well, odd to us humans anyway; for all we know, things like this are all over the place and we just haven’t stumbled across one until now. There’s a protein fold here which not only has never been seen in the 650 or so iron-sulfur proteins with solved structures, it’s never been seen in any protein at all. That’s worth a good publication, for sure.
The part that’ll really throw people, though, is that this protein (named mitoNEET, for the amino acids that make up its weird fold) binds a known drug whose target we all thought we already knew. Actos (pioglitazone) turns out to associate with it, which is a very interesting surprise. We already knew the glitazones as PPAR-gamma ligands. We didn’t understand them as PPAR ligands (no one understands them very well, despite many years and many, many scores of millions of dollars), but that was generally accepted as their site of action.
And now there’s another one, which is going to make the pioglitazone story even more complex. Reading between the lines of the paper, I get the strong impression that the authors were fishing for another pioglitazone binding site, using modified versions of the drug to label proteins, and hit the jackpot with this one. (And good for them - that's a hard technique to get to work). There’s been some speculation that the compound might have effects on mitochondria that wouldn’t necessarily be PPAR-mediated, and this is strong circumstantial evidence for it.
What’s more, I can’t think of any other iron-sulfur proteins that are targets of small molecules. Just last week, I was talking about the diversity of binding sites and interactions that we haven’t explored in medicinal chemistry, and here’s an example for you.
This paper raises a pile of questions: what does mitoNEET do? Shuttle iron-sulfur complexes around? (If so, to where, and to what purpose?) Is it involved in diabetes, or other diseases of metabolism? Does pioglitazone modify its activity in vivo, whatever that activity is? How well does it bind the drug, anyway, and what does the structure of that complex look like? Does Avandia (rosiglitazone) bind, too, and if not, why not? Are there other proteins in this family, and do they also have drug interactions that we don’t know about? Ah, we’ll all be employed forever in this business, for as long as people can stand it.
Category: Biological News | Diabetes and Obesity
September 2, 2007
I notice that the first marketed renin inhibitor seems to be doing fairly well. That's an interesting phrase, "first marketed renin inhibitor". . .
This is a good example of what drug discovery can be like. Renin is a fine drug target – it’s been known for a long time as a key component of blood pressure regulation, and that’s a condition affecting a huge market whose treatment provides a real medical benefit. What more do you want?
OK, let’s make it even more attractive. It’s not that hard to set up a renin assay, and the protein is well-studied. The counterscreens and secondary assays are not a problem; hypertension is fairly well understood. And if you screen for renin inhibitors, you generally find chemical matter to start off with, too. Protease inhibitors vary quite a bit in their drug-likeness, but they’re certainly not impossible on the face of them.
But even after all this, I would not like to be asked to count how many renin inhibitors have been reported over the years, never to be seen again. The first reports I can find go back to the early 1980s. Given the lead time for these things, I can safely assume that these compounds were being made around the time I went to my high school Junior Prom (theme: “Saturday Night Fever”, natch – it was 1978, after all). And here we are in 2007, and the first one has finally made it to market. It wasn't easy, either - the compound was left for dead years ago, and was only kept going by some ex-Novartis people who started their own company and licensed the compound back to Novartis when it finally made it through the rough spots.
So, what’s the problem? Many compounds have been done in by poor behavior in living models (distribution, absorption, and so on). Getting oral bioavailability in this area has been a lot harder than anyone thought, and even the current drug is no great winner in that category. Projects start and stop, difficulties occur, and the years go by. And other mechanisms for going after hypertension have, of course, come to market, starting with the ACE inhibitors (which come from roughly the same disco era as the first run of renin compounds). They took the gigantic market that an early-1980s renin inhibitor would have had, but even so, I don’t think a year has gone by since then without someone in the industry working on one. (There's still room to think that a renin compound would have a better profile than the existing drugs, though). And here we are: 2007. A sobering thought, that is.
Category: Cardiovascular Disease | Drug Development | Drug Industry History