About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: email@example.com
June 29, 2012
Has there ever been a less structurally appealing class of drugs than the cholesteryl ester transfer protein (CETP) inhibitors? Just look at that bunch. From left to right, that's Pfizer's torcetrapib (which famously was the first to crash and burn back in 2006), Roche's dalcetrapib (which was pulled earlier this year from the clinic, a contributing factor to the company's huge recent site closure), Merck's anacetrapib (which is forging on in Phase III), Lilly's evacetrapib (which when last heard from was also on track to go into Phase III), and a compound from Bristol-Myers Squibb, recently published, which must be at least close to their clinical candidate BMS-795311.
Man, is that ever an ugly-looking group of compounds. They look like fire retardants, or something you'd put in marine paint formulations to keep barnacles from sticking to the hull. Every one of them is wildly hydrophobic, most are heavy on aromatic rings, and on what other occasion did you ever see nine or ten fluorines on one drug molecule? But, as you would figure, this is what the binding site of CETP likes, and this is what the combined medicinal chemistry talents of some of the biggest drug companies in the world have been driven to. You can be sure that they didn't like it, but the nice-looking compounds don't inhibit CETP.
Will any of these fancy fluorocarbon nanoparticles make it through to the market, just on properties/idiosyncratic toxicity concerns alone? How do their inhibitory mechanisms differ, and what will that mean? Is inhibiting CETP even a good idea in the first place, or are we finding out yet more fascinating details about human lipoprotein handling? Money is being spent, even as you read this, to find out. And how.
+ TrackBacks (0) | Category: Cardiovascular Disease | Clinical Trials | Toxicology
There are a number of structures that I've never been quite able to make up my mind about in medicinal chemistry. One of those is the pyridine N-oxide.
You really don't see those in drugs (at least, no examples come to mind), but you don't see many people trying to advance them as drugs, either. Note: the first comment points out the two key examples I'd forgotten: Librium and minoxidil. Once in a while they turn up in the literature, often never to be seen again. I believe that one problem with them is that they present in a living system as mild oxidizing agents, which is the sort of thing that cells try to avoid, and I can't imagine that their pharmacokinetics are very appealing either. There are quite a few pyridine derivatives that are turned into their N-oxides on the way to being excreted, which makes you think that bringing one in from the start is greasing the skids for fast clearance. But I've never seen one dosed, so how would I know for sure?
These thoughts are prompted by this paper from J. Med. Chem., which has an even stranger-looking benzotriazine bis-oxide. These compounds seem quite active against drug-resistant tuberculosis strains (and it's always good to see something that can kill those guys off), but I'll watch with interest to see if they can be developed into drugs. Anyone else out there ever had the nerve to push an N-oxide forward?
+ TrackBacks (0) | Category: Life in the Drug Labs
June 28, 2012
If you're looking for an R. B. Woodward-themed decoration for your lab, look no further: Chemjobber has you covered.
+ TrackBacks (0) | Category: Blog Housekeeping
Over at Forbes, Matthew Herper has some thoughts now that the major parts of the Affordable Care Act have been upheld. Among them is this on its effect on the pharma business:
Will the law actually benefit some drug companies? Many in the drug business have expressed regret about the decision to back the Affordable Care Act, even blaming former Pfizer chief Jeffrey Kindler, a Democrat, for having pushed a deal through. I think that some of this opposition is based on outdated thinking that says that even though the government already pays for a lion’s share of health care spending through Medicare and Medicaid, giving it even more control will eventually create price controls like in Europe.
This made sense when the industry made all of its money selling mass market pills such as Lipitor and Plavix, both now off-patent. But the model for many new cancer drugs (the biggest category in drug company pipelines) and for drugs for rare diseases is that the companies charge a price no individual can pay, and then try to get insurers and governments to pay for them. This is the basic strategy taken by companies like Alexion, Biomarin, and the Genzyme division of Sanofi, all of which charge hundreds of thousands of dollars per patient per year for their medicines. Getting more people insured is good for these companies. Right now Alexion and Biomarin are down, which makes little sense. Fundamentally, the success of the drug industry depends on inventing new medicines; at most, the law is neutral. . .
We'll see. I think that the high-price/low-patient-population strategy that Herper refers to will be up for revision at some point, and perhaps sooner than we expect. One of the selling points of the ACA/Obamacare was that it would (somehow) contain costs, and I still have a lot of trouble believing that it will do anything of the kind. If (when?) we find that we're still spending piles of money on health care, one of the more politically popular ways to cut costs (or at least look as if you're cutting costs) will be to go after therapies that cost six figures a year.
And this could get tricky, because any cancer drugs that are actually effective are likely to be so only for small populations (the people who have tumors that are driven by one treatable mutation, as opposed to a swarm of genomically unstable cells that can mutate their way out of attempts to shut them down). The more we learn about which drugs to give to which patients, the smaller the treatable population gets for any individual drug, and the higher the price. These lines have been heading for an intersection for some time now, and I don't see how the health care law will keep things from getting messy.
+ TrackBacks (0) | Category: Business and Markets | Cancer | Regulatory Affairs
You'll remember the life-extending fullerenes paper that I blogged about here, and the various problems with it that sharp-eyed readers here spotted. (These drew comments from the lead author here and here). Now the journal has issued a correction that covers some of these issues, along with the following Editor's Note:
It should be noted that one of these errors, referring to the inadvertent duplication of the same image within two panels of Fig 4, was pointed out to the Editor-in-Chief by several readers. The authors contacted the Editor-in-Chief with an explanation of this error and an error in Figure 3 before he requested an explanation from the authors. This paper draws conclusions that appear counter-intuitive. The Editor-in-Chief received two very detailed reports from referees who indicated that the methodology appeared sound and they both recommended acceptance after some revision. Neither referee nor the Editor-in-Chief noticed either error, and the revised paper was published. Due consideration has been given to the potential effect of these errors on the overall results and conclusions drawn, and so it has been decided the conclusions are still valid. The authors have provided explanations of how the errors were made during the preparation of graphics and images.
The big questions remain - can these results be duplicated, and is anyone willing to try?
+ TrackBacks (0) | Category: Aging and Lifespan
There's no escaping politics and health care policy today. No matter what happens to the Affordable Care Act this morning (and no matter what you think of it either way), if you work in the drug industry, it's worth recalling that PhRMA (the big-company industry association) was very much in favor of the legislation. At least as it was finally passed, that is - there was a lot of quid-pro-quo-ing about drug reimportation and Medicare pricing, and agreement on those appears to have been PhRMA's price for supporting the bill. It was a deal that many objected to at the time, and in one of the few other times I've talked politics on this blog, I wondered if it was going to hold up even at that.
We know more of these details because of a set of e-mails and internal memos that show the group's agreement to advertise in favor of its passage, and to help senators and representatives who voted for it:
“As part of our agreement, PhRMA needs to undertake a very significant public campaign in order to support policies of mutual interest to the industry and the Administration,” according to a July 14, 2009, memo from the Pharmaceutical Research and Manufacturers of America. “We have included a significant amount for advertising to express appreciation for lawmakers’ positions on health care reform issues.”
The goal, the memo said, was to “create momentum for consensus health care reform, help it pass, and then acknowledge those senators and representatives who were instrumental in making it happen and who must remain vigilant during implementation.”
One of the vehicles for this was a coalition (involving PhRMA, the AMA, and others) called "Healthy Economy Now" (HEN), which appears to have been started by White House staffers. None of that is surprising or particularly unusual, but an unusual twist involves the White House's David Axelrod and his former advertising company AKPD. The company was still paying Axelrod at the time, and his son was working there, and it appears that they got a good part of the advertising business that PhRMA and the others funded:
A 2009 PhRMA memo also makes clear that AKPD had been chosen before PhRMA joined HEN. It's also clear that some contributors didn't like the conflict of interest. When, in July 2009, a media outlet prepared to report AKPD's hiring, a PhRMA participant said: "This is a big problem." Mr. Baldick advises: "just say, AKPD is not working for PhRMA." AKPD and another firm, GMMB, would handle $12 million in ad business from HEN and work for a successor 501(c)4.
Well, that's Washington, and no mistake. If you don't sit down at the table and cut a deal with these folks, this sort of thing happens to you. But no matter which way the Supreme Court goes this morning, or what parts of the bill might be struck down, it will affect the drug industry. From PhRMA's standpoint, the current legislation represents the fruits of a great deal of lobbying and arm-twisting (in both directions), a great deal of money, and a great deal of worry about future revenues. This work may be in danger of going partially or wholly for naught. We'll find out at 10 AM.
+ TrackBacks (0) | Category: Current Events
June 27, 2012
If you've been following Shire Pharmaceuticals and their Replagal saga, you've had quite a few twists and turns to keep you occupied. Replagal (agalsidase) is Shire's alternative to Fabrazyme, the Fabry's disease therapy from Genzyme, and during Genzyme's protracted manufacturing troubles, Shire was only too glad to step up in the market in the US. (Replagal hadn't been approved here, but had been on the market in Europe for some years).
So back in March, it was quite a surprise when the FDA turned down Shire's application. And details of why this happened have been scarce - until now? BioCentury is out with a very interesting story, based on what they say is "an unsolicited package of documents that are labeled as FDA briefing materials" from an anonymous source.
If these are on the level, the FDA seems to have had concerns that Replagal's physical characteristics have changed over the years, due to changes in manufacturing (most specifically, the switch to a bioreactor from roller bottles and the use of a different purification method). The content of sialic acid and mannose-6-phosphate, among other factors, seems to be an issue, and cellular uptake of the final product may have drifted downwards. The FDA seems to have decided that the only way to answer such questions was in the clinic, and that's when Shire seems to have balked.
No one at Shire or the FDA would comment on the authenticity of the documents, but as BioCentury says, no one has taken the opportunity to dispute them, either. You wonder just how they walked out of the FDA. . .
+ TrackBacks (0) | Category: Regulatory Affairs
A lot of natural product structures have been misassigned over the years. In the old days, it was a wonder when you were able to assign a complex one at all. Structure determination, pre-NMR, could be an intellectual challenge at the highest level, something like trying to reconstruct a position on a chess board in the dark, based on acrostic clues in a language you don't speak. The advent of modern spectroscopy turned on the lights, which is definitely a good thing, but many people who'd made their careers under the old system missed the thrill of the old hunt when it was gone.
But even now, it's possible to get structures wrong - even with high-field 2-D NMR, even with X-ray crystallography. Natural products can be startlingly weird by the standards of human chemistry, and I still have a lot of sympathy for anyone who's figuring them out. My sympathy goes only so far, though.
Specifically, this case. I have to agree with the BRSM Blog, which says: "I have to say that I think I could have done a better job myself. Drunk." Think that's harsh? Check out the structures. The proposed structure had two naphthalenes, with two methoxys and four phenols. But the real natural product, as it turns out, has one methoxy and one phenol. And no naphthyls. And four flipping bromine atoms. Why the vengeful spirit of R. B. Woodward hasn't appeared, shooting lightning bolts and breaking Scotch bottles over people's heads, I just can't figure.
+ TrackBacks (0) | Category: Analytical Chemistry | Chemical News | Natural Products
June 26, 2012
Big news, and unfortunate news for the New Jersey end of the pharma business: Roche is closing down their entire site in Nutley. That's a loss of 1000 jobs, and an end to a research site that's been going since the 1930s and which was once a huge presence in the R&D world. The research is going to be picked up by Roche sites in Germany and Switzerland. The company says that it's going to open a smaller translational research center, though:
A location is being identified on the East Coast to focus on translational clinical research to support Roche US-based clinical trials and early development programs, support and maintain Roche interactions with the U.S. Food and Drug Administration (FDA), and enhance Roche's collaborations with US based partners, such as academic institutions and biotech companies. This new center is expected to host around 240 employees.
That sounds like a Boston/Cambridge deal to me, but we'll see. For now, we have a very large closing indeed.
+ TrackBacks (0) | Category: Business and Markets
Nature Reviews Drug Discovery has an article on the current state of drug development, looking at what's expected to be launched from 2012 to 2016. There's a lot of interesting information, but this is the sentence that brought me up short: "the global pipeline has stopped growing". The total number of known projects in the drug industry (preclinical to Phase III) now appears to have peaked in 2009, at just over 7700. It's now down to 7400, and the biggest declines are in the early stages, so the trend is going to continue for a while.
But before we all hit the panic button, it looks like this is a somewhat artificial decline, since it was based on an artificial peak. In 2006, the benchmark year for the 2007-2011 cohort of launched drugs, there were only about 6100 projects going. I'm not sure what led to the rise over the next three years after that, but we're still running higher. So while I can't say that it's healthy that the number of projects has been declining, we may be largely looking at some sort of artifact in the data. Worth keeping an eye on.
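To put those pipeline figures side by side (a quick back-of-the-envelope sketch, using only the numbers quoted above from the article):

```python
# Rough arithmetic on the project counts reported in the
# Nature Reviews Drug Discovery analysis quoted above.
peak_2009 = 7700      # total projects, preclinical through Phase III
current = 7400        # the latest figure
baseline_2006 = 6100  # benchmark year for the 2007-2011 launch cohort

drop_from_peak = (peak_2009 - current) / peak_2009
above_baseline = (current - baseline_2006) / baseline_2006

print(f"Decline from the 2009 peak: {drop_from_peak:.1%}")
print(f"Still above the 2006 level by: {above_baseline:.1%}")
```

So the pipeline is off about 4% from its peak, but still more than 20% larger than it was in the benchmark year, which is why the "decline" looks more like the unwinding of a bulge.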
And the authors go on to say that this larger number of new projects, compared to the previous five-year period, should in fact lead to a slight rise in the number of new drugs approved, even if you assume that the success rates drop off a bit. They're guessing 30 to 35 launches per year, well above the post-2000 average. Peak sales for these new products, though, are probably not going to match the historical highs, so that needs to be taken into account.
More data: the coming cohort of new drugs is expected to be a bit more profitable, and a bit more heavily weighted towards small molecules rather than biologics. Two-thirds of the revenues from this coming group are expected to be from drugs that are already in some sort of partnership arrangement, and you'd have to think that this number will increase further for the later-blooming candidates. The go-it-alone blockbuster compound really does seem to be a relative rarity - the complexity and cost of large clinical trials, and the worldwide regulatory and marketing landscape have seen to that.
As for therapeutic area, oncology has the highest number of compounds in development (26% of them as of 2011). It's to the point that the authors wonder if there's an "oncology bubble" on the way, since there are between 2 and 3 compounds chasing each major oncology target. Personally, I think that these compounds are probably still varied enough to make places for themselves, considering the wildly heterogeneous nature of the market. But it's going to be a messy process, figuring out what compounds are useful for which cases.
So in the near term, overall, it looks like things are going to hold together. Past that five-year mark, though, predictions get fuzzier, and the ten-year situation is impossible to forecast at all. That, in fact, is going to be up to those of us doing early research. The shape we're in by that time will be determined, perhaps, by what we go out into the labs and do today. I have a tool compound to work up, to validate (I hope) an early assay, and another project to pay attention to this afternoon. 2022 is happening now.
Update: here are John LaMattina's thoughts on this analysis, asking about some things that may not have been taken into account.
+ TrackBacks (0) | Category: Business and Markets | Drug Development
June 25, 2012
Here's another reminder that we don't know what a lot of existing drugs are doing on the side. This paper reports that the kinase inhibitor Nexavar (sorafenib) is actually a pretty good ligand at 5-HT (serotonergic) receptors, which is not something that you'd have guessed at all.
The authors worked up a binding model for the 5-HT2a receptor and ran through lists of known drugs. Sorafenib was flagged, and was (experimentally) a 2 micromolar antagonist. As it turns out, though, it's an even stronger ligand for 5-HT2b (57 nM!) and 5-HT2c (417 nM), with weaker activity on a few other subtypes. This makes a person wonder about the other amine GPCR receptors, since there's often some cross-reactivity with small molecule ligands. (Those, though, often have good basic tertiary amines in them, carrying a positive charge under in vivo conditions. Sorafenib lacks any such thing, so it'll be interesting to see the results of further testing). It's also worth wondering if these serotonergic activities help or hurt the drug in oncology indications. In case you're wondering, the compound does get into the brain, although it's significantly effluxed by the BCRP transporter.
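For anyone who likes to see these affinities on one scale, here's the standard conversion to pKi (just the numbers quoted above, run through the usual log transform; assay details are in the paper itself):

```python
import math

# Reported affinities for sorafenib at serotonin receptor subtypes,
# taken from the figures quoted above (nM).
ki_nM = {"5-HT2a": 2000, "5-HT2b": 57, "5-HT2c": 417}

# pKi = -log10(Ki in molar units) puts everything on one log scale,
# where higher numbers mean tighter binding.
pki = {rec: -math.log10(k * 1e-9) for rec, k in ki_nM.items()}

for rec, value in sorted(pki.items(), key=lambda kv: -kv[1]):
    print(f"{rec}: pKi {value:.2f}")

# Fold difference between the strongest and weakest of the three:
print(f"5-HT2b vs 5-HT2a: ~{ki_nM['5-HT2a'] / ki_nM['5-HT2b']:.0f}-fold")
```

That 57 nM figure works out to a pKi above 7, which is solidly in the range of deliberately designed serotonergic ligands - for an off-target activity, that's worth paying attention to.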
What I also find interesting is that this doesn't seem to have been picked up by some of the recent reports on attempts to predict and data-mine potential side effects. We still have a lot to learn, in case anyone had any doubts.
+ TrackBacks (0) | Category: Cancer | Drug Assays | The Central Nervous System | Toxicology
It's not anything to shake the earth, but I'm actually happy to see new variations being discovered for ancient reactions like the Friedel-Crafts. It makes sense that an activated amide could participate in the reaction, but it looks like no one's ever quite explored the idea like this.
And yes, I know that a large Friedel-Crafts can be a pain, what with all that aluminum gunk. The biggest one I ever ran used protic conditions (methanesulfonic acid), so I haven't had the complete experience, but I've still managed to work my way through some gooey aluminum milkshakes. But it's still a useful reaction on the bench scale.
And somehow, I can read a paper like this one and be pleased, while a paper on yet another way to dehydrate an oxime to a nitrile makes me roll my eyes. I'm still trying to work out why that might be - a bit broader scope? More possible utility? Just the fact that this was something that no one had quite thought of, as opposed to another way to take the same starting material to the same product? I should figure out what my boundaries are.
+ TrackBacks (0) | Category: Chemical News
June 22, 2012
OK, it's time to haul the marketing guys back in again. Via Pharmalot, I see that Merck, in its capacity now as Merck Schering-Plough, is promoting Claritin via a tie-in with the kids' movie "Madagascar 3". That is certainly the first time I've ever heard of a drug company co-promoting with a children's movie (or, actually, any movie at all, although I've probably missed an example or two).
And if these reports are true, the promotion is rather extensive, in an eye-rolling, cringe-inducing way:
". . .customized Madagascar 3 packaging for both types of Claritin; a “Free Movie Ticket Offer” promotion with a Claritin purchase at Walgreens; the Claritin Facebook page offers a free, downloadable Madagascar Inspired Circus Activity Guide and a Madagascar themed “Circus Stackers” game; eight activity guides for free download from Facebook, and product packaging that included “5 Free Stickers” of Madagascar characters.
Merck also initiated “Children’s Claritin Mom Crew” members to hold Madagascar-themed viewing parties. Mom Crew members are bloggers who have been selected by Merck to be product endorsers, the letter states. [The Public Health Advocacy Institute at Northeastern], in fact, says it ran a Google search using the terms “Claritin mom crew Madagascar.” Of the first 40 search results, 31 were unique accounts of Children’s Claritin Madagascar viewing parties held by Claritin Mom Crew members from across the country.
Well, I just did that same search, and believe me, it's no longer the case. The Pharmalot post is now the first search result, and most of the others are unfavorable publicity in the same vein. But there certainly are accounts from the (excuse me while I hold my nose) "Claritin Mom Crew" in the results, although I'm having a lot of trouble believing that these are real blog posts that emerged from spontaneous human action. This really smells like a planned campaign, with close attention paid to phrasing, linking, and other search engine optimization techniques. For one thing, I note that every mention of this thing is carefully capitalized, and there's even a standard Twitter hashtag.
But maybe my ideas of "spontaneous human action" need to be a bit broader. There's been a long-standing technique of spreading endorsements via compensated blog posts (money, coupons, discounts, affiliate percentages), and there are surely many here's-what-I-do-with-my-kids sites that exist partly (or wholly) to reap the benefits from all these promotions. I note that at least one of these blogs is now feeling the backlash from all the negative publicity in this case, and they're probably not alone. (Roll with it, I say - after all, think of all the extra traffic you're getting!)
But this whole promotion is a rotten idea, although I suppose that you'd have to be a marketing whiz for that not to at least cross your mind. How anyone could have planned it and launched it without realizing that this backfire reaction was exactly what would surely happen is beyond me. And sure, maybe they're going to sell some more children's Claritin, briefly. But how much more, compared to all the negative PR? Compared to headline after headline that makes Merck look like the sort of organization that has no problem using cartoon character tie-ins to sell histamine receptor antagonists to kids?
Hey, why not? After all, Merck - or at least their marketing department - clearly is the sort of organization that has no problem with that at all. Own it, guys - stand up and be proud. I'm sure you can manage it.
+ TrackBacks (0) | Category: Business and Markets | Why Everyone Loves Us
June 21, 2012
Now here is a piece on scientific literacy that I find interesting. The author, Daniel Sarewitz, is wondering why so many people equate it with knowing facts:
We have this belief that unless a person knows that the Earth rotates around the sun and that birds evolved from dinosaurs, she or he won’t be able to exercise responsible citizenship or participate effectively in modern society. Scientists are fond of claiming that literacy in their particular area of expertise (such as climate change or genomics) is necessary so “the public can make informed judgments on public policy issues.”
Yet the idea that we can say anything useful at all about a person's competence in the world based on their rudimentary familiarity with any particular information or type of knowledge is ridiculous. Not only is such information totally disembodied from experience and thus no more than an abstraction (and an arbitrary one at that), but it also fails to live up to what science ultimately promises: to enhance one's ability to understand and act effectively in a world of one’s knowing.
This point has often troubled me. I recall Richard Feynman's attempt to reduce the key insights of physics down to a single sentence. ("If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generation of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis that: all things are made of atoms - little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another. In that one sentence, you will see, there is an enormous amount of information about the world, if just a little imagination and thinking are applied.") And I still can't help thinking that some basic scientific knowledge about the world is an essential part of anyone's mental furniture.
But where to stop? This is the slippery "physics for poets" problem, and I don't think it's ever been solved. Yes, everyone should know that things are made out of atoms, and that there are only a certain number of different kinds of atoms. And I'd like for people to know that living things are mostly just made out of eight or ten of those, with carbon being the most important. But at that point, are we already getting close to the borderline between knowledge and trivia? What should people know about carbon? About atomic bonds? About biomolecules? I'd like for people to know roughly what DNA is, and what proteins are, and what carbohydrates are (other than "stuff that's in food"). But in how much detail? The details multiply very, very quickly.
The same goes for any other science. A hobby of mine is astronomy, and I certainly think that everyone should know that the Earth and the other planets go around the sun, with moons that go around many of them. I'd like for them to know that the other stars are things much like our sun, and very much further away. But should people know about red giants and white dwarfs and supernovas? I'd like for people to know that Jupiter is a big planet, with moons. But how many moons? Should they know the names of the Galilean satellites or not? And what good would it do them if they did?
Ah, you say, science literacy should focus not so much on the mass of facts, but on the process of doing science itself. It's a way of looking at (and learning about) the world. And I agree with that, but Sarewitz isn't letting that one off easily, either:
A more sophisticated version of science literacy that focuses not on arbitrary facts but on method or process doesn't help much, either. The canonical methods of science as taught in the classroom are powerful because they remove the phenomenon being studied from the context of the real world and isolate it in the controlled setting of the laboratory experiment. This idealized process has little if any applicability to solving the problems that people face on a daily basis, where uncertainty and indeterminacy are the rule, and effective action is based on experience and learning and accrued judgment. Textbook versions of scientific methods cannot, for example, equip a nonexpert to make an informed judgment about the validity or plausibility of technical claims made by experts.
This is overstated (I hope). The scientific technique of isolating variables is key to troubleshooting of all kinds, all the way down to problems like why the toaster oven isn't coming on. (Problem with the switch? Problem with the cord? Problem with the plug? Problem back at the circuit breaker?) And the concept of reproducibility has broad application as well. But it's true that school curricula don't always get these things across.
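That isolate-one-variable habit is, after all, exactly how a programmer debugs. As a toy sketch (the component names here are invented for the illustration, not anything from the post), you check each link in the chain independently rather than jiggling everything at once:

```python
# Toy illustration of isolating variables: test each component of a
# hypothetical toaster-oven circuit on its own and report the first
# one that fails its isolated check.
def find_fault(components, works):
    """Return the first component failing its isolated test, else None."""
    for part in components:
        if not works(part):
            return part
    return None

chain = ["circuit breaker", "outlet", "plug", "cord", "switch", "heating element"]

# Pretend the cord is the broken link:
broken = {"cord"}
fault = find_fault(chain, works=lambda part: part not in broken)
print(fault)  # -> cord
```

One test, one variable, everything else held fixed - that part of the scientific method survives contact with daily life just fine.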
One of the responses to the article brings up an interesting analogy - music. There's being able to listen to music, and decide if you like it or not, or if it does anything for you. Then there's being able to read sheet music. And there's being able to play an instrument yourself, and past that, the ability to compose. When I say that I'd like for more people to know more about science, I think that I'm asking for more people to be able to hear the music that I hear. But is that really what it means?
+ TrackBacks (0) | Category: Who Discovers and Why
Chemistry moves on, and it doesn't always take everything with it. There are reagents and reactions that used to be all over the literature, but have fallen out of use, superseded by easier or more reliable alternatives. The first thing I think of in this category is pyridinium chlorochromate (PCC), which I wrote about here. That was all the rage in the late 1970s and into the 1980s, but I don't know when I've last seen a bottle of the stuff.
And since that post itself is seven years old now, I wanted to throw the floor open again for a discussion of dead reagents and dusty reactions. There are plenty of obscure ones, of course, and plenty that don't get much use but still have their place in special situations. But I'm wondering about the ones that used to be big and now are disappearing. What are some that you used to use, but never expect to again?
For my part, other than PCC, I don't ever see doing a vanadium-catalyzed epoxidation, even though I did a few in grad school. And I recall doing a Jones oxidation - does anyone use that one any more? Another reagent that had a vogue in the late 1980s and early 1990s, but I don't recall seeing any time recently, was tris(trimethylsilyl)silane (a replacement for tri-n-butyltin hydride). So those are my nominees - what else?
+ TrackBacks (0) | Category: Life in the Drug Labs
June 20, 2012
If you're looking for the master list of business-speak cliches, look no further. That article has 89 of them, and if you can get to the end without shedding neurons from your frontal lobes, you're tougher than I am. (Note that the author of the piece recognizes this danger himself). It's a value-added, win-win paradigm shift, net-net, at the end of the day, sure to move the needle going forward. Dang. There go some of those neurons right now.
Category: Business and Markets
Here's a rather testy letter to the editors of The Lancet about some recent work published there by Novo Nordisk and collaborators.
Both trials produce the same finding. . .Each focuses its main conclusion not on this primary outcome, but on one of several secondary measurements: nocturnal hypoglycaemia in the first paper and overall hypoglycaemia in the second. In both, the difference was of marginal significance and no mention is made of adjustment for multiple testing. These lower hypoglycaemia rates in unblinded studies should be considered, at best, hypothesis generating. At worst they are spurious. . .
The Lancet's reprints are a major source of revenue for the journal, and a major part of drug company marketing. These trials were written and analysed by NovoNordisk statisticians and NovoNordisk-funded professional writers. We applaud their skill, but regret the lack of editorial effort deployed to balance it. . .
"What are editors for?", asks the letter. This brings up something that we all may have to contend with if the scientific publishing model continues to change and erode. The publishers themselves make much of their status as gatekeepers, citing their coordination of the peer review process and their in-house editing. (The counterarguments are that the peer review is being done by free labor, and not always very effectively, and that the quality of the in-house editing varies from "pretty good" to "surely you jest").
These papers are a case in point. What if they are, as the letter writers contend, largely just vehicles for marketing? That sort of thing certainly does happen. Will it happen even more under some new scientific publishing system? You'd have to think that the marketing folks are wondering the same thing, but from the standpoint of a feature rather than a bug.
Marketing, though, would rather have papers to point at that are published in a prestigious journal, which is one reason that letter is being sent to The Lancet. And no matter what sort of publishing model comes along, I don't think that we're ever going to get rid of prestige as a factor, human nature being what it is. (And beyond that, having a stratum of recognizably prestigious journals does have its uses, although its abuses can outweigh them). It is, in fact, the prestige factor that's keeping the current system afloat, as far as I can see.
The only thing I can think of to replace it that wouldn't be as vulnerable to the same abuses would be one where papers float to the top through reader comments and interest. Upvotes, downvotes, number of comments and replies, number of downloads and page views - these might end up as what people point to when they want to show the impact of their papers, along with the traditional measures based on citations in other papers. But while that might avoid some of the current problems, it would be open to new ones, various ways of gaming the system to boost papers beyond where they naturally would end up (and to send rival work down the charts as well?). There's also the problem that the most-discussed papers aren't a perfect proxy for the most important ones. A harder-to-comprehend paper, made that way either through its presentation or through its intrinsic subject matter, will make less headway. And deliberately buzzy, controversial stuff will rise faster and higher, even if it's not so worthwhile on closer inspection.
It's probably impossible to come up with a system that can't be gamed or abused. I won't miss the current one all that much, but we'll have to be careful not to replace it with something worse.
Category: The Scientific Literature
June 19, 2012
Those of you who are fans of high-throughput reaction discovery have another paper to check out - and those who aren't have another reason to grit your teeth. (Previous examples here, here, and here). The authors, a collaboration between the Bellomo lab at Penn and the Merck process group at Rahway, have gotten the reaction screen size down to 20 microliters with 1 mg of compound, which allows you to go through 96-well plates pretty rapidly.
Their test bed was a pyrimidone synthesis reaction. They screened 475 reaction conditions (95 different additives/catalysts in 5 different solvents). Each of them got one hour at 60 °C to show what it could do, and the entire analysis was completed in one day. A phenanthroline/copper bromide catalyst in dioxane showed the best results from that run, so it was taken to a separate series of experiments, to see how low the loading could go, what similar solvents might work out, how low they could push the reaction temperature, and so on. As it turned out, 2-methyl THF at RT overnight with 5% catalyst gave an 84% yield, which already represented a significant improvement over the known conditions (which used no catalyst at 140 °C).
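The scale of that screen maps neatly onto standard labware: 95 catalysts crossed with 5 solvents gives 475 conditions, one 96-well plate per solvent with a well to spare. A minimal sketch of that enumeration, using hypothetical reagent labels (the post doesn't list the actual additives or all five solvents):

```python
from itertools import product

# Hypothetical labels; the actual 95 additives/catalysts and the full
# solvent set are not given in the post.
catalysts = [f"cat_{i:02d}" for i in range(95)]
solvents = ["dioxane", "2-Me-THF", "toluene", "DMF", "MeCN"]  # illustrative

# One well per (solvent, catalyst) combination: 5 x 95 = 475 conditions.
conditions = list(product(solvents, catalysts))
assert len(conditions) == 475

# Each solvent's 95 conditions fit on one 96-well plate, leaving one
# well spare (e.g. for a catalyst-free control).
plates = {s: [c for (sv, c) in conditions if sv == s] for s in solvents}
assert all(len(wells) == 95 for wells in plates.values())
```

At 20 microliters and 1 mg of substrate per well, the whole design comes in at under half a gram of material, which is what makes a one-day turnaround plausible.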
Moving to different substrates, they found that these conditions gave product each time, but in varying yields. A further catalyst screen, with 112 phosphine ligands, gave another set of conditions that could also be applied to the diverse substrates. A re-screen of solvents together with the best phosphine gave (along with the initial optimization conditions) high-yielding reactions for each of the new substrates. There was no set of one-size-fits-all conditions, though, which certainly fits with organic synthesis as I know it.
With these in hand, they did some work on the mechanism. It appears that the copper is participating in a single-electron transfer reaction, but further details aren't clear. It's not the sort of thing that you would have been able to think your way to on a blackboard, which (to me) is the whole point of doing chemistry this way. As the authors put it:
We envision that similar, generally useful platform tools will soon become more widely available, thus dramatically impacting chemistry development and enabling increased access to chemical diversity and lower-cost synthesis. Most importantly, we believe that such platforms will lead to the discovery of new and potentially useful chemical reactivity and reaction mechanisms.
Exactly. We should be finding easier ways to make compounds, and new ways to make compounds we've never been able to prepare. I think that searching for them in this way is an efficient way to do that, and will also open up new areas of research as we stumble across things we never realized were even there. If there's a downside here, I'm not seeing it.
Category: Chemical News
I've written here before about the (now) well-known problem of compound aggregation in screening assays. You can get false positives when a compound itself forms a colloidal mass that pulls the target protein into it. The readout looks as if the protein has been inactivated by the small molecule itself, just the way you were hoping for, but if you add a bit of detergent the activity mysteriously goes away.
The Shoichet lab has a paper out that warns people to look out for this in cellular assays as well. This time you'll get false negatives - the colloidal aggregates don't act right compared to the free molecules, as you could well imagine. Reformulating the assay restores the activity, but the trick is knowing that there was a problem to start with. Something to keep in mind when your cell assay numbers are wonky (as if there weren't enough reasons for that to happen already). Update: I see that Wavefunction has covered this same paper!
Category: Drug Assays
June 18, 2012
I sure hope that Sanofi doesn't really want to own these compounds in this recent patent filing. (Thanks to a reader at another company for sending this along!) But what are the odds of that, given that they went to all the trouble of filing on them?
The reason I say that is while the compounds are drawn correctly in the experimental section (the part that was done by the chemists), the description and the claims (which were done by the lawyers) have the wrong structure in them. Wrong, as in doesn't match the experimentals, and wrong as in doesn't exist, either. The patent is directed towards indolizine derivatives, but the drawings in the claims are not indolizines, but are rather some sort of charged species whose name I do not exactly have at the tip of my tongue. Not that they've drawn any charges on anything, naturally, even though all those nitrogens have four bonds on them.
Does this invalidate the patent? Probably not, although I am most certainly not a lawyer. The names, as written, still seem to be fine. Does this complicate efforts to defend the chemical matter? You'd have to think so. Does it make people at Sanofi look like they either don't know much chemistry, aren't paying much attention, or both? Absolutely.
Category: Patents and IP
My recent post here on whether the US needs a big influx of scientists and engineers has attracted some attention. Discover magazine asked to reprint it on their site, and then Slate asked if I would write a response for them expanding my thoughts on the subject, which is now up here.
It feels odd for me, as a scientist, to be taking this side of the issue. I do think that not enough people know enough science and mathematics, and I would like for these subjects to be taught better than they are in schools. But there's something about the attitude that "America needs more scientists, even mediocre ones" that really doesn't sit right with me. Science, and scientists, aren't like coal. We can't be stored for later use, nor hauled around to do whatever job it is that Generic Scientists are needed to do. It's messier than that, as a look at some of the science and technology industries (like the one I work in) might illustrate.
Category: General Scientific News | Who Discovers and Why
June 15, 2012
The biggest pharma companies increasingly seem to feel as if they need universities nearby. We've talked about this trend before, and Pfizer's current strategy makes it quite clear.
Partnerships between industry and academia, of course, aren’t new. Yet Pfizer, Sanofi, Merck & Co. (MRK) and other drug companies are putting a new twist on the arrangement by stepping up their level of collaboration with universities. In the case of Pfizer, the world’s largest drug company is embedding operations in Boston, San Francisco, New York and San Diego, often in the very same buildings where famed academic institutions have labs.
“No matter how much money you have, nothing compares to the innovation going on out in the world,” said Jose Carlos Gutierrez-Ramos, the director of the [new Pfizer lab in Cambridge], in an interview. “We want to be here, integrated into this fabric.”
Right. As I said earlier, I can definitely see the benefit to putting your research center in Cambridge or South San Francisco as opposed to Duluth or Reno. There are a lot of qualified people in the area who might be interested in moving over to join you, for one thing, and for small companies, that's where the (knowledgeable) money tends to hang out. But I still wonder about this cozy-up-to-the-academic-luminaries approach. Pfizer, for example, is making a big deal out of collaborating with Harvard, and their vision of how this is going to work doesn't quite fit into reality as I've come to know it:
Gutierrez-Ramos said he is trying to create an atmosphere at the lab where outside researchers easily come and go, and Pfizer’s scientists visit neighboring academicians on their turf.
Pharmaceutical companies, which historically are highly secretive about their work because of competition, need to be willing to take more risks in the future, he said, creating access to its inner sanctums to develop drugs earlier.
What Pfizer offers academic researchers are “extraordinary” resources for drug development that nearby university labs can’t match, said Harvard’s [Hal] Dvorak.
The problem with all this my-lab-is-your-lab stuff is that money gets involved. Don't think Harvard doesn't appreciate that, either - anyone who imagines a big pharma company snookering the unworldly Harvard Square luftmenschen should go try to do a deal with the university's technology transfer people. Undervaluing the worth of its own research is not one of Harvard's problems. And matters of intellectual property get involved, too - pesky little matters that lead to Jarndyce v. Jarndyce style lawsuits. No, I have trouble imagining people breezing in and out of each other's labs like some sort of drug-discovery effort set in the Seinfeld universe.
What's interesting is that stories like the one I've linked to say that the drug companies are doing this because money is tight, and they need new revenue streams - thus the collaborations. And the universities are doing it because money is tight, and they need new revenue streams. The only way money is going to come out of these deals in order to fulfill both those expectations is for new drugs to be discovered and marketed, and that's a ten-to-fifteen year process. For now, the money is flowing from the drug industry towards academia.
Let's hope that the success rate of the targets improves. Don't get me wrong - I think that collaborations with academia can be useful, and I'm all for both groups getting to understand each other more. But I wonder if people are building expectations up a bit too much, too soon.
Category: Academia (vs. Industry)
June 14, 2012
Via ChemJobber, here's a quote from the National Research Council's Committee on Challenges in Chemistry Graduate Education. Their report has just come out, and I agree that this should be a key point for people to ponder:
Whitesides believes that the question should be asked whether PhD theses are narrow technical presentations for jobs that no longer exist. Should U.S. graduate students be doing organic synthesis if most organic synthesis is being done in China? “That’s not to say that these aren’t really important activities, but we need to connect our investment in graduate school with what’s actually needed to give jobs to students.”
It's worth remembering that Whitesides hasn't exactly been the biggest booster of traditional organic synthesis over the years, but he does have a point. This may not be the right way to look at the situation, but if it hasn't crossed your mind, you haven't thought hard enough about the issues yet. I have a couple of quick responses:
1. There are all kinds of organic synthesis. I don't think that there's much point to the human-wave-attack style of making gigantic natural products, as I've said here several times. And if there's not much point to what's considered the highest level of total synthesis, then there must really not be much to the low levels of the field. Those are the papers I'd characterize as "Here's a molecule that no one much cares about, made in a way that you'd figure would probably work, using reactions everyone already knows". But there's more to the field than that; at least, there'd better be.
2. Prof. Whitesides is exaggerating to make a point. It's not like there's no organic synthesis being done in the U.S. A lot of the stuff that's moved to China (and India) is routine chemistry that's being outsourced because it's cheap (or has been cheap, anyway). As that changes, the costs go up, and we head towards a new equilibrium. It seems beyond doubt that there are fewer people doing industrial organic chemistry than there used to be in this country, but it's not like it's only found in China (or will be).
3. That said, he's absolutely right that people need to think about where the jobs are, lest chemistry (and some other sciences) go the way of some of the humanities graduate programs. If you go off and get a doctorate in English with a dissertation on minor 18th-century poets, you're mostly qualified to teach other people about minor 18th-century poets so they can go off and write dissertations of their own. (Actually, your own work would probably have concentrated on the relation of said poets to prevailing gender norms or something, in which case I really don't see the point). We do not want to teach people to do organic chemistry if the majority of them are going to have to seek jobs teaching other people to do organic chemistry.
4. Doing that - thinking about the larger economic and scientific context - is hard. The time it takes to get a degree means that the situation could well have changed by the time a person gets out of grad school, compared with the way things looked when they made the decision to go. But this has always been the case; that's life as we know it. People have to keep their eyes open and be intelligent and flexible, because there are potential dead ends everywhere. As hard as that advice is to follow, though, I still think it's better than any sort of scheme to allocate/ration people among different fields of study. My bias against central planning isn't just philosophical; I don't see how it can possibly work, and it is very, very likely to make the situation even worse.
I'm on the train, and can't download a 120-page PDF at the moment, but I'll have a look at the report and add more thoughts as they come up.
Category: Business and Markets | Graduate School | Natural Products
June 13, 2012
I wanted to highlight a couple of recent examples from the literature to show what happens (all too often) when you start to optimize med-chem compounds. The earlier phases of a project tend to drive on potency and selectivity, and the usual way to get these things is to add more stuff to your structures. Then as you start to produce compounds that make it past those important cutoffs, your focus turns more to pharmacokinetics and metabolism, and sometimes you find you've made your life rather difficult. It's an old trap, and a well-known one, but that doesn't stop people from sticking a leg into it.
Take a look at these two structures from ACS Chemical Biology. The starting structure is a pretty generic-looking kinase inhibitor, and as the graphic to its left shows, it does indeed hit a whole slew of kinases. These authors extended the structure out to another loop of their desired target, c-Src, and as you can see, they now have a much more selective compound.
But at such a price! Four more aromatic rings, including the dread biphenyl, and only one sp3 carbon in the lot. The compound now tips the scales at MW 555, and looks about as soluble as the Chrysler building. To be fair, this is an academic group, which means that they're presumably after a tool compound. That's a phrase that's used to excuse a lot of sins, but in this case they do have cellular assay data, which means that despite this compound's properties, it's managing to do something. Update: see this comment from the author on this very point. Be warned, though, if you're in drug discovery and you follow this strategy. Adding four flat rings and running up the molecular weight might work for you, but most of the time it will only lead to trouble - pharmacokinetics, metabolic clearance, toxicity, formulation.
My second example is from a drug discovery group (Janssen). They report work on a series of gamma-secretase modulators (GSMs) for Alzheimer's. You can tell from the paper that they had quite a wild ride with these things - for one, the activity in their mouse model didn't seem to correlate at all with the concentration of the compounds in the brain. Looking at those structures, though, you have to think that trouble is lurking, and so it is.
"In all chemical classes, the high potency was accompanied by high lipophilicity (in general, cLogP >5) and a TPSA [topological polar surface area] below 75 Å, explaining the good brain penetration. However, the majority of compounds also suffered from hERG binding with IC50s below 1 μM, CyP inhibition and low solubility, particularly at pH = 7.4 (data not shown). These unfavorable ADME properties can likely be attributed to the combination of high lipophilicity and low TPSA.
That they can. By the time they got to that compound 44, some of these problems had been solved (hERG, CyP). But it's still a very hard-to-dose compound (they seem to have gone with a pretty aggressive suspension formulation) and it's still a greasy brick, despite its impressive in vivo activity. And that's my point. Working this way exposes you to one thing after another. Making greasy bricks often leads to potent in vitro assay numbers, but they're harder to get going in vivo. And if you get them to work in the animals, you often face PK and metabolic problems. And if you manage to work your way around those, you run a much higher risk of nonspecific toxicity. So guess what happened here? You have to go to the very end of the paper to find out:
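The red-flag profile the Janssen authors describe lends itself to a quick triage check. A minimal sketch, using the thresholds from the quoted passage (the solubility cutoff and all descriptor values are hypothetical illustrations, not data from the paper):

```python
def cns_redflags(clogp, tpsa, herg_ic50_um, solubility_um):
    """Rough triage against the liabilities discussed above.

    Thresholds follow the quoted paper: cLogP > 5 and TPSA < 75 A^2
    favor brain penetration but travel with trouble; hERG IC50 below
    1 uM and low solubility are the accompanying red flags.
    """
    flags = []
    if clogp > 5:
        flags.append("high lipophilicity")
    if tpsa < 75:
        flags.append("low polar surface area")
    if herg_ic50_um < 1:
        flags.append("hERG liability")
    if solubility_um < 10:  # illustrative cutoff, not from the paper
        flags.append("poor solubility")
    return flags

# Hypothetical GSM-like profile (not measured data) - lights up everything:
assert cns_redflags(clogp=5.8, tpsa=60, herg_ic50_um=0.4, solubility_um=2) == [
    "high lipophilicity", "low polar surface area",
    "hERG liability", "poor solubility",
]
```

The point of a filter like this isn't to kill compounds automatically; it's to make sure that nobody is surprised, late in a program, by liabilities that were predictable from the descriptors all along.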
As many of the GSMs described to date, the series detailed in this paper, including 44a, is suffering from suboptimal physicochemical properties: low solubility, high lipophilicity, and high aromaticity. For 44a, this has translated into signs of liver toxicity after dosing in dog at 20 mg/kg. Further optimization of the drug-like properties of this series is ongoing.
Back to the drawing board, in other words. I wish them luck, but I wonder how much of this structure is going to have to be ripped up and redone in order to get something cleaner?
Category: Alzheimer's Disease | Cancer | Drug Development | Pharmacokinetics | Toxicology
June 12, 2012
One of the major worries during a clinical trial is toxicity, naturally. There are thousands of reasons a compound might cause problems, and you can be sure that we don't have a good handle on most of them. We screen for what we know about (such as hERG channels for cardiovascular trouble), and we watch closely for signs of everything else. But when slow-building low-incidence toxicity takes your compound out late in the clinic, it's always very painful indeed.
Anything that helps to clarify that part of the business is big news, and potentially worth a lot. But advances in clinical toxicology come on very slowly, because the only thing worse than not knowing what you'll find is thinking that you know and being wrong. A new paper in Nature highlights just this problem. The authors have a structural-similarity algorithm to try to test new compounds against known toxicities in previously tested compounds, which (as you can imagine) is an approach that's been tried in many different forms over the years. So how does this one fare?
To test their computational approach, Lounkine et al. used it to estimate the binding affinities of a comprehensive set of 656 approved drugs for 73 biological targets. They identified 1,644 possible drug–target interactions, of which 403 were already recorded in ChEMBL, a publicly available database of biologically active compounds. However, because the authors had used this database as a training set for their model, these predictions were not really indicative of the model's effectiveness, and so were not considered further.
A further 348 of the remaining 1,241 predictions were found in other databases (which the authors hadn't used as training sets), leaving 893 predictions, 694 of which were then tested experimentally. The authors found that 151 of these predicted drug–target interactions were genuine. So, of the 1,241 predictions not in ChEMBL, 499 were true; 543 were false; and 199 remain to be tested. Many of the newly discovered drug–target interactions would not have been predicted using conventional computational methods that calculate the strength of drug–target binding interactions based on the structures of the ligand and of the target's binding site.
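Those counts are easy to lose track of, so it's worth confirming that the categories in the two quoted paragraphs actually add up (all numbers are taken directly from the quote):

```python
# Bookkeeping for the Lounkine et al. prediction counts quoted above.
total_predictions = 1644
in_chembl = 403                            # training-set hits, set aside
remaining = total_predictions - in_chembl  # 1241 evaluable predictions

in_other_dbs = 348                         # confirmed by independent databases
untested_pool = remaining - in_other_dbs   # 893 left to check
tested = 694                               # tested experimentally
confirmed_in_lab = 151                     # genuine interactions found

true_hits = in_other_dbs + confirmed_in_lab  # 499
false_hits = tested - confirmed_in_lab       # 543
still_untested = untested_pool - tested      # 199

# The three categories partition the 1241 non-ChEMBL predictions.
assert true_hits + false_hits + still_untested == remaining
print(true_hits, false_hits, still_untested)  # 499 543 199
```

So among the predictions that have actually been checked, somewhat under half were real - which is the backdrop for the false-positive discussion that follows.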
Now, some of their predictions have turned out to be surprising and accurate. Their technique identified chlorotrianisene, for example, as a COX-1 inhibitor, and tests show that it seems to be, which wasn't known at all. The classic antihistamine diphenhydramine turns out to be active at the serotonin transporter. It's also interesting to see what known drugs light up the side effect assays the worst. Looking at their figures, it would seem that the topical antiseptic chlorhexidine (a membrane disruptor) is active all over the place. Another guanidine-containing compound, tegaserod, is also high up the list. Other promiscuous compounds are the old antipsychotic fluspirilene and the semisynthetic antibiotic rifaximin. (That last one illustrates one of the problems with this approach, which the authors take care to point out: toxicity depends on exposure. The dose makes the poison, and all that. Rifaximin is very poorly absorbed, and it would take very unusual dosing, like with a power drill, to get it to hit targets in a place like the central nervous system, even if this technique flags them).
The biggest problem with this whole approach is also highlighted by the authors, to their credit. You can see from those figures above that about half of the potentially toxic interactions it finds aren't real, and you can be sure that there are plenty of false negatives, too. So this is nowhere near ready to replace real-world testing; nothing is. But where it could be useful is in pointing out things to test with real-world assays, activities that you probably hadn't considered at all.
But the downside of that is that you could end up chasing meaningless stuff that does nothing but put the fear into you and delay your compound's development, too. That split, "stupid delay versus crucial red flag", is at the heart of clinical toxicology, and is the reason it's so hard to make solid progress in this area. So much is riding on these decisions: you could walk away from a compound, never developing one that would go on to clear billions of dollars and help untold numbers of patients. Or you could green-light something that would go on to chew up hundreds of millions of dollars of development costs (and even more in opportunity costs, considering what you could have been working on instead), or even worse, one that makes it onto the market and has to be withdrawn in a blizzard of lawsuits. It brings on a cautious attitude.
Category: Drug Development | In Silico | Toxicology
June 11, 2012
Here's a perfectly appropriate response to the Slate piece about needing more scientists: "Dear Slate: America Needs More Artists".
America needs Thomas Kinkades and Andy Warhols, but it really needs a lot more good artists, more expressive artists, more mediocre artists, and more starving artists.
In theory, "artsiness" has never been cooler. America sanctifies Steve Jobs (the iPod designer), and envies da Vinci (the Renaissance man-cum-robotic surgeon). There are hipster sculptors, hipster poets, and hipster, well...hipsters. There's 20x200, an entire industry devoted to finding unknown artists, and letting you buy a slice. And yet, American art is in crisis: in this economy, gigs and commissions are tough to come by. Much of our great art comes from overseas (Italy, Japan, Russia) because there aren't enough artists here at home. And many of our best visual and musical minds are snatched up by mainstream media, producing viral apps (Draw Something) or 'selling out' their musical talents (American Idol).
A crisis indeed. Won't anyone take the time to help?
Category: General Scientific News
The Chinese government recently amended its intellectual property law to allow for compulsory licensing. Similar measures are on the books in many other countries, and it's allowed under international patent law (WIPO) in cases of emergency or threats to public health. India recently did this to Bayer's Nexavar. Thailand has used this provision more than once, and other countries (such as Brazil) have threatened it during negotiations with drug companies.
Pharmalot has more on the story, particularly with respect to Gilead and their dealings with the Chinese government over their HIV therapy Viread. As that piece says, China is particularly well suited (as is India) to follow through on such moves, since both countries have robust pharma manufacturing and generic drug business sectors.
I'm actually surprised that the Chinese government didn't have these provisions in place before, though. It's a useful negotiating tool, and I would expect them to avail themselves of everything available, since they are in such a good position to play hardball. Of course, they also have a huge amount of investment from multinational companies on the other side of such considerations - but they also have that huge market that the companies want access to. My guess is that last factor will, in the end, trump everything. There are many drugs, and many drug companies, but there's only one Chinese market. And the only way to that market is through China's one government, which means that companies (and not just drug companies) will continue to smile through gritted teeth and put up with pretty much anything.
Category: Business and Markets | Patents and IP
Unfortunately, I'm hearing talk of a big layoff at Sunovion (née Sepracor). Anyone have any details? Their business has been in developing trouble for some time now. . .
Category: Business and Markets
June 8, 2012
I gave my talk at the Drew University Medicinal Chemistry course, and it got me to thinking about when I was there (1990 or 1991), and my early days in medicinal chemistry in general. There are a lot of things that have to be learned when coming out of a synthetic organic chemistry background, and a few that have to be unlearned. I've written about some of these in the past, but I wanted to bring together some specific examples:
1. I had to appreciate just how strange and powerful metabolizing enzymes are. I approached them from the standpoint of an organic chemist, but p450 enzymes can epoxidize benzene, and I don't know any organic chemists who can do that too well. Ripping open piperazine rings, turning cyclohexanes into cyclohexanols - there are a lot of reactions that are common in metabolic clearance that are not, to put it lightly, part of the repertoire of synthetic organic chemistry.
2. I also had to learn a rough version of the Lipinski rules - basically, that physical properties matter, although the degree to which they matter can vary. You can't increase molecular weight or lipophilicity forever without paying for it. Small polar molecules are handled fundamentally differently than big greasy ones in vivo. This was part of learning that there are many, many different potential fates for small molecules when dosed into a living animal.
3. Another key realization, which took a while to sink in, was that biological assays had error bars, and that this was true whether or not error bars were presented on the page or the screen. Enzyme assays were a bit fuzzy compared to the numbers I was used to as a chemist, but cell assays were fuzzier. And whole-animal numbers covered an even wider range. I had to understand that this hierarchy was the general rule, and that there was not a lot to be done about it in most cases (except, importantly, to never forget that it was there).
4. As someone mentioned in the comments here the other day, alluding to an old post of mine, I had to learn that although I'd been hearing for years that time was money, grad school had been a poor preparation for learning how true that was. I was used to making everything that I could rather than buying it, but I had to reverse that thinking completely, since I was being paid to use my head more than my hands. (That didn't mean that I shouldn't use my hands, far from it - only that I should use my head first whenever feasible).
5. I also had to figure out how to use my time more efficiently. Another bad grad school habit was the working all hours of the day routine, which tended to make things stretch out. Back then, if I didn't get that reaction set up in the afternoon, well, I was coming back that evening, so I could do it then. But if I was going to keep more regular working hours, I had to plan things out better to make the best use of my time.
6. There were several big lessons to be learned about where chemistry fit into the whole drug discovery effort. One was that if I made dirty compounds, only dirty results could be expected from them. As mentioned above, even clean submissions gave alarmingly variable results sometimes; what could be expected from compounds with large and variable impurities from prep to prep? One of my jobs was not to make things harder than they already were.
7. A second big lesson, perhaps the biggest, was that chemistry was (and is) a means to an end in drug discovery. The end, of course, is a compound that's therapeutically useful enough that people are willing to pay money for it. Without one or more of those, you are sunk. It follows, first, that anything that does not bear on the problems of producing them has to be considered secondary - not unimportant, perhaps, but secondary to the biggest issue. Without enough compounds to sell, everything else that might look so pressing will, in fact, go away - as will you.
8. The next corollary is that while synthetic organic chemistry is a very useful way to produce such compounds, it is not necessarily the only way. Biologics are an immediate exception, of course, but there are more subtle ones. One of the trickier lessons a new medicinal chemist has to learn is that the enzymes and receptors, the cells and the rats, none of them are impressed by your chemical skills and your knowledge of the literature. They do not care if the latest compound was made by the most elegant application of the latest synthetic art, or by the nastiest low-yielding grunt reaction. What matters is how good that compound might be as a drug candidate, and the chemistry used to make it usually does (and should) get in line behind many more important considerations. "Quickly", "easily", and "reproducibly", in this business, roughly elbow aside the more academic chemical virtues of "complexly", "unusually", and "with difficulty".
+ TrackBacks (0) | Category: Academia (vs. Industry) | How To Get a Pharma Job | Pharma 101
June 6, 2012
I'm writing this from Logan Airport, on my way down to New Jersey, where later this afternoon I'll be giving a talk at the Drew University medicinal chemistry school. I took that course back in 1990, so it's rather odd to be coming back as the keynote speaker. Thinking about some of the people who were there with me, I can see that quite a few of them are no longer in the industry (although everyone that I know about on that list has still done fine for themselves and their families).
I suspect that many readers here may have been through the Drew course as well, especially if they started out with one of the big NJ companies back when. I'll try to give everyone an interesting talk!
Category: Blog Housekeeping
Slate has one of those assume-the-conclusions articles up on science and technology education in the US. It's right there in the title: "America Needs More Scientists and Engineers".
Now, I can generally agree that America (and the world) needs more science and engineering. I'd personally like enough to realize room-temperature superconductors, commercially feasible fixation of carbon dioxide as an industrial feedstock, and both economically viable fusion power and high-efficiency solar beamed down from orbit. For starters. We most definitely need better technology and more scientific understanding to realize these things, since none of them (as far as we know) are at all impossible, and we sure don't have any of them yet.
But to automatically assume that we need lots more scientists and engineers to do that is a tempting, but illogical, conclusion. And one that my currently-unemployed readers who are scientists and engineers don't enjoy hearing about very much, I'd have to assume. I think that the initial fallacies are (1) lumping together all science education into a common substance, and (2) assuming that if you put more of that into the hopper, more good stuff will come out the other end. If I had to pick one line from the article that I disagree with the most, it would be this one:
America needs Thomas Edisons and Craig Venters, but it really needs a lot more good scientists, more competent scientists, even more mediocre scientists.
No. I hate to be the one to say it, but mediocre scientists are, in fact, in long supply. Access to them is not a rate-limiting step. Not all the unemployed science and technology folks out there are mediocre - not by a long shot (I've seen the CVs that come in) - but a lot of the mediocre ones are finding themselves unemployed, and they're searching an awful long time for new positions when that happens. Who, exactly, would be clamoring to hire a fresh horde of I-guess-they'll-do science graduates? Is that what we really need to put things over the top, technologically - more foot soldiers?
But I agree with the first part of the quoted statement, although different names might have come to my mind. My emphasis would be on "How do we get the smartest and most motivated people to go into science again?". Or perhaps "How do we educate future discoverers to live up to their potential?" I want to make sure that we don't miss the next John von Neumann or Claude Shannon, or that they don't decide to go off to the hedge fund business instead. I want to be able to find the great people who come out of obscurity, the Barbara McClintocks and Francis Cricks, and give them the chance to do what they're capable of. When someone seems to be born for a particular field, like R. B. Woodward for organic chemistry, I want them to have every chance to find their calling.
But even below that household-name level, there's a larger group of very intelligent, very inventive people who are mostly only known to those in their field. I have a list in my head right now for chemistry; so do you. These people we cannot have enough of, either - these are people who might be only a chance encounter or sudden thought away from a line of research that would lead to an uncontested Nobel Prize or billion-dollar industrial breakthrough.
To be fair, Slate may well get around to some of these thoughts; they're going to be writing about science education all month. But I wish that they hadn't gotten off on this particular foot. You've got to guard yourself against myths in this area. Here come a few of them:
1. Companies, in most cases, are not moving R&D operations overseas because they just can't find anyone here to do the jobs. They're doing that because it's cheaper that way (or appears to be; the jury's probably still out in many instances).
2. We are not, as far as I can see, facing the constant and well-known "critical shortage of scientists and engineers". There have been headlines with that phrase in them for decades, and I wish people would think about that before writing another one. Some fields may have shortages, but that's a different story entirely.
3. And that brings up another point, as mentioned above: while the earlier stages of science and math education are a common pathway, things then branch out, and how. Saying that there are so-many-thousand "science PhDs" is a pretty useless statistic, because by that point, they're scattered into all sorts of fields. A semiconductor firm will not be hiring me, for example.
There are more of these myths; examples are welcome in the comments. I'll no doubt return to this topic as more articles are published on it - it really is an important one. That's why it deserves more than "America needs more mediocre scientists". Sheesh.
Category: General Scientific News | Who Discovers and Why
June 5, 2012
Via Pharmalot, it appears that a former WuXi employee helped himself to samples of two Merck Phase II clinical candidates that were under evaluation. The samples were then offered for sale.
Here's a link to a Google Translate version of a Chinese news report. It looks like gram quantities were involved, along with NMR spectra, with the compounds being provided to a middleman. It's not clear who bought them from him, but the article gives the impression that someone did, was satisfied with the transaction, and wanted more. But in the meantime, Merck did pick up on an offer made by this middleman to sell one of the compounds online, and immediately went after him, which unraveled the whole scheme. (The machine translation is pretty rocky, but I did appreciate that an idiom came through: it mentions that having these valuable samples in an unlocked cabinet was like putting fish in front of a cat).
I would think that this kind of thing is just the nightmare that WuXi's management fears - and if it isn't, it should be. The cost advantage to doing business with them (and other offshore contract houses) is still real, but not as large as it used to be. Stories like this can close that price gap pretty quickly.
Category: Business and Markets | Drug Development | The Dark Side
Via Curious Wavefunction comes the news that Rosie Redfield and her lab have their paper coming out in Science refuting the "arsenic bacteria" results. It should be out on the journal's web site shortly, but is available on the arXiv beforehand.
I've been following Redfield's blogged results over the last few months, on and off, so the conclusions of this manuscript will not come as a surprise. She has been unable - completely unable - to substantiate the original claims of arsenate-driven growth and incorporation into biomolecules. Given the extraordinary nature of the original paper, the ball is now back in Wolfe-Simon et al.'s court. The default setting is that claims like those probably aren't real, and they need to be able to stand up to solid scrutiny.
I'll be very interested to see how this plays out. The authors of the original paper have been quite firm about only responding to criticism that's appeared in the official scientific literature, and have made remarks about how they're not going to deal with "website experiments" until they're published. Well, published they are, and in the same big journal as the original paper. What now?
I think their only hope is to advance specific, testable reasons why Redfield's results are incorrect. If it gets down to "Well, we get these results, and we don't see why you don't", and never advances from there, then the amazing results are almost certainly wrong. The world as we know it wins the tiebreakers in science.
Category: Life As We (Don't) Know It
June 4, 2012
A recent article in Science illustrates a number of points about drug development and scale-up. It's about artemisinin, the antimalarial. Peter Seeberger, a German professor of chemistry (Max Planck-Potsdam), has worked out what looks like a good set of conditions for a key synthetic step (dihydroartemisinic acid to artemisinin), and would like to see these used on large scale to bring the cost of the drug down.
That sounds like a reasonably simple story, but it isn't. Here are a few of the complications:
But Seeberger's method has yet to prove its mettle. It needs to be scaled up, and he can't say how much prices would come down if it worked. Using it in a large facility would require a massive investment, and so far, nobody has stepped up to the plate. What's more, pharma giant Sanofi will open a brand-new facility later this year to make artemisinin therapies based on Amyris's technology: yeast cells that produce a precursor of the drug. Although Seeberger says his discovery would complement that process, Sanofi says it's too late now to adopt it.
The usual route has been to extract artemisinin from its source, Artemisia annua. That's been quite a boom-and-bust cycle over the years, and the price has never really been steady (or particularly low, either). Amyris worked for some years to engineer yeast to produce artemisinic acid, which can then be extracted and converted into the final drug, and this is what's now being scaled up with Sanofi-Aventis.
That process also uses a photochemical oxidation, but in batch mode. I'm a big fan of flow chemistry, and I've done some flow photochemistry myself, and I can agree that when it's optimized, it can be a great improvement over such batch conditions. Seeberger's method looks promising, but Sanofi isn't ready to retool to use it when they have their current conditions worked out. Things seem to be at an impasse:
But what will happen with Seeberger's discovery is still unclear. Sanofi's plant is about to open, and the company isn't going to bet on an entirely new technique that has yet to prove that it can be scaled up. In an e-mail to Science, the company calls Seeberger's solution “a clever approach,” but says that “so far the competitivity of this technique has not been demonstrated.”
The ideal solution would be if other companies adopt the combination of Amyris's yeast cells and Seeberger's method, [Michigan supply-chain expert] Yadav says; “then, the price for the drugs could go down significantly.” But a spokesperson for OneWorld Health, the nonprofit pharmaceutical company that has backed Sanofi's project, says there are no plans to make the yeast cells available to any other party.
Seeberger himself is trying to make something happen:
On 19 April, Seeberger invited interested parties to a meeting in Berlin to explore the options. They included representatives of Artemisia growers and extractors, pharmaceutical companies GlaxoSmithKline and Boehringer Ingelheim, as well as the Clinton Foundation, UNITAID, and the German Agency for International Cooperation. (The Bill and Melinda Gates Foundation canceled at the last minute.) None of the funders wanted to discuss the meeting with Science. Seeberger says he was asked many critical questions—“But then the next day, my phone did not stop ringing.” He is now in discussions with several interested parties, he says.
As I say, I like his chemistry. But I can sympathize with the Sanofi people as well. Retooling a working production route is not something you undertake lightly, and the Seeberger chemistry will doubtless need some engineering along the way to reach its potential. The best solution seems to me to be basically what's happening: Sanofi cranks out the drug using its current process, which should help a great deal with the supply in the short term. Meanwhile, Seeberger tries to get his process ready for the big time, with the help of an industrial partner. I wish him luck, and I hope things don't stall out along the way. More on all this as it develops over the next few months.
Category: Drug Development
Over at Xconomy, Luke Timmerman asks why any biopharma company would go to the trouble and expense of changing its name. There are several reasons (such as having chosen a lousy name to begin with), but he's right that most company names don't mean much before or after a change.
He also has a poll of some name changes, asking if they were upgrades or not. The first on his list is my nomination for the Worst Company Name: AbbVie, which is what Abbott decided to call its pharma business as it spins that out on its own. I just can't say enough bad things about that one - it's meaningless, for starters, and that double "b" looks like a misprint. The b/v consonant combination doesn't exactly roll off the tongue; the "Vie" is silly for a company not based in France (or at least selling something that's supposed to be French), and I've never been a fan of InterCapitalization. Other than that, I guess it's fine.
So here's a quick question: what's the biotech/pharma company out there with the worst name - well, other than AbbVie? Is there anyone who can beat them? Boring doesn't count. We're looking for actually harmful. Nominees?
Category: Business and Markets | Drug Industry History
June 1, 2012
Well, that's because things can't go on the way that they have been, as this Reuters piece makes clear:
For several years, AstraZeneca kept investors happy with a strategy of hefty dividend payments and share buybacks, but more recently key shareholders have grown restive about its failure to develop promising new medicines.
The group has suffered repeated drug development setbacks, stoking fears about its long-term prospects given a complete reliance on prescription medicines at a time when rivals have diversified.
I'm not so sure about how well their rivals have done with that diversification, if it even exists in many cases. AstraZeneca has discovered too few new drugs and spent far too much money doing so, and no amount of share buyback programs can help that. There are apparently two contradictory sets of recommendations. One camp says that AZ should aggressively buy someone, perhaps several companies, in an effort to make up for its own failures. But the other camp says that the company should shrink down to something viable, and thus make itself an attractive target for someone else to buy.
Both of those are, needless to say, business recommendations from Wall Street analysts. As such, they're answering the question "If I owned a lot of AstraZeneca stock, what would I want the company to do to keep me from losing even more money?" (Never mind that the answer, in some of these cases, is "Don't own that stock; cut and run"). Another question, asked from a different perspective, is "What use should be made of AstraZeneca's vast drug discovery and development resources? How can we make something different happen than what's been happening for the last ten years?" In other words, "How can this company discover more drugs?"
Those two viewpoints intersect if you believe that discovering more drugs would lead to a more profitable company. And that would follow, except for the nasty lead time of ten or twelve years. If someone at AZ waved a magic wand this afternoon and caused a host of future clinical successes to be made in the labs, and also magically caused their future development to go as smoothly as possible, then the company's bottom line would show the effects in. . .what, 2022 or so? Given that, the shorter-term Wall Street solutions will have a free hand. There isn't time for science to rescue this situation.
Category: Business and Markets
I do hate to bring up rhodanines again, but I'm not the one who keeps making the things. This paper from ACS Medicinal Chemistry Letters turns out dozens of the things as potential inhibitors of the cellular protein dynamin, in what a colleague of mine referred to as a "nice exploration of the rhodanome".
He did not say it with a straight face. But this paper does: "The rhodanine core is a privileged scaffold in medicinal chemistry and one that has found promise among many therapeutic applications." Well, that's one way to look at it. Another viewpoint is that rhodanines are "polluting the scientific literature" and that they should "be considered very critically" no matter what activity they show in your assay.
The usual answer to this is that these aren't drugs, they're tool compounds. But I don't think that these structures even make safe tools; they have the potential to do too many other things in cell assays. But if people are going to go ahead and use them, I wish that they'd at least make a nod in that direction, instead of mentioning, in passing, how great the whole class is. And yes, I know that they cite two papers to that effect, but one of those two mainly just references the other one when it comes to rhodanines. My viewpoint is more like this paper's:
Academic drug discovery is being accompanied by a plethora of publications that report screening hits as good starting points for drug discovery or as useful tool compounds, whereas in many cases this is not so. These compounds may be protein-reactive but can also interfere in bioassays via a number of other means, and it can be very hard to prove early on that they represent false starts. . .
And I endorse this view as well:
. . .Barriers to adoption of best practices for some academic drug-discovery researchers include knowledge gaps and infrastructure deficiencies, but they also arise from fundamental differences in how academic research is structured and how success is measured. Academic drug discovery should not seek to become identical to commercial pharmaceutical research, but we can do a better job of assessing and communicating the true potential of the drug leads we publish, thereby reducing the wastage of resources on nonviable compounds.
Category: Academia (vs. Industry) | Drug Assays