Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before doing post-doctoral work in Germany on a Humboldt Fellowship. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases.
To contact Derek, email him directly: firstname.lastname@example.org
You'll have seen the news about the FDA safety warning on statins. The agency is warning that instances of hyperglycemia, as well as memory loss and confusion, have occurred with statin use.
I'm really not sure what to make of this. On the one hand, these drugs have been through many, many large clinical trials under controlled conditions, and they've been taken by a huge number of patients out in the real world. So you might think that if these effects were robust, they would have been noticed before now. But there are side effects that are below the threshold of even the largest clinical trials, and a patient population the size of the one taking these drugs is just where you might be able to see such things.
I lean towards the latter possibility, and if that's true, then the agency's statement is appropriate. If these could be real effects in some patients, then it's worth keeping an eye out for them. One problem, though, is that hyperglycemia is the sturdier finding: you can measure it, and people don't really feel it when they have it. Memory loss and confusion are fuzzier, but they're immediately felt, so they're more subject to post hoc ergo propter hoc judgments. It's possible that enough people will stop taking statins because of that part of the warning to cancel out whatever public health good it might otherwise do.
The title of this one says it all: "Association of industry funding with the outcome and quality of randomized controlled trials of drug therapy for rheumatoid arthritis". Any number of critics of the drug business will tell you what that association is: we publish the good stuff and bury the bad news, right?
Well, not so much in arthritis, apparently. The authors identified 103 recent clinical trials in the area, over half of them industry-funded. But when it came to outcomes, things were pretty much the same. Trials from the three largest classes of funding (industry, nonprofit, and "unspecified") all tended to strongly favor the tested drug, although the small number (six) of mixed-funding trials ended up with two favoring and four against. The industry-run trials tended to have more subjects, while the nonprofit ones tended to run longer. The industrial trials also tended to have a more complete description of their intent-to-treat analyses and patient flow. As you'd figure, the industrial trials tended to be on newer agents, while the others tended to investigate different combinations or treatment regimens with older ones. But the take-home is this:
No association between funding source and the study outcome was found after adjustment for the type of study drug used, number of study center, study phase, number of study subject, or journal impact factor. . .
. . .Though preponderance of data in medical literature shows that industry funding leads to higher chances of pro-industry results and conclusions, we did not find any association between the funding source and the study outcome of "published" (randomized clinical trials) of RA drug therapies.
The one worrying thing they did find was a trend towards publication bias - the industry-sponsored studies showed up less often in the literature. The authors speculate as to whether these were trials with less favorable outcomes, but didn't have enough data to say one way or another. . .
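The paper's actual analysis was a regression adjusted for the covariates quoted above, but the basic funding-versus-outcome question can be illustrated with a much cruder tool. Here's a toy sketch in Python: a plain chi-square test on a 2x2 table of funding source against trial outcome, with invented counts (the function and every number here are hypothetical, not taken from the study):

```python
# Illustrative only: a 2x2 chi-square test of funding source vs. trial
# outcome. The counts below are made up; the paper itself adjusted for
# drug type, study phase, sample size, and journal impact factor.
def chi_square_2x2(table):
    """table = [[a, b], [c, d]]; returns the chi-square statistic."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in [(a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)]:
        expected = row * col / n
        chi2 += (obs - expected) ** 2 / expected
    return chi2

# Hypothetical counts: trials favoring the drug vs. not, by funding source
industry  = [45, 12]
nonprofit = [30, 9]
stat = chi_square_2x2([industry, nonprofit])
print(round(stat, 3))  # a statistic near zero means no detectable association
```

A statistic this small against one degree of freedom is nowhere near significance, which is the shape of the "no association" result the authors report, albeit arrived at with far more care.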
The sponsors of the Research Works Act (Representatives Carolyn Maloney, D-N.Y., and Darrell Issa, R-Calif.) have announced that they will not be bringing it forward. Elsevier's backtrack was indeed a sign of things to come.
Update: for the non-chemists in the audience who are wondering why one doesn't just stroll into this kind of chemistry, check out what happens when you deal with the nastier end of fluorine chemistry. This new chemistry isn't anything like those examples - thank goodness - but it'll give you some idea of why we respect and fear the fluorine.
C&E News has an article on some of the recent fluorination methods that have been appearing in the literature. (Some of these have come up on this site here, here, and here).
These methods are all quite interesting (I've tried some of them out myself, with success), but what I also found interesting was the sociological angle that the article brought in. Organofluorine chemistry has not, over the years, been the sort of thing that one takes up lightly, for a lot of good reasons. Some of the real advances in the field have come from making it more accessible to more chemists. Very few people will use elemental fluorine other than at near-gunpoint, and some of the other classic reagents are still quite unfriendly, tending to leave chemists swearing never to touch them again.
But making the field more open makes it, well, more open. And some of the people who've been there a while aren't quite sure what to make of the newcomers. They don't always cite the literature in appropriate depth, which is a real concern, and there can be a general feeling that they haven't paid their fluorine dues. (But the whole point is to keep people from having to pay those dues in the first place).
Since I'm not having to make my reputation discovering fluorination conditions, though, I'm just happy to deal with the results of all this work, both from the hardy pioneers as well as from the flashy new immigrants. These are useful reactions, and the rest of us are glad to have 'em.
I last wrote about the Molecular Libraries program here, as it was threatened with funding cuts. Now there's a good roundup of opinion on it here, at the SLAS. The author has looked over the thoughts of the readership here, and also heard from several other relevant figures. Chris Lipinski echoes what several commenters here had to say:
Lipinski notes that when the screening library collection began the NIH had little medicinal chemistry experience. "I was a member of an early teleconference to discuss what types of compounds should be acquired by the NIH for high-throughput screening (HTS) to discover chemical biology tools and probes. Our teleconference group was about evenly split between industry people and academics. The academics talked about innovation, thinking out of the box, maximum chemical diversity and not being limited by preconceived rules and filters. The industry people talked about pragmatism, the lessons learned and about worthless compounds that could appear active in HTS screens. The NIH was faced with two irreconcilable viewpoints. They had to pick one and they chose the academic viewpoint."
He says that they later moved away from this, with more success, but implies that quite a bit of time was lost before this happened. Now, we waste plenty of time and money in the drug industry, so I have no standing to get upset with the NIH about blind alleys, in principle. But having them waste time and money specifically on something that the drug industry could have warned them off of is another thing.
In the end, opinions divide (pretty much as you'd guess) on the worth of the whole initiative. As that link shows, its director believes it to have been a great success, while others give it more mixed reviews. Its worth has surely grown with time, though, as some earlier mistakes were corrected, and that's what seems to be worrying people: that the plug is getting pulled just when things were becoming more useful. It seems certain that several of the screening centers will not survive in the current funding environment. And what happens to their compounds then?
Courtesy of C&E News, here's an interesting look inside the Chinese labs of HEC Pharm, a company making APIs and generics. The facilities look good. I have to say, that's an awful lot of HPLC capacity, starting at 0:41.
The idea of company housing, though, is a bit harder to get used to. . .
It may well be. This morning comes news that Elsevier has dropped support for the RWA, which makes one think that they're feeling the pressure:
We have heard expressions of support from publishers and scholarly societies for the principle behind the legislation. However, we have also heard from some Elsevier journal authors, editors and reviewers who were concerned that the Act seemed inconsistent with Elsevier’s long-standing support for expanding options for free and low-cost public access to scholarly literature. That was certainly not our intention in supporting it. . .
While we continue to oppose government mandates in this area, Elsevier is withdrawing support for the Research Work Act itself. We hope this will address some of the concerns expressed and help create a less heated and more productive climate for our ongoing discussions with research funders. . .
You can smell the smoke from the brake pads, and hear the reverse gears engaging. Maybe now the American Chemical Society will make a public statement - I haven't heard anything from them yet, although their default position (as a member of the AAP) is to support it. (They supported a previous version of the bill in 2008).
Whoever's behind the Journal of Apocryphal Chemistry is trying to do everyone a good deed before we get into allergy season. After detailing the ever-more-stringent controls on the sale of pseudoephedrine, they propose a synthetic route based on a more readily available starting material: methamphetamine.
A quick search of several neighborhoods of the United States revealed that while pseudoephedrine is difficult to obtain, N-methylamphetamine can be procured at almost any time on short notice and in quantities sufficient for synthesis of useful amounts of the desired material. Moreover, according to government statistics, N-methylamphetamine is becoming an increasingly attractive starting material for pseudoephedrine, as the availability of N-methylamphetamine has remained high while prices have dropped and purity has increased. We present here a convenient series of transformations using reagents which can be found in most well stocked organic chemistry laboratories. . .
Their route, based on a 1985 paper in J. Chem. Soc. Chem. Comm., is not exactly trailer-park chemistry, though. (I note that they have the reference a bit wrong as well; there was no plain J. Chem. Soc. in 1985). It involves a chromium carbonyl complex of the aryl ring, formation of a chiral lithium dianion, and oxidation of that with MoOPH, which would give you pseudoephedrine after decomplexation. There's no way to tell if these reactions have actually been run, of course. Based on the literature precedent, it might work, although I'd be worried about maintaining the chirality of the dianion. (For what it's worth, the authors are also aware of this problem, and claim that the selectivity was unaffected).
Their larger point stands. I look forward to seeing more from this paper's authors, O. Hai and I. B. Hakkenshit. I see less interesting stuff in my RSS feed every day of the week.
Alex Tabarrok has an interesting post on the idea of patent protection for "independent invention". This would be for cases when two people or organizations independently arrive at the same thing:
In the minds of the public someone who infringes a patent is like a plagiarist or a thief–the infringer has copied someone else’s work or, even worse, stolen their intellectual property. In reality, patent infringement has very little to do with copying or theft. Here’s how I described what is probably closer to the paradigmatic case of patent infringement in "Launching the Innovation Renaissance":
'Two inventors, Kelly and Pat, work independently, neither aware of the other’s existence. Kelly patents first. Under the present law, if Pat wants to sell or even use his own invention, he must pay Kelly a license fee (!) even though Pat’s idea came from his own head and no other.'
If independent invention were uncommon this type of case wouldn’t be important but independent invention is very common. Classic cases include Newton and Leibniz with the calculus, Alexander Graham Bell, Elisha Gray and Johann Philipp Reis with the telephone, Ohain, Campini, and Whittle with the jet engine and so on. And if independent invention is common with great discoveries and inventions then it is surely much more common with ordinary innovations. As a result, it’s not surprising that most patent cases don’t even allege copying.
He proposes that "independent invention" be an available defense for claims of infringement. I agree in principle, but I worry that it would turn into just another way for people with the legal resources to tie up the system until their opposition gives in.
How would such a system affect drug discovery? Since we tend to spend a lot of time making sure that our molecules really are legally differentiable from the competition, I think that this would be less of an issue for us. But it's certainly true that some cases would arise. I personally have worked on a series of compounds (some years ago) that turned out to be the exact same series that a competitor was working on. The patent applications were filed within a couple of weeks of each other, and there were many compounds that overlapped. There are some areas where an independent invention defense could come in very handy (or be a major pain, depending on your relationship to the sharp end).
Stuart Cantrill has a post on one of those vast dendrimer structures - you know, those mandala-like things that weigh as much as a beer truck. He says that if you can draw the structure on his page in ChemDraw (or the like) in under three hours, you are clearly a wonder-worker.
He's asking on his Twitter feed for examples of the worst chemical structure anyone's had to draw, so I thought I'd throw the same question out to the crowd. You'd have to have led an evil past life to be able to beat his dendrimer, though.
When I mentioned former FDA commissioner Andy von Eschenbach the other day, I alluded to some other things about his approach that have bothered me. I thought I should follow up on that, because he's definitely not the only one. You may or may not remember this business from 2003, when von Eschenbach wanted to set a goal for the National Cancer Institute to "eliminate death and suffering" from cancer by 2015. Here's what Science had to say at the time:
The nation's cancer chief, National Cancer Institute (NCI) director Andrew von Eschenbach, has announced a startling new goal in the battle against cancer. His institute intends to “eliminate death and suffering” from the disease by 2015. The cancer research community is abuzz over the announcement. Some say that however well intended, the goal is clearly impossible to reach and will undermine the director's credibility.
Von Eschenbach, who has headed the $4.6 billion NCI for a year, announced the 2015 target on 11 February to his National Cancer Advisory Board. He told board members that he did “not say that we could eliminate cancer.” Rather, he continued, his goal is to “eliminate suffering and death due to this disease.” NCI is working on a strategy to do that by discovering “all the relevant mechanisms” of cancer, developing interventions, and getting treatments to patients.
We have three years to go on that deadline, and it's safe to say that we're not going to make it. And that's not because we failed to follow von Eschenbach's plan, because saying that you're going to figure out everything is not a plan.
Now, I'm actually kind of an optimistic person, or so I'm told. But I'm not optimistic enough to think that we can eliminate deaths from cancer any time soon, because, well, I've worked on drugs that have attempted to do just that. As has been detailed several times here (and many times elsewhere), cancer isn't one disease. It's a constellation of thousands of diseases, all of which end up showing uncontrolled cell growth. Calling cancer a disease is like calling headache a disease.
But I'm operating on a different time scale from von Eschenbach. Here he is in 2006, in The Lancet:
“Think of it”, von Eschenbach says, “for thousands of years we have dealt with cancer working only with what we could see with our eyes and feel with our fingers, then for a 100 years we've dealt with cancer with what we could see under a microscope. Now, we have gone in 10 years to a completely different level.” This new science “is going to change how we think, it's going to change how we approach things; it's going to change everything.”
. . .He points to the example of testicular cancer. The development of treatments for this cancer was a great success, von Eschenbach says, but one that “took decades of trial and error, one trial after another, after another, after another”. That hit-and-miss approach is no longer necessary, von Eschenbach says. Now, if 10% of patients responded to a treatment, he says, “you take the tools of genomics and go back, reverse engineer it, and ask: what was different about that 10%? Well, they had an EGF [epidermal growth factor] receptor mutation, ah ha!”
Ah ha, indeed. Here's more in a similar vein. The thing is, I don't disagree with this in principle. I disagree on the scale. No one, I think, knows how to eliminate deaths from cancer other than the way we're doing it now: detailed investigation of all sorts of cancers, all sorts of cellular pathways, and all sorts of therapies directed at them. Which is all a lot of work, and takes a lot of time (and a lot of money, too, of course). It also leads to a huge array of dead ends, disappointments, and a seemingly endless supply of "Hmm, that was more complicated than we thought" moments. I don't see that changing any time soon. I'm optimistic enough to think that there is a bottom to this ocean, that it's of finite size and everything in it is, in principle, comprehensible. But it's big. It's really, really big.
There are people who defend goal statements like von Eschenbach's. Such things force us to aim high, they say; they focus attention on the problem and give us a sense of urgency. Taken too far, though, this point of view leads to the fallacy that what's important is to care a lot - or perhaps to be seen to care a lot. But the physical world doesn't care if we care. It yields up its secrets to those who are smart and persistent, not to the people with the best slogans.
I mentioned the fearsome memoirs of Max Gergel here, but not many people know that he wrote another volume. "The Ageless Gergel", out of print for who knows how long, is available here in PDF form. I have to note that it's even more rambling and formless than "Excuse Me Sir, Would You Like to Buy a Kilo of Isopropyl Bromide?", and one also gets the impression that he used up a lot of his show-stopping anecdotes in the first book as well. I should also mention that the entire last section of the book is an account of a European vacation, during which no chemistry intrudes, and that the whole thing ends as if Gergel suddenly looked at his watch.
But there are some interesting chemical stories buried in there, and it's worth skipping through some parts to find them. This one's pretty typical:
I had been visiting Will at the plant in Elgin, South Carolina, and noticed that he smelled goaty. For that matter, the other workers seemed to have a goaty odor, too. I inquired the reason, and he took me to the source, an isolated section of the plant, which smelled horrendous. A large glass still, one that would have delighted a moonshiner in the old whiskey-making days, was stinking up Hardwicke Chemical Co. and the surrounding farms. Now fatty acids have a rank odor smelling like rancid butter. The absolute worst member of the series is isovaleric acid. This smells like rancid butter with a soupçon of goat and old sneakers thrown in for good measure. As bad as it smells, the acid chloride derived from it is worse. It is so volatile that it will chase a visitor and leave its far from subtle mark. The odor is soap, water and Lysol resistant. This acid chloride reacts with mucous membrane so that while you are rendered ill by the obnoxious odor, the acid chloride is hydrolyzing with your perspiration as a reactant and eats away your lips, eyeballs and tongue. Hardwicke, committed to make this monster, was only too happy to find Columbia Organic Chemicals Co., Inc., as a "farmout" and once more we were making something no one else wanted to make.
We had never had such a dreadful assignment. Anyone working with this "superstink" is branded and given a wide berth. No matter how amorous his spouse may be, passion crumples despite baths, Chlorox and Dentine. For a while we made isovaleroyl chloride at Cedar Terrace. It created pandemonium among residents who first sniffed each other, came to the plant to sniff us, and then sniffled to their lawyers.
Unfortunately, I can't quite put that acid chloride on my list of things I won't work with, because I have worked with it. But I can imagine that making it by the barrel would be a pretty repellent business, for sure. A 25-gram bottle was enough for me.
I wrote here about a very unusual dinitro compound that's in the clinic in oncology. Now there's a synthetic chemistry follow-up, in the form of a paper in Organic Process R&D.
It's safe to say that most process and scale-up chemists are never going to have to worry about making a gem-dinitroazetidine - or, for that matter, a gem-dinitroanything. But the issues involved are the same ones that come up over and over again. See if this rings any bells:
Gram quantities of (3) for initial anticancer screening were originally prepared by an unoptimized approach that was not suitable for scale-up and failed to address specific hazards of the reaction intermediates and coproducts. The success of (3) in preclinical studies prompted the need for a safe, reliable, and scalable synthesis to provide larger supplies of the active pharmaceutical ingredient (API) for further investigation and eventual clinical trials.
Yep, it's when you need large, reliable batches of something that the inadequacies of your chemistry really stand out. The kind of chemistry that people like me do, back in the discovery labs, often has to be junked. It's fine for making 100 mg of something to put in the archives - and tell me, when was the last time you put as much as 100 milligrams of a new compound into the archives? But there are usually plenty of weak points as you try to go to grams, then hundreds of grams, then kilos and up. Among them are:
(1) Exothermic chemistry. Excess heat is easy to shed from a 25-mL round-bottom flask. Heat is not so easily lost from larger vessels, though, and the number of chemists who have had to discover this the hard way is beyond counting. The world is very different when everything in the flask is no longer just 1 cm away from a cold glass wall.
(2) Stirring. This can be a pain even on the small scale, so imagine what a headache it is by the kilo. Gooey precipitates, thick milkshake-like reactions, lumps of crud - what's inconvenient when small can turn into a disaster later on, because poor stirring leads to localized heating (see above), incomplete reactions, side products, and more.
(3) Purification. Just run it down a column? Not so fast, chief. Where, exactly, do you find the columns to run kilos of material across? And the pumps to force the stuff through? And the wherewithal to dispose of all that solid-phase stuff once you've turned it all those colors and it can't be used again? And the time and money to evaporate all that solvent that you're using? No, the scale-up people will go a long way to avoid chromatography. Precipitations and crystallizations are the way to go, if at all possible.
(4) Reproducibility. All of these factors influence this one. One of the most important things about a good chemical process is that it works the same flippin' way every single time. As has been said before around here, a route that generates a 97% yield most of the time, but with an occasional mysterious 20% flop, is useless. Worse than useless. Squeezing the mystery out of the synthesis is the whole point of process chemistry: you want to know what the side products are, why they form, and how to control every variable.
(5) Cost. Cost of goods is rarely a deal-breaker in drug research, but that's partly because people are paying attention to it. In the med-chem labs, we think nothing of using exotic reagents that the single commercial supplier marks up to the sky. That will not fly on scale. Cutting out three steps with a reagent that isn't obtainable in quantity doesn't help the scale-up people one bit. (The good news is that some of these things turn out to be available when someone really wants them - the free market in action).
There are other factors, but those are some of the main ones. It's a different world, and it involves thinking about things that a discovery chemist just never thinks about. (Does your product tend to create a fine dust on handling? The sort that might fill a room and explode with static electricity sparks? Can your reaction mixture be pumped through a pipe as a slurry, or not? And so on.) It looks as if the dinitro compound has made it through this gauntlet successfully, but every day, there's someone at some drug company worrying about the next candidate.
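The heat-transfer problem in point (1) is, at bottom, simple geometry: for similarly shaped vessels, surface area grows as the square of the linear dimension while volume (and the heat an exotherm generates) grows as the cube. A quick back-of-the-envelope sketch, idealizing both vessels as spheres (the function name and the specific sizes here are just illustrative):

```python
# Why exotherms bite on scale-up: for geometrically similar vessels,
# surface area grows as r^2 but volume grows as r^3, so the cooling
# area available per unit of reaction volume shrinks as the batch grows.
import math

def sphere_sa_to_vol(volume_L):
    """Surface-area-to-volume ratio (cm^2 per cm^3) of a spherical vessel."""
    v_cm3 = volume_L * 1000.0
    r = (3.0 * v_cm3 / (4.0 * math.pi)) ** (1.0 / 3.0)
    return 3.0 / r  # SA/V of a sphere is 3/r

flask   = sphere_sa_to_vol(0.025)   # a 25 mL round-bottom flask
reactor = sphere_sa_to_vol(100.0)   # a 100 L reactor
print(f"25 mL flask:   {flask:.2f} cm^2 per cm^3 of reaction")
print(f"100 L reactor: {reactor:.3f} cm^2 per cm^3 of reaction")
print(f"roughly {flask / reactor:.0f}x less wall area per unit volume on scale")
```

Going from a 25 mL flask to a 100 L reactor cuts the wall area per unit volume by a factor of (4000)^(1/3), about sixteen-fold, which is why heat that vanished harmlessly through the glass on the bench has to be engineered away in the plant.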
Here's a huge review that goes over most everything you may have wanted to know about what's called "rational drug design". The authors are especially addressing selectivity, but that's a broad enough topic to cover all the important features. (If you can't access the paper, here's a key graphic from it).
"Rational", it should be understood, generally tends to mean "computationally modeled" in the world of drug discovery. And that's certainly how this review is pitched. I'm of two minds - at least - about the whole area (a personal bias that has made for some lively discussions over the years). Some of those discussions have taken place between my own ears as well, because I'm still not sure that all my opinions about computational drug design are self-consistent.
On the one hand, drug binding is a physical process governed by physical laws. Computing the change in free energy during such a process should be feasible. But it turns out to be rather difficult - proteins flex and bonds rotate, water molecules assist and interfere, electrostatic charges help and hinder, hydrogen bonds are vital (and hard to model), and a dozen other sorts of interactions between clouds of electrons weigh in as well. Never forget, too, that free energy changes have an entropy component, and that's not trivial to model, either. I keep wondering if the error bars of the various assumptions and approximations don't end up swamping the small changes that we're interested in predicting.
But, on that other hand, there are certainly cases where modeling has helped out a great deal. A cynic would say that we've been sure to hear about those, while the cases where it had no impact at all (or did actual harm) don't make the journals very often. It can't be denied, though, that modeling really has been (at times) the tool for the job. It would be interesting to know if the frequency of that happening has been increasing over time, as our tools get better.
Because on the third hand, it's been a poor bet to go against the relentless computational tide over the last few decades. You'd have to think that sheer computing power will end up making molecular modeling ever more capable and useful, as we learn more about what we're doing. Mind you, there were people back in the mid-1980s who thought we'd already reached that point. I'm not saying that they were the best-informed people at that time, but they certainly did exist. I wonder sometimes what it would have been like, to show people in 1985 what the state of rational drug design would be like in 2012. Would they be excited, or vaguely disappointed?
And then there's that word "rational". I think that its adoption might have been the best advertising that the field's ever achieved, because it makes everything else seem irrational (or at least arational) by default. I mean, do you just wanna make compounds, or do you want to think about what you're doing? I also wonder what might have changed if that phrase had never been adopted - perhaps expectations wouldn't have gotten out of hand in the computational field's early days, but it might not have received the attention (and money) that it did, either. . .
Via Sally Church on Twitter (and a post by Bethany Halford at C&E News), I bring you the definitive what-are-we-going-to-do-with-all-this-sodium video. The end of World War II brought all kinds of material disposal problems - you may have seen footage of virtually new airplanes being dumped into the sea and the like. Some of those disposal problems are still with us, like the unexploded ordnance that keeps turning up. But these barrels of sodium, no one ever had to worry about them again. . .
Here's the streaming video of the session I did at SLAS2012 on collaboration between academia and industry. I'm not sure how long it'll be up, so if you want to see it, you probably should go ahead and check it out. A lot of people probably wish they could fast-forward (and pause) me during regular working hours!
There's been a big drug development story over the last few months that I've been unable to comment on due to conflicts of interest. That situation continues, but I can point to the latest developments, for those who haven't been following the twists and turns.
Well, since I was just talking about a reagent that can potentially take off without warning, I wanted to solicit vivid experiences from the crowd. What's a compound that you've made that did something violently unexpected? I can recall making some para-methoxybenzyl chloride in grad school (for a protecting group; I was running out of orthogonal protecting groups by that time). It's not hard - take the benzyl alcohol and some conc. HCl and swoosh 'em around. But the product you get by that method isn't the cleanest thing in the world, and on storage, well. . .a vial of it blew out in my hood after the acid had had a chance to work on it.
My most vivid reagent-gone-bad story is probably this one; that's a time I literally came down counting fingers. What other things have you had turn on you?
I don't know how many of you out there like to form azides, but if you do, you've probably used (or thought about using) imidazole-1-sulfonyl azide hydrochloride. This reagent appeared in Organic Letters a few years ago as a safe-to-handle shelf-stable azide transfer reagent, and seems to have found popularity. (I've used it myself).
So it was with some alarm that I noted this new paper on the stability and handling characteristics of the reagent. It's a collaboration between the University of Western Australia (where the reagent was developed, partly by the guy whose lab bench I took over in grad school back in 1983, Bob Stick), the University of British Columbia, and the Klapötke group at Munich. That last bunch is known to readers of "Things I Won't Work With", as experts in energetic materials, and when I saw that name I knew I'd better read the paper pronto.
As it turns out, the hydrochloride isn't quite as well-behaved as thought. It's impact-sensitive, for one thing, and not shelf-stable. The new paper mentions that it decomposes with an odor of hydrazoic acid on storage - you don't want odors of hydrazoic acid, believe me - and I thought while reading that, "Hmm. My bottle of the stuff is white crystalline powder; that's strange." But then I realized that I hadn't looked at my bottle for a few months. And as if by magic, there it was, turning dark and gooey. I had the irrational thought that the act of reading this paper had suddenly turned my reagent into hazardous waste, but no, it's been doing that slowly on its own.
So if you have some of this reagent around, take care. The latest work suggests that the hydrogensulfate salt, and especially the fluoroborate, are less sensitive and more stable alternatives to the hydrochloride, and I guess I'll have to make some at some point. (They also made the perchlorate - just for the sake of science, y'know - and report, to no one's surprise, that it "should not be prepared by those without expertise in handling energetic materials"). But it needs no ghost come from the grave to tell us this.
So, back to my lab and my waste-disposal problem! And here's a note on the literature. We have the original prep of the reagent, a follow-up note on stability problems, and this latest paper on alternatives. But when you go back to the original paper, there is no mention of the later hazard information. Shouldn't there be a note, a link, or something? Why isn't there? Anyone at Organic Letters or the ACS care to comment on that?
Update: I've successfully opened my bottle, with tongs and behind a blast shield, just to be on the safe side, and defanged the stuff by dilution.
Here's a YouTube look at a periodic table, laid out with high-quality samples of the real elements. I want one, although I'm willing to compromise on some of the radioactive items; completeness can be taken a bit too far.
Ex-Intel chief Andy Grove's idea to reform clinical trials didn't get much of a reception around here, although (in the end) I was more receptive to the idea than many people were (the comments to the posts here followed similar lines).
So it's quite interesting to see former FDA commissioner Andrew von Eschenbach making what sounds like a very similar pitch in the Wall Street Journal. It's near the end of an op-ed about reforming the FDA, and it goes like this:
Breakthrough technologies deserve a breakthrough in the way the FDA evaluates them. Take regenerative medicine. If a company can grow cells that repair the retina in a lab, patients shouldn't have to wait years while the FDA asks the company to complete laborious clinical trials proving efficacy. Instead, after proof of concept and safety testing, the product could be approved for marketing with every eligible patient entered in a registry so the company and the FDA can establish efficacy through post-market studies.
There are several ways to look at that idea. One is to translate it into less editorial language and propose that "Patients (and their insurance companies) should be able to pay to try therapies before they're proven to have worked, as long as that proof is forthcoming". That's not prima facie a crazy idea, but it's subject to the same sorts of objections as Grove's earlier proposal. The post-marketing data will likely be of lower quality than a properly run clinical trial, and it will be harder to use it to establish efficacy. On the other hand, useful therapies would get into the hands of patients faster than happens now, and the expense of drug development would (presumably) go down. But useless therapies would also get into the hands of patients faster than happens now, too, and that's something that we're not currently equipped to deal with.
Any such scheme is going to have to deal with the legal aspect. People don't currently feel as if they're enrolled in a clinical trial when a new drug is offered for sale (although perhaps they should), and it's going to take some doing to make clear that an investigative therapy is just that. Will patients sue, or try to sue, if it doesn't work? If it goes further than that and causes actual harm? I'm thinking of Lilly's gamma-secretase inhibitor that actually seemed to make Alzheimer's worse - how do we handle things like that?
What about the insurers? Will they be happy to have the costs of a Phase III trial offloaded onto them? Not likely. There's also the question of what therapies will get to hop onto this conveyor belt: how much proof-of-concept will be needed? Will that be for the insurers to decide, what investigational drugs they're willing to pay for, so that data can be obtained?
And about that data - it would be of great importance to establish, up front, just what sort of endpoint is being sought. Clear criteria would need to be established (both positive and negative) so that a regulatory decision could be reached in a reasonable time frame. Otherwise, I fear that there are any number of entrepreneurial types who would gladly stretch things out, as long as someone else is paying, in the hopes of finally seeing something useful. No one will - or should - pay for extending fishing expeditions.
Even after all these objections, I can still see some merit in the whole idea. But the question is, after you take all the objections into account (and there are surely more), how much merit is left over? It's not as clear-cut a case as Eschenbach (or Grove) would have a person believe. . .
Some early reactions to Eschenbach's proposal are here and here. There are, I should note, a few other aspects to his op-ed that will be subject of another post.
There's been a movement afoot to boycott Elsevier journals. It's started over in the mathematics community, led by Timothy Gowers, a serious mathematician indeed. The objections to Elsevier are the ones you'd think: high prices, unsplittable bundles of journal subscriptions for institutions, and their strong support for legislation like the Research Works Act.
Writing about this is tricky, since I'm on the editorial board of an ACS journal that competes with Elsevier titles. Of course, as that link in the first paragraph shows, Nature Publishing Group has no problem talking about the issue themselves, and they're competing tooth and claw with Elsevier. At any rate, there's now a central website for the boycott movement, and it continues to gain publicity. There are, of course, some fields where Elsevier is more prominent than others - biomedically, the Cell Press journals (and The Lancet) are heavy hitters, so a real test of this movement will be to see how many people from these fields it can attract.
Personally, I think that the current system of scientific publishing is increasingly outmoded, although I'm loath to forecast what will replace it. But we could be looking at another step in its demise.
Looking through the literature this morning, I thought about another technique that, although you see it published on, no organic chemist I know has ever actually used: electrochemistry. There are all sorts of odd reactions that can apparently be made to go at electrode surfaces, but what synthetic organic chemist has ever run one, besides someone in a group that concentrates on publishing papers on electrochemical reactions? Aside from a few inconclusive cyclic voltammetry scans in 1984, I sure haven't.
That's more harsh-sounding than I intended. I definitely don't think that the technique is useless, but it surely doesn't get used much. One problem is that there are so many different conditions - solvents, electrolytes, electrode materials, voltage/current regimens. If you've never done the stuff before, it's hard to know where to start. And that leads to the next problem, which is that so much of the equipment in the field has been home-made. That makes the activation barrier to trying it yourself that much higher: do you want to do this reaction enough to want to build your own apparatus and troubleshoot it? Or do you have something else to do? If someone sold a standard electrochemistry kit (controller box to run different conditions, set of different electrode materials, etc.), that would free some people up to find out what it could do for them, rather than wondering if they've built a decent setup.
Then there's the scale-up problem. When you're working at a surface to do your chemistry, that's always going to be a concern. What's the throughput? Enough to meet your needs? And if not, how exactly are you going to increase it, without having to rebuild the whole apparatus? There's probably a way to integrate flow chemistry with electrochemistry, which might solve that problem. But that mixture is, as yet, still in the realm of a few dedicated tinkerers - which is what one could say, sometimes, of the whole electrochemical field.
I'm hearing stories that there was a layoff (yet again) at Pfizer, this time affecting the Cambridge researchers. Word is that they got the word over the weekend, which seems rather unusual - anyone have any more details on this?
Nobelist Roald Hoffmann has directly taken on a topic that many chemists find painful: why aren't more chemistry Nobel prizes given to, well. . .chemists?
". . .the last decade has been especially unkind to "pure" chemists, as only four of ten Nobel awards could be classified as rewarding work comfortably ensconced in chemistry departments around the world. And five of the last ten awards have had a definite biological tinge to them.
I know that I speak from a privileged position, but I would urge my fellow chemists not to be upset."
He goes on to argue that the Nobel committee is actually pursuing a larger definition of chemistry than many chemists are, and that we should take it and run with it. Hoffmann says that the split between chemistry and biochemistry, back earlier in the 20th century, was a mistake. (And I think he's saying that if we don't watch out, we're going to make the same mistake again, all in the name of keeping the discipline pure).
We're going to run into the same problem over and over again. What if someone discovers some sort of modified graphene that's useful for mimicking photosynthesis, and possibly turning ambient carbon dioxide into a useful chemical feedstock? What if nanotechnology really does start to get off the ground, or another breakthrough is made towards room-temperature superconductors, this time containing organic molecules? What would a leap forward in battery technology be, if not chemistry? Or schemes to modify secreted proteins or antibodies to make them do useful things no one has ever seen? Are we going to tell everyone "No, no. Those are wonderful, those are great discoveries. But they're not chemistry. Chemistry is this stuff over here, that we complain about not getting prizes for".
Here's another intriguing Alzheimer's result, in a field that could certainly use some. A group at Case Western (no, not the gyre guy) has reported on the effects of the RXR ligand Bexarotene (brand name Targretin) in several different mouse models of the disease. Dosing with the compound seems to quickly lower the levels of the soluble forms of beta-amyloid in the rodents' brains, most likely by increasing the expression of the lipoprotein ApoE. (That one's long been associated with Alzheimer's).
Update: a reader notes that Merck seems to have some interest in a related mechanism, using LXR to upregulate ApoE in Alzheimer's. And this upcoming Keystone Conference is sure to feature a lot of interesting discussion on the topic as well.
Follow-up showed that more than the soluble forms had been cleared, though. A significant amount of the insoluble amyloid plaques had been removed at long time points (several days), and the hypothesis there is some sort of immune response (an approach that's been tried for years now through vaccines, with very mixed success). Having a single drug (which has already been approved for some oncology indications) that appears to work rapidly on both the soluble and insoluble forms of amyloid is both dramatic and unexpected.
The Case Western group saw improvement on behavior and memory in the mice as well, as you might well hope. Since this drug has already been through the FDA, you'd hope that the way is clear to trying this same idea out in human patients. That, of course, is where many bright ideas in Alzheimer's have come to grief. If the drug doesn't affect ApoE expression in quite the same way, or if the lipoprotein doesn't act similarly in humans, or if that plaque-clearing mechanism, whatever it is, doesn't kick in or goes awry, then these results could end up just being another wonderful rodent study that didn't translate. But it's absolutely worth finding out, and I hope that we do in short order.
Update: this study is already triggering more interest in the Alzheimer's community than can be contained, which has been the story every time something promising shows up. . .
Matthew Herper at Forbes has a very interesting column, building on some data from Bernard Munos (whose work on drug development will be familiar to readers of this blog). What he and his colleague Scott DeCarlo have done is conceptually simple: they've gone back over the last 15 years of financial statements from a bunch of major drug companies, and they've looked at how many drugs each company has gotten approved.
Over that long a span, things should even out a bit. There will be some spending which won't show up in the count, that took place on drugs that got approved during the earlier part of that span, but (on the back end) there's spending on drugs in there that haven't made it to market yet, too. What do the numbers look like? Hideous. Appalling. Unsustainable.
AstraZeneca, for example, got 5 drugs on the market during this time span, the worst performance on this list, and thus spent nearly $12 billion per drug. No wonder they're in the shape they're in. GSK, Sanofi, Roche, and Pfizer all spent in the range of $8 billion per approved drug. Amgen did things the cheapest by this measure, 9 drugs approved at about $3.7 billion per drug.
Now, there are several things to keep in mind about these numbers. First - and I know that I'm going to hear about this from some people - you might assume that different companies are putting different things under the banner of R&D for accounting purposes. But there's a limit to how much of that you can do. Remember, there's a separate sales and marketing budget, too, of course, and people never get tired of pointing out that it's even larger than the R&D one. So how inflated can these figures be? Second, how can these numbers jibe with the $800-million-per-new-drug estimate (recently revised to $1 billion), much less with the $43 million per new drug figure (from Light and Warburton) that was making the rounds a few months ago?
Well, I tried to dispose of that last figure at the time. It's nonsense, and if it were true, people would be lining up to start drug companies (and other people would be throwing money at them to help). Meanwhile, the drug companies that already exist wouldn't be frantically firing thousands of people and selling their lab equipment at auction. Which they are. But what about that other estimate, the Tufts/diMasi one? What's the difference?
As Herper rightly says, the biggest factor is failure. The Tufts estimate is for the costs racked up by one drug making it through. But looking at the whole R&D spend, you can see how money is being spent for all the stuff that doesn't get through. And as I and many of the other readers of this blog can testify, there's an awful lot of it. I'm now in my 23rd year of working in this industry, and nothing I've touched has ever made it to market yet. If someone wins $500 from a dollar slot machine, the proper way to figure the costs is to see how many dollars, total, they had to pump into the thing before they won - not just to figure that they spent $1 to win. (Unless, of course, they just sat down, and in this business we don't exactly have that option).
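The arithmetic behind those headline figures is nothing more than total R&D spend divided by the number of approvals, which is exactly why failure dominates the result. Here's a minimal sketch; the spending totals are back-calculated from the per-drug numbers quoted above, so treat them as illustrative placeholders rather than Herper's actual spreadsheet:

```python
# Cost per approved drug = total R&D over the period / approvals in the period.
# Dividing the whole spend by the few successes folds every failed program's
# cost into the winners -- the slot-machine accounting described above.
# Totals below are back-calculated from the post's per-drug figures (assumed).
companies = {
    # name: (approx. total R&D spend over ~15 yr, $B; drugs approved)
    "AstraZeneca": (59.0, 5),   # works out to ~$12B per drug
    "Amgen": (33.2, 9),         # works out to ~$3.7B per drug
}

def cost_per_drug(total_rnd_billion, approvals):
    """Total R&D divided by approvals; failures are included by construction."""
    return total_rnd_billion / approvals

for name, (spend, n) in companies.items():
    print(f"{name}: ${cost_per_drug(spend, n):.1f}B per approved drug")
```

The Tufts-style per-drug estimate, by contrast, tries to track the costs attributable to a single successful compound, which is why the two numbers differ by nearly an order of magnitude.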
No, these figures really show you why the drug business is in the shape it's in. Look at those numbers, and look at how much a successful drug brings in, and you can see that these things don't always do a very good job of adding up. That's with the expenses doing nothing but rising, and the success rate for drug discovery going in the other direction, too. No one should be surprised that drug prices are rising under these conditions. The surprise is that there are still people out there trying to discover drugs.
Everyone who's done drug discovery has encountered this situation: you get what looks like a hit in a screening assay, but when you re-check it with fresh material, it turns out to be inactive. So you go back to the original batch, but it's still active. There are several possibilities: if that original batch was a DMSO solution, perhaps the compound has done something funny on standing, and you don't have what you thought you had. Maybe the DMSO stock was made from the wrong compound, or was mislabeled somehow - in which case, good luck figuring out what's really in there. If the original batch was a solid, the first thing to do is a head-to-head analysis (NMR, LC-mass spec) between the two. (That sort of purity check is actually the first thing you should do with interesting screening hits in general, as experienced chemists will have had several chances to learn).
But if those assay numbers repeat for both batches, you're in the realm of the Infinitely Active Impurity. The thinking is, and it's hard to find fault with it, that there must be something in Batch One that's causing the assay to light up, something that's not present in Batch Two. I found myself in this situation one time where the problem turned out to be that Batch One had the right structure, except it was a zinc complex, a fact the original submitters apparently hadn't appreciated. (We had to send out for metals analysis to confirm that one). In that case, the assay could be made to show a hit by adding zinc to most any compound you wanted, which wasn't too useful.
Most of the time, chasing after these things proves futile, which is frustrating for everyone involved. But not always. There's a recent example of a successful impurity hunt in ACS Medicinal Chemistry Letters, from a group at Pfizer searching for inhibitors of kynurenine aminotransferase II.
One of the hits was that compound 6 shown in the figure, but a second batch of it showed no activity at all. They dug into the original sample, and found that there was a touch of the N-hydroxy compound in it, and that was the reason for all the activity. Interestingly, it turns out that the amino group was involved in a covalent interaction with the enzyme's cofactor, pyridoxal-5′-phosphate (PLP). That's one of the things you probably want to suspect when you find such tiny amounts of a compound having such a large effect.
It's not a deal-breaker, but it's something to keep in mind. The whole topic of irreversible inhibitors has come up around here before, but it's worth another post soon, in light of the recent acquisition of Avila Pharmaceuticals, who specialized in this field. In this case, the compound isn't covalently attached to the protein, but rather to its bound cofactor, which would make people breathe a bit easier. (And the group responsible for the covalency, an amine, isn't something to worry about, either).
Still, it's interesting to see this part of the paper:
"Although irreversible inhibition was not one of our lead criteria at the outset of the program, maintaining this attribute of 7 was a high priority through our optimization efforts. The potential advantages of irreversible inhibitors include low dose requirements and reduced off-target toxicity."
I say that because increased off-target toxicity has always been the worry with covalent drugs. But there's been a real revival of interest in the last few years - more on this next week.
If you're interested, here's the FDA's draft of their guidance on biosimilars, up on their site today. This is a slow-moving story that's going to end up having a big effect on the industry. Look at the number of top-selling drugs that are biologics, look at how long they're lasting on the market (patent protection or not), and you wonder how much competition will emerge, how successful it'll be, and how tricky it will be to approve things. If I were a real-time journalist, I'd mark this "Developing", but while that's true, in this case it means "Developing. . .over the next ten years".
I'd like to take a few minutes to remember someone that everyone in R&D should spare a thought for: Roger Boisjoly. If you don't know the name, you'll still likely know something about his story: he was one of the Morton Thiokol engineers who tried, unsuccessfully, to stop the Challenger space shuttle launch in 1986.
Here's more on him from NPR (and from one of their reporters who helped break the inside story of that launch at the time). Boisjoly had realized that cold weather was degrading the O-ring seals on the solid rocket boosters, and as he told NPR, when he blew the whistle:
"We all knew if the seals failed the shuttle would blow up."
Armed with the data that described that possibility, Boisjoly and his colleagues argued persistently and vigorously for hours. At first, Thiokol managers agreed with them and formally recommended a launch delay. But NASA officials on a conference call challenged that recommendation.
"I am appalled," said NASA's George Hardy, according to Boisjoly and our other source in the room. "I am appalled by your recommendation."
Another shuttle program manager, Lawrence Mulloy, didn't hide his disdain. "My God, Thiokol," he said. "When do you want me to launch — next April?"
When NASA overruled the Thiokol engineers, it was with a quote that no one who works with data, on the front lines of a project, should ever forget: "Take off your engineer hat," they told Boisjoly and the others, "and put your management hat on". Well, the people behind that recommendation managed their way to seven deaths and a spectacular setback for the US space program. As Richard Feynman said in his famous Appendix F to the Rogers Commission report, "For a successful technology, reality must take precedence over public relations, for nature cannot be fooled".
Not even with our latest management techniques can nature be fooled, no matter how much six-sigma, 4S, and what-have-you gets deployed. Nothing else works, either. Nature does not care where you went to school, what it says on your business cards, how glossy your presentation is, or how expensive your shirt. That's one of the things I like most about it, and I think that any scientist should know what I'm talking about when I say that. The real world is the real world, and the data are the data.
But it's up to us to draw the conclusions from those numbers, and to get those conclusions across to everyone else. It may well be true, as Ed Tufte has maintained, that one of the tragedies of the Challenger launch was that the engineers involved weren't able to do a clear enough presentation of their conclusions. Update: see this account by Boisjoly himself on this point. It might not have been enough in the end; there seem to have been some people who were determined to launch the shuttle and determined to not hear anything that would interfere with that goal. We shouldn't forget this aspect of the story, though - it's incumbent on us to get our conclusions across as well as we can.
Well, then, what about Nature not caring about how slick our slide presentations are? That, to me, is the difference between "slick" and "effective". The former tries to gloss over things; the latter gets them across. If the effort you're putting into your presentation goes into keeping certain questions from being asked, then it's veered over to the slick side of the path. To get all Aristotelian about it, the means of persuasion should be heavy on the logos, the argument itself, and you should do the best job you can on that. Steer clear of the pathos, the appeal to the emotions, and you should already be in a position to have the ethos (the trustworthiness of the speaker's character) working for you, without having to make it a key part of your case.
But today, spend a moment to remember Roger Boisjoly, and everyone who's ever been in his position. And look out, be very careful indeed, if anyone ever asks you to put your management hat on.
Here's a helpful translation, and there's more truth in it than there should be. My rule of thumb is to be extremely suspicious of a methods paper that doesn't have at least a couple of low-yielding or "NR" entries. If they aren't there, it means that someone didn't do enough experiments (or, worse, that they're not telling you about those little details).
Announcing layoffs along with a stock buyback - let's think about what that means. AstraZeneca did that just the other day, and they're far from the only ones in this industry (or others) spending billions to buy back their own shares while they're cutting costs elsewhere.
We already know what the companies have to say about what it means. All you have to do is say "shareholder value" and you're most of the way there. Mix in "continued commitment" and "cost containment", fit 'em all together with a verb or two, and you've got yourself an instant press release. And we also know what the investment community thinks: they like it. Go back over the news stories that have come out when a buyback is announced, and all the quotes will be about how large the amount is, whether it's in line with what people were expecting, or if it's one of those good moments when the company is spending even more to buy back its shares. No one would be so foolish as to announce a truly inadequate-looking stock repurchase.
That's a key point. As far as I can tell, share buybacks have two purposes. There's the obvious one of trying to provide some steady buying activity in the stock and (in theory) a floor for its price, while retiring shares to decrease the float (and increase earnings-per-share). But the other reason is signaling. "We think our stock's worth buying at this price", the company is saying, "and so should you. We care enough about our existing shareholders to spend money tending the share price for them. Please don't sell us, or downgrade us. We'll buy back even more - promise!"
Signaling is, I think, the greater of those two. There's a lot of room to question the actual financial effectiveness of stock buybacks. As one person in that link notes, if you want to reward current shareholders with cash, you should pay them a dividend. Trying to keep your stock price up (even if the plan were to work) only really rewards the people who sell your stock and realize the gains. (See below for who some of those people are, though).
That signaling had better be worth something. It goes without saying, or should, that the money being used to buy back shares could also be put back into a company's actual business. That's another signal, one that makes me grit my teeth. To me, a stock buyback has always said "We're willing to tell the world that we think that buying our own shares will provide a better return than investing in what we're supposed to be doing for a living." And why would you tell the world something like that? Isn't that also saying "We can't think of much else to do with this cash, what with our business in the shape it's in, and parking it in an investment fund would be sort of embarrassing. So we might as well use it to bribe the Street. God knows it's the only language they understand."
There are other people willing to put it in just those terms. That "Marketplace" link above features a quote from William Lazonick of UMass-Lowell (note: affiliation fixed after original post), who's not keeping his views bottled up:
"Here we have all these companies obsessed, basically with keeping their stock prices up, and saying the best thing that they can do with their money is spend billions of dollars on stock. And my view of that is, any company that says that they have nothing to better do with their money, the CEO should be fired."
A CEO's reply to that might well be that this attitude is why Lazonick's a professor rather than a CEO himself. But is he wrong? Here's a recent paper of his, which contends that the problem is that share buybacks are all too effective. Lazonick says that the problem is tied to the increasing compensation of top executives in shares and options, and that using company money to prop up the stock price is, basically, market manipulation to reward the executives.
He has some figures from our own industry: from 1997 to 2009, "Amgen did repurchases equal to 99 percent of R&D expenditures, Pfizer 67 percent, Merck 62 percent, and Johnson & Johnson 57 percent." It could be worse - companies in the IT sector have often managed to spend even more than their R&D budgets on repurchases, partly because they increased the number of shares outstanding so hugely during the dot-com boom years.
One complication with the market-manipulation view is that stock buybacks don't correlate very well with total stock returns. If anything, the correlation is negative: companies (and sectors) that spend the most on repurchases have lower returns. Of course, there's a correlation/causation problem here - perhaps those returns would have been even lower without the buybacks. But there's clearly no slam-dunk financial case to be made for repurchases.
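For readers who want to see what that (negative) correlation claim means mechanically, here's a minimal sketch of a Pearson correlation computation. The numbers are purely hypothetical, made up to illustrate the calculation - they are not the actual buyback and return data behind the studies discussed above:

```python
# Hypothetical figures, for illustration only: repurchase spending ($B)
# and total annualized return (%) for five imaginary companies.
buyback_spend = [1.0, 2.5, 4.0, 6.0, 8.5]
total_return = [9.0, 7.5, 8.0, 5.5, 4.0]

def pearson(xs, ys):
    """Pearson correlation coefficient: covariance over product of std devs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson(buyback_spend, total_return)
print(f"r = {r:.2f}")  # negative here: heavier repurchasers show lower returns
```

A negative r in real data is exactly where the correlation/causation caveat bites: it can't tell you whether buybacks depressed returns or whether struggling companies simply buy back more stock.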
Except one: that they're often the easiest and least controversial use of the money. Companies get criticized if they sit on cash reserves, and they get criticized for missing earnings-per-share numbers. Why not try to address both at the same time? And without having to actually think very hard about what to invest in? I think that Pfizer's Ian Read is being truthful when he says things like this:
Pfizer declined to make an executive available to discuss its policy. But in a statement, the company said it “remains committed to returning capital to shareholders through share buybacks and dividend payments.”
As for the cut in research spending in February, Pfizer said it has “accelerated our research strategy and made important changes to concentrate our efforts to deliver the greatest medical and commercial impact.”
In a conference call with analysts this month, Pfizer’s chief executive, Ian C. Read, said his company would “continually look” for acquisitions that would increase revenue growth. But in deciding how to use the proceeds from recent asset sales, he said “the case to beat is share repurchase.”
Well, we last got into arguments about industrial espionage here in December, so it's what? February already? Then here we go: from C&E News, we have this:
Federal prosecutors charged last week that Chinese government officials played a role in the theft from DuPont of technology to manufacture the paint pigment titanium dioxide.
According to a document filed on Jan. 31 in U.S. District Court for the Northern District of California, Federal Bureau of Investigation officials obtained letters in a search of the home of Walter Liew. The letters show that Liew “was tasked by representatives of the People’s Republic of China government to obtain technology used to build chloride-route titanium dioxide factories,” prosecutors say.
Here are more details from Reuters. More on this as it develops. . .
I've been meaning to mention the very interesting work that's shown up on tau protein in Alzheimer's. That's generally taken a back seat to amyloid in the protein-pathologies-of-Alzheimer's derby, but no one has been able to rule it out as a causative event, either. And the progress of tau pathology through the brain is quite suggestive - it tends to start in one region (the entorhinal cortex) and spread from there. The question is, what's driving that process? Is it tau itself spreading, or perhaps something inside the cell that causes tau problems is spreading, or is it some set of external conditions (that lead to tau pathologies) which is spreading?
This latest work goes a good way towards settling that question. (Here's one group's paper in PLoS One; the other paper in Neuron doesn't seem to be up yet, which has caused some controversy). The researchers in question engineered mice that express human tau protein localized to the entorhinal cortex (EC). They then sat back and watched what happened, taking samples along the way.
And what happened was spectacular. They found human tau in the EC initially, as expected. But over time, it began to show up in brain regions that are synaptically connected to the EC, and then it spread to the regions that are connected to those. This is human tau protein, remember - the only cells in the brains of these mice that should be able to make it are in the EC. In other words, the protein itself appears to be spreading from neuron to neuron, apparently through the synaptic junctions:
In general, our NT mouse model replicates the spatial and temporal aspects of the earliest stages (I–III) of Braak staging of tauopathy in Alzheimer's disease. We have demonstrated that tau pathology initiating in the EC can spread to other synaptically connected brain areas as the mice age, supporting the idea that AD progresses via an anatomical cascade as opposed to individual events occurring in differentially vulnerable regions.
They also now have a very interesting (and potentially very useful) mouse model of Alzheimer's pathology. There are still a huge number of open questions about Alzheimer's, don't get me wrong. But this is a real advance, in a field that doesn't see as many of those as everyone would like. Now to figure out how that protein is spreading (How's it excreted from the cell? How's it taken up by the next ones in line?) and why.
This is not the sort of academic-industry interaction I had in mind. There's a gigantic lawsuit underway between Agios and the Abramson Institute at the University of Pennsylvania, alleging intellectual property theft. There are plenty more details at PatentBaristas:
According to the complaint filed in the US District Court Southern District Of New York, the Institute was created by an agreement between The Abramson Family Foundation and the Trustees of the University of Pennsylvania. The Foundation donated over $110 Million Dollars to the Institute with the condition that the money was to be used to explore new and different approaches to cancer treatment.
Dr. Thompson later created a for-profit corporation that he concealed from the Institute. After a name change, that entity became the Defendant Agios Pharmaceuticals, Inc. Dr. Thompson did not disclose to the Institute that at least $261 million had been obtained by Agios for what was described as its “innovative cancer metabolism research platform” – i.e., the description of Dr. Thompson’s work at the Institute. Dr. Thompson did not disclose that Agios was going to sell to Celgene Corporation an exclusive option to develop any drugs resulting from the cancer metabolism research platform.
Three people with knowledge of Dr. Thompson’s version of events, two of whom would speak only on condition of anonymity because of the litigation, said that the University of Pennsylvania knew about Dr. Thompson’s involvement with Agios and even discussed licensing patents to the company, though no agreement was reached.
“When you start a company like this, you want to try to dominate the field,” said Lewis C. Cantley, another founder of Agios and the director of the cancer center at the Beth Israel Deaconess Medical Center in Boston. “The goal was to get as many patents as possible, and it was frustrating that we weren’t able to get any from Penn.”
Michael J. Cleare, executive director of Penn’s Center for Technology Transfer, declined to discuss whether negotiations had been held but said, “Yes, Penn knew about Agios.”
So, as the lawyers over at PatentBaristas correctly note, this is all going to come down to what happened when. And that's going to be determined during the discovery process - emails, meeting minutes, memos, text messages, whatever can establish who told what to whom. If there's something definitive, the whole case could end up being dismissed (or settled) before anything close to a trial occurs - in fact, that would be my bet. But that's assuming that something definite was transferred at all:
A crucial question, some patent law and technology transfer specialists said, could be whether Dr. Thompson provided patented technology to Agios or merely insights.
“If somebody goes out and forms a company and doesn’t take patented intellectual property — only brings knowledge, know-how, that sort of thing — we wouldn’t make any claims to it,” said Lita Nelsen, director of the technology licensing office at the Massachusetts Institute of Technology.
In its complaint, the Abramson institute does not cite any specific patents. It says Penn did not pursue the matter because Dr. Thompson had told the university that his role in Agios did not involve anything subject to the university’s patent policies. The lawsuit says the institute did not find out about Dr. Thompson’s role in Agios until late 2011.
There will probably be room to argue about what was transferred, which could get expensive. That accusation of not finding out about Agios until 2011, though, can't be right, since he's mentioned all over their press releases and meeting presentations at least two years before that. But no matter how this comes out, this is not the way to build trust.
I'm sitting in the main conference hall at the SLAS meeting as things get going. And I have to say, it's a big crowd, and there are some very interesting things on the agenda. But I've just seen something I've never seen before: an ad being played on the screens for the entire attending audience, just before the keynote address. Thermo Scientific has clearly put a lot of money into this meeting (as well they should), but sitting through an NFL-voice-over style ad during the kickoff to a scientific meeting ("The power to win!") is a real first.
As mentioned before, I'm going to be moderating a panel today on industry-academic collaborations in drug discovery at the SLAS meeting in San Diego. It starts at 10:30 AM Pacific Time, and you can access a live stream of the event here (scroll down).
And if anyone has more questions on the subject they'd like to see raised (or things that they'd rather not see raised again!), please add them to the comments for this post. I'll be checking it during the day.
For a really stunning electron micrograph of the thinnest possible layer of glass, see here. (If you don't have journal access, here's a release with some details). What's even more striking is that the semi-random arrangement of atoms is basically an exact match of a hypothesis from 1932 by W. H. Zachariasen at Chicago.
And maybe it's just me, but high-resolution images of molecular structure like this still give me the shivers. I mean, I've seen all sorts of electron density maps from X-ray crystallography, but somehow this sort of thing gives one a more direct feeling of looking at the individual atoms. And for some reason, that seems like something Man Was Not Meant to Do - perhaps it's all those old elementary school textbooks that told me that atoms could never be seen. (Then again, philosopher Mortimer Adler made the same assumption, as I found to my surprise when I read his Ten Philosophical Mistakes, on page 184 if you're keeping score at home.)
The editor of the journal Life has published an attempt at detailing how the notorious Andrulis paper managed to make its way into print. See how convincing you find it. In the course of explaining that it can be hard to find reviewers for interdisciplinary topics, and how the journal tries to find reputable people in each field (and carefully checks author suggestions for reviewers), we have this:
Life is a new journal that deals with new and sometime difficult interdisciplinary matters. Consequently, the journal will occasionally be presented with submitted articles that are controversial and/or outside conventional scientific views. Some papers recently accepted for publication in Life have attracted significant attention. Moreover, members of the Editorial Board have objected to these papers; some have resigned, and others have questioned the scientific validity of the contributions. . .
. . .In the case of the Dr. Andrulis’s long paper, the two reviewers were both faculty members of reputable universities different than the author’s and both went to considerable trouble presenting lengthy review reports. Dr. Andrulis revised his manuscript as requested, and the paper was subsequently published.
Really? Is that how it really went? I know what I would have said if they'd sent the paper to me: that it was a perfect example of what happens when an active, learned mind begins to slip loose from its moorings, and that while the paper appeared to have no scientific merit at all, it was quite useful as a diagnostic sign of oncoming psychosis.
If you only read the Life editor's remarks without reading any of the original paper, you might find them reasonable. But that's because you haven't been exposed to a theory that purports to explain the abiotic origins of life, the underlying principles of biochemistry, the formation of the solar system, the expansion of the universe, global weather patterns, the structure of cellular membranes, the distributions of comets and asteroids, the origins of riboviruses, the protein folding problem, the nature of biological aging, and the unification of quantum mechanics with general relativity. I have not made any of that up, it's all in the paper, and I would very much like to see a reviewer who could let all that go past. "Publish with revisions", sure.
From several reports, here's what I have on AstraZeneca's plans in Waltham: they've told people there that cuts are coming. But they haven't gotten very specific on when, or who, or how many. All those questions (that is, all the questions there could be) are under review.
Pfizer has done this to their people before, as have other companies in the throes of layoffs, and it's the only way I know to actually push morale and productivity down even further in such a situation. You come to work for weeks, for months, not knowing if you, your lab, or your whole department is heading for the chopping block. All you're sure of is that someone is. And will your own stellar performance persuade upper management to keep you, when the time comes? Not likely, under these conditions - it'll more likely be the sort of thing where they draw lines through whole areas. Your fate, most people feel at these times, is not in your own hands. A less motivating environment couldn't be engineered on purpose.
But that's what AZ's management has chosen to do at their largest research site in North America. I hope that they enjoy the results. But then (and more on this later), these are the people who have chosen to spend billions buying back their own stock rather than put it into research in the first place. It's not like the score isn't already up there on the big screen for everyone to see.
Update: as mentioned in the comments, this does at least give everyone some warning, and a chance to explore other options, as they say. And that's true. AZ employees, though, have been seeing nasty cuts for a while now, and have been well aware that they're not in a stable environment. It's hard to make the decision to leave, but there have been plenty of chances to think about it in the last two or three years.
But I was actually arguing against the company's Waltham strategy from the viewpoint of upper management, on their terms. It's better for employees to have some warning, but I think it's better, for a company, to cut if you're going to cut, and get it over with. If you say that deep cuts are coming, you should do the actual deed as soon as you can. Then you tell the departments that are left, "OK, the storm has passed. Let's try to turn this thing around". But this current situation is the worst of both worlds. "All right, people, here come the big cuts: this site's closed, that site's closed. But your site, well, we don't really want to close it, but we still haven't had time to work out how much to shrink it. Yeah, this was supposed to be the big announcement, but it's just been really busy - you know how it is. We're going to get around to you. Pretty soon. And pretty deep. But we don't know which parts to lop off, not just yet. Back to work, everyone!"
Fluorine NMR is underused in chemistry. Well, then again, maybe it's not, but it's one of those things that just seems like it should have more uses than it does. (Here's a recent book on the subject). Fluorine is a great NMR nucleus - all the F in the world is the same isotope, unless you're right next to a PET scanning facility - and the different compounds show up over a very wide range of chemical shifts. You've got that going for you, coupling information, NOE, basically all your friends from proton NMR.
There's a pretty recent paper showing a good use of all these qualities (blogged about here at Practical Fragments as well). A group at Amgen reports on their work using fluorine NMR as a fragment-screening tool. They can take mixtures of 10 or 12 compounds at a time (because of all those different chemical shifts) and run the spectra with and without a target protein in the vial. If a fragment binds, its F peak broadens out (you can even get binding constants if you run at a few different concentrations). A simple overlay of the two spectra tells you immediately if you have hits. You don't need to have any special form of the protein, and you don't even need to run in deuterated solvents, since you're just ignoring protons altogether.
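For readers curious how a binding constant falls out of that sort of line-broadening experiment, here's a minimal sketch. It assumes fast exchange (observed linewidth is the population-weighted average of free and bound linewidths) and simple 1:1 binding; the concentrations, linewidths, and the Kd are all made-up illustrative numbers, not values from the Amgen paper.

```python
import numpy as np
from scipy.optimize import curve_fit

P_TOT = 10.0  # total protein concentration in uM (hypothetical)

def frac_bound(L_tot, Kd, P_tot=P_TOT):
    """Fraction of ligand bound for 1:1 binding, from the exact quadratic."""
    b = P_tot + L_tot + Kd
    PL = (b - np.sqrt(b * b - 4.0 * P_tot * L_tot)) / 2.0  # complex conc.
    return PL / L_tot

def observed_linewidth(L_tot, Kd, lw_free, lw_bound):
    """Fast-exchange 19F linewidth: population-weighted average (Hz)."""
    fb = frac_bound(L_tot, Kd)
    return fb * lw_bound + (1.0 - fb) * lw_free

# Synthetic titration: the same fragment run at several concentrations (uM)
L = np.array([20.0, 50.0, 100.0, 200.0, 400.0])
true_Kd, lw_free, lw_bound = 80.0, 8.0, 120.0
lw_obs = observed_linewidth(L, true_Kd, lw_free, lw_bound)

# Fit Kd (plus the two limiting linewidths) to the titration data
popt, _ = curve_fit(observed_linewidth, L, lw_obs, p0=[50.0, 5.0, 100.0])
print(f"fitted Kd = {popt[0]:.1f} uM")
```

The dilute, strongly-bound fragments broaden the most, which is why a single with/without-protein overlay already flags the hits, and a few more concentrations turn the same measurement into a Kd.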
Interestingly, when they go on to try other assay techniques as follow-up, they find that the fluorines themselves aren't always a key part of the binding. Sometimes switching to the non-fluorinated version of the fragment gives you a better compound; sometimes it doesn't. The binding constants you get from the NMR, though, do compare very well to the ones from other assays.
The part I found most interesting was the intra-ligand NOE example. (That's also something that's been done in proton NMR, although it's not easy). They show a case where 19F ligands do get close enough to show the effect, and that a linked version of the two fragments does, as you'd hope, make a much more potent compound. That's the sort of thing that fragment people are always wanting to know - what fits next door to my hit? Can they be linked together? Fragment linking has its ups and downs, going back to the Abbott SAR-by-NMR days. That was a technique that never really panned out, as far as can be seen, but this is at least an experimentally easy way to give it a shot. (Of course, the chances of the fluorines on your ligands actually being pointed at each other are probably small, so that does cancel things out a bit).
Overall, it's a fun paper to read - well, allowing for my geeky interests, it is - and perhaps it'll lead a few more people to think of things that could be done with fluorine NMR in general. It's just sitting there, waiting to be used. . .
Update: it's all true. 7,300 job cuts in total. Montreal and Soedertaelje (Sweden) to close. And AZ seems to be all but getting out of pain/CNS, cutting down to a few dozen people who will do external collaborations. Oh, and they're buying back 4.5 billion dollars worth of stock, instead of spending that money on what the company tries to make a profit on. So there is that. If you'd like to hear AZ tell you how all this is making them more productive, here's the press release.
I've been hearing reports, which I hope are incorrect but as yet have no reason to doubt, that the AstraZeneca site in Montreal is set to close as a result of this latest round of layoffs. The official announcement is coming in a few hours - I wanted to put up this post so that more details can be added in the comments as people get them.
This will be bad news for the Montreal research community, which has already been taking it pretty hard over the last few years. As that link shows, though, they at least had a number of employers to start with, as opposed to some of the UK sites (and others) that had been R&D monocultures when their closures hit. But there's no way to really put a bright face on this stuff. . .
Noted chem-blogger Milkshake seems to have had a close call with a fire started by a tiny potassium hydride residue. It looks like he made it through without serious injury, but that sort of thing will definitely shake a person up.
I hate potassium hydride. Its relative sodium hydride is a common reagent, but it's much tamer (and even so, can cause interesting fires - I knew someone who ignited a heap of it on the pan of a balance while he was weighing it out, which slowed things down a bit). Sodium hydride is usually sold as a 60% dispersion, a dark grey powder soaked with mineral oil to keep it from deteriorating too quickly (and to keep it from setting everything on fire). You can buy 95% sodium hydride, the dry stuff, and there are people who swear by it, but I tend to sweat at it. You never know if it's been stored properly; you may be adding a slug of sodium hydroxide to your reaction without knowing it. And there's the fire part. You'll want to move briskly if you're using the 95%, and I'd pick a day when the humidity is low.
But potassium hydride, that's another beast entirely. It makes the sodium compound look like corn meal, in terms of how forgiving it is. You can't get away with the clumpy oily powder form at all - traditionally, KH is sold as a gooey dispersion of grey powder sitting under a few inches of mineral oil. If it's well dispersed, it's supposed to be 35%. You shake the stuff up until you think it's evenly mixed, then pipet out the amount of gunk that corresponds to the KH contained therein. Sure you do. What actually happens is that you pipet out the stuff, noticing while you do that it's already settling out inside the pipet, thereby to clog it up when you try to transfer it. No fun.
It's becoming available now dispersed in a block of wax, which is not such a bad idea at all. Wax isn't any harder to get out of your reaction than oil is, and you can carve off chunks and weigh them without so many what-am-I-doing moments. But Milkshake worries that this ease of use will lead to more fires during workups (which is where his reaction ran into trouble), and he may well be right. If you're going to use KH, don't let your guard down.
At Xconomy, Luke Timmerman has words of wisdom for people in the small biotech world: "Never back smug". That's a quote from venture capitalist Bob More, and it rings true to me as well. Says Timmerman: ". . .it strikes me that life sciences has more than its share of spinmeisters, hypesters, smoke-and-mirrors actors, and worse."
Then there’s smugness, that arrogance or sense of superiority. Developing innovative new drugs or devices requires a strong ego, high IQ, stamina, an inspiring personality that attracts other people, and other things. Often, that combination spills over into smugness or arrogance. More says he watches for a lot of the same cues that his sister, a teacher, watches for. . .
I suspect that many readers will have encountered this trait (very occasionally) in their careers. There's a particular danger in the sciences, because (on the one hand) there's so much to know that a given person does indeed have a good chance of knowing something that others don't. But on that inevitable other hand, this knowledge is set against a background of the huge, vast pile of what we don't know - and if you keep that perspective, that knowing little smile just starts to look ridiculous.
And consider the audience - scientists, good ones, pride themselves on curiosity and being able to master new material. That means that "You don't have to know about that" or "Don't you worry about that, that's my department" (not to mention "Oh, you probably wouldn't understand") aren't going to get a good reception, not from anyone who could be of any help, anyway. Someone with that kind of attitude ends up driving away people who are smart, competent, and motivated - they won't put up with it.