About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany as a post-doc on a Humboldt Fellowship. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
August 30, 2013
I've had several requests for details about the time I was a Jeopardy! contestant, since I mentioned it in passing the other day. So for the holiday weekend, I thought I'd provide the story. This was all back in 1995-1996, when I lived in New Jersey, and that's actually how I got into the entire business. Coworkers had told me about how the Merv Griffin production people would be administering the test to get on the show down at the Resorts International casino in Atlantic City (also owned by the Griffin company), so I drove down to try it out.
The test was only a short one, meant to be done quickly as a screen, and none of the questions seemed particularly hard. I spent the rest of my time in AC working on my card-counting skills at the blackjack table, which was not too lucrative. In fact, under the rules then - and I'm sure they've gotten no better - the same amount of time and effort applied to almost any other activity would surely have provided a greater return. (But at least they couldn't throw you out, unlike in Las Vegas.)
Not too many days later I got the invite to come down for the longer test, which had many more questions, all of which, I think, were from the $1000 category on the show. This was at the same casino, and I knew that morning, on the drive down, that I was in trouble. I'd gotten a late start, it was rainy, and there was more traffic than I'd counted on. I pulled in a few minutes late, bounded up the escalators, and was met by a lady sitting at a long table in front of a closed door. "I'm sorry", she said, "the test has already started".
"But there's another one in a couple of hours", she said, to my surprise and relief, "so we'll just put you down for that". Just then, there was the sound of someone frantically taking the escalator steps two at a time. Into view came a guy who looked even more frantic than I had - shoes untied, shirt half tucked in, hair sticking up on one side. "Don't worry!" I called. "There's another test later!" He caught his breath while taking in this news, and it was then that I noticed that his hands were full of almanacs and trivia books and the like. We walked off together, and he said "Good, good. . .this will give me time to study up some more!"
"I'm pretty much done with it", I told him. I had been brushing up over the last week on things that I didn't have covered so well - opera, Academy Award winners, some sports records and American presidential trivia - but I wasn't lying to him at all. I figured that if I didn't know something by the day of the test, I was unlikely to remember it when I needed to. "No, I've got to read up on things," the guy said, then turned to me and said "For example, what's the capital of Uzbekistan?"
"Tashkent", I told him, with no hesitation. Science, literature, history, and geography were my strong areas. He looked startled. "Oh s$%&!" he said, and sped off for parts unknown, there to clarify his map of Central Asia. After lunch it was time to take the test (much more challenging), and to wait around while the staff graded our sheets. They then called everyone together and read off the names of the people who had passed. Mister Tashkent did not seem to be among them, and I wondered if I'd fatally psyched him out. We did some dry runs of the game at that point, which served (from what I could see) to weed out the people who kept going "Ah. . .ah. . .um. . ." whenever it came time to answer a question.
And that was that, for a few months. They'd told us that we were on the list as possible contestants, and there was no way of knowing when or if we'd be called. But one day I had a message from LA, with the day of a taping, and I flew out for it quite happily. (I should note that the show covered not one penny of expenses, at least for the regular daily contestants). I showed up at the studio nervous but ready to go.
I got to see a couple of shows taped with some of the other crop of contestants before my turn came, and that gave me a chance to see some of the workings. The key to the whole thing was the moment of picking and answering. You had a chance to read the clue off the monitor while Alex Trebek was reading it out loud, and that was the time to figure out if you knew it and to prepare to try to answer it in the form of a flippin' question. You could not press your contestant's button too early, though - as they explained in detail, that locked you out for a delay period if you tried it, which would almost surely leave you without a shot. Timing was crucial. You had to wait for Trebek to stop speaking, wait about a sixteenth note of time, and then hit your button.
With the other two guys in my taping, that generally meant that all of us sat there poised while Trebek read off an answer, and then suddenly clickityclickityclickclick we'd all hit the buttons, so close to simultaneously as seemed to make no difference. There were a few times that I knew I'd reached out and snatched the right to answer a question, but others where I thought I had (but hadn't), along with a couple where I was as surprised as anyone else when my light came on.
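The buzzer mechanics described above - wait for the enable moment, and suffer a lockout if you press early - can be sketched as a toy simulation. To be clear, everything in this sketch is invented for illustration: the quarter-second lockout, the timing distributions, and the "single press per clue" simplification are my assumptions, not the show's actual rules.

```python
import random

LOCKOUT_S = 0.25  # assumed lockout penalty; the show's real value isn't given in the post

def buzz_winner(press_times, enable_time, lockout=LOCKOUT_S):
    """Index of the contestant whose buzz registers first.

    Simplified model: pressing before the enable signal delays that
    contestant's effective buzz by `lockout` seconds.
    """
    effective = [t + lockout if t < enable_time else t for t in press_times]
    return min(range(len(effective)), key=effective.__getitem__)

# Three contestants all aiming for just after the enable signal at t = 0
random.seed(7)
presses = [random.gauss(0.02, 0.03) for _ in range(3)]
print("presses:", presses, "-> winner:", buzz_winner(presses, 0.0))
```

Under these assumptions, a slightly early press is worse than a slightly late one, which matches the "wait about a sixteenth note, then hit it" advice.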
It all happened very quickly, and took a lot of concentration and fast thinking. The effort of reading answers and coming up with questions, while simultaneously watching the timing, deciding which category to go for, and keeping up with the score of the game was plenty to deal with. I remember two parts of the game very clearly, though. At one point, the taping paused for the commercial break, and some staff members came out to reapply makeup. I needed quite a bit, and Trebek remarked to the guy "You don't spend that much time on my makeup". "You don't sweat this much, Alex" came the response.
The other part I recall clearly was the Daily Double, which I was actively prospecting for whenever I had control of the board in the second round. I'd lost out on a few questions, and needed it to get back in the game. To my happiness, it came up in Geography, and I bet most of what I could. Up came the answer: "Lake Nasser sits on the border of these two African countries". My brain immediately pictured a map, while I played for time. Nasser could only mean Egypt, but I was having trouble figuring out the second country. "What are Egypt and. . . ." I started, while thinking to myself that it couldn't be Libya, that was a total desert out there. . .and the other side of the country, that was a coastline, the Red Sea. Trebek was looking at me, eyebrows raising a bit in anticipation, as if to say "You're not going to blow this one, are you?", as I finished with ". . .Sudan!". He gave a quick smile, and we were off again.
By the end of the game, I was in second place by $200 or so, a close race. The final Jeopardy category was English Literature, which gave me great happiness. The clue was "Mellors is the gamekeeper in this novel", and I immediately wrote "Lady Chatterley's Lover" on the scraggly, time-delayed screen. My only hope was that the guy ahead of me didn't get it, but alas, we all did. I lost, $13,300 to $13,100. The sensation was exactly that of coming off a carnival ride; the first thing I wanted was to go around again.
What valuable prizes did I win? Furniture, which I decided later to decline. I believe that a lot of it gets turned down like this, and probably for similar reasons to mine. I didn't care for the style, and had no place to put it. I could have perhaps sold it to someone, but this was pre-Craigslist, and in the meantime I was going to be paying tax on the full retail value, both to the IRS and to the state of California (a state tax form had been included in my going-away packet). A couple of weeks after I got back home, a package showed up with some boxes of Miracle-Gro, various flavors of cough drops, and other "Some contestants may also receive. . ." items (but alas, no Rice-a-Roni, which my family never ate while I was growing up, and which I always associated solely with game shows).
So that was my Jeopardy! experience. I enjoyed it tremendously, and I told people when I got back that I would have liked to be a contestant on the show for a living. A diet of Miracle-Gro and cough drops might have eventually impaired my button-pressing response times, though.
Category: Blog Housekeeping
Well, it's the Friday before a long holiday weekend here in the US, so I don't know if this is the day for long, detailed posts. I do have some oddities in the queue, though, so this is probably a good day to clear them out.
For starters, here's one in the tradition of the (in)famous Andrulis "gyre" paper. Another open-access publisher (SAGE) has an unusual item in their journal Qualitative Inquiry. (Some title, by the way - you could guess for days about what might appear under that category). The paper's title gets things off to a fine start: "Welcome to My Brain". And the abstract? Glad you asked:
This is about developing recursive, intrinsic, self-reflexive as de-and/or resubjective always evolving living research designs. It is about learning and memory cognition and experiment poetic/creative pedagogical science establishing a view of students ultimately me as subjects of will (not) gaining from disorder and noise: Antifragile and antifragility and pedagogy as movements in/through place/space. Further, it is about postconceptual hyperbolic word creation thus a view of using language for thinking not primarily for communication. It is brain research with a twist and becoming, ultimately valuation of knowledges processes: Becoming with data again and again and self-writing theory. I use knitting the Möbius strip and other art/math hyperbolic knitted and crocheted objects to illustrate nonbinary . . . perhaps. Generally; this is about asking how-questions more than what-questions.
Right. That's word-for-word, by the way, even though it reads as if parts of speech have been excised. Now, I do not, sadly, have access to journals with the kind of reach that Qualitative Inquiry displays, so I have not attempted to read the whole text. But the abstract sounds either like a very elaborate (and unenlightening) word game, or the product of a disturbed mind. The Neurobonkers blog, though, has some more, and it definitely points toward the latter:
This article is therefore about developing recursive intrinsic self-reflexive as de- and/or resubjective always evolving living research designs. Inquiry perhaps full stop—me: An auto-brain—biography and/or a brain theorizing itself; me theorizing my brain. It is thus about theorizing bodily here brain and transcorporeal materialities, in ways that neither push us back into any traps of biological determinism or cultural essentialism, nor make us leave bodily matter and biologies behind.
Apparently, most of the manuscript is taken up with those "This is about. . ." constructions, which doesn't make for easy reading, either. At various points, a being/character called "John" makes appearances, as do recurring references to knitting and to Möbius strips. Brace yourselves:
Knitting John, John knitting. Knitting John Möbius. Möbius knitting John. Giant Möbius Strips have been used as conveyor belts (to make them last longer, since “each side” gets the same amount of wear) and as continuous-loop recording tapes (to double the playing time). In the 1960’s Möbius Strips were used in the design of versatile electronic resistors. Freestyle skiers have named one of their acrobatic stunts the Möbius Flip. The wear and tear of my efforts. My stunts, enthusiasm knitting. My brain and doubling and John.
OK, that's deranged. And how could anyone at SAGE have possibly reviewed it? This is the same question that came up with the MDPI journals and the Andrulis paper - five minutes with this stuff and you feel like calling up the author and telling them to adjust their dosages (or perhaps like adjusting yours). This sort of thing is interesting in a roadside-accident sort of way, but it also brings open-access publishing into disrepute. Maybe it's time for not only a list of predatory publishers, but a list of nonpredatory ones that freely admit garbage.
Category: The Scientific Literature
August 29, 2013
As someone who will not be seeing the age of 50 again, I find a good deal of hope in a study out this week from Eric Kandel and co-workers at Columbia. In Science Translational Medicine, they report results from a gene expression study in human brain samples. Looking at the dentate gyrus region of the hippocampus, long known to be crucial in memory formation and retrieval, they found several proteins to have differential expression in younger tissue samples versus older ones. Both sets were from otherwise healthy individuals - no Alzheimer's, for example.
RbAp48 (also known as RBBP4 and NURF55), a protein involved in histone deacetylation and chromatin remodeling, stood out in particular. It was markedly decreased in the samples from older patients, and the same pattern was seen for the homologous mouse protein. Going into mice as a model system, the paper shows that knocking down the protein in younger mice causes them to show memory problems similar to elderly ones (object recognition tests and the good old Morris water maze), while overexpressing it in the older animals brings their performance back to the younger levels. Overall, it's a pretty convincing piece of work.
It should set off a lot of study of the pathways the protein's involved in. My hope is that there's a small-molecule opportunity in there, but it's too early to say. Since it's involved with histone coding, it could well be that this protein has downstream effects on the expression of others that turn out to be crucial players (but whose absolute expression levels weren't changed enough to be picked up in the primary study). Trying to find out what RbAp48 is doing will keep everyone busy, as will the question of how (and/or why) it declines with age. Right now, I think the whole area is wide open.
It is good to hear, though, that age-related memory problems may not be inevitable, and may well be reversible. My own memory seems to be doing well - everyone who knows me well seems convinced that my brain is stuffed full of junk, which detritus gets dragged out into the sunlight with alarming frequency and speed. But, like anyone else, I do get stuck on odd bits of knowledge that I think I should be able to call up quickly, but can't. I wonder if I'm as quick as I was when I was on Jeopardy almost twenty years ago, for example?
(If you don't have access to the journal, here's the news writeup from Science, and here's Sharon Begley at Bloomberg).
Category: Aging and Lifespan | The Central Nervous System
There's an article at The Atlantic titled "More Money Won't Win the War on Cancer". I agree with the title, although it's worth remembering that lack of money will certainly lose it. Money, in basic research, is very much in the "necessary but not sufficient" category.
The article itself is making the case of a book by Clifton Leaf, The Truth in Small Doses, a project that started with this article in Fortune in 2004. Here's the pitch:
What if a lack of research funding isn’t really the problem? One reason we aren’t making faster progress against cancer, according to Leaf, is because the federal grant process often chases the brightest minds from academic labs, and for those who do stay, favors low-risk “little questions” over swinging for the fences.
“More money by itself is not going to solve anything,” Leaf said. “Let’s say we doubled the [National Institutes of Health] budget, that isn’t going to make the lives of researchers better.”
The problem, as Leaf sees it, is with the business of cancer research. Over the last decade or so, “doing science” has reached a crisis stage—a claim many in the cancer community agree with, even if they don’t quite see eye-to-eye with Leaf on all of his conclusions.
His take is that the grant-money situation is making academic researchers spend more and more time just trying to get (or stay) funded, and that they tend to avoid anything that might sound a bit unusual in their applications. He also fears that academic researchers are taking too long to get established, that what might be some of their more creative years are being wasted in lengthy post-docs and struggles for tenure. I think that these are real problems, although they've been coming on for a long time now.
The article seems a bit too focused on the academic side of things; I don't know yet if the book makes the same mistake. Looking at it from industry, I think that the odds are that the first fundamental insights are more likely to come from academia, but I also think that the heavy lifting of turning these into real treatments will be done by industry. The difference between these has come up many times on this site, but it's safe to say that the general public does not appreciate it. The only place a breakthrough in the lab means an instant breakthrough in the clinic is in the movies.
To the extent, though, that people are told that "More Money" is the answer in this field, I think it's good to make the point that it isn't necessarily the limiting factor. Problem is, there's no way to hold a charity insight-raiser, or to set up a box to Donate Good Ideas For the Cure. Medical research, whether industrial or academic, is a pretty esoteric field to most people. There's not much way for an interested lay person to help out directly; the technical background is too much of a barrier. So people raise money, (while some just raise "awareness", a particularly slippery term), because it's the only way that they feel that they can make any difference.
Also, as has been said many times before, the "war on cancer" term is an unfortunate one, because it makes it sound as if there's a single enemy to be defeated. What we have is a war on our own ignorance of biology and medicinal chemistry, and that's going to be a long one. But perhaps I'm making the mistake that oncology pioneer Sidney Farber warned about:
(The patients) with cancer who are going to die this year cannot wait; nor is it necessary, in order to make great progress in the cure for cancer, for us to have the full solution of all the problems of basic research…the history of Medicine is replete with examples of cures obtained years, decades, and even centuries before the mechanism of action was understood for these cures.
Problem is, the only way I can think of to come up with cures without such understanding is to do a lot of out-there clinical trials, at high risk. Farber himself took that approach, famously, and managed to win out. But I'm not sure what appetite we'd have for it on a broad scale.
By the way, if you take a look at the comments section to the Atlantic piece, you'll find the usual stuff. You know - the drug companies don't want to cure cancer, no way. If people would just follow Doctor So-And-So's Miracle Diet, they'd be fine. According to these folks, all this talk of cancer research is a sham to start with. Of course, the number of such "cures" is beyond counting, and since so many of them claim to cure most everything, you'd think that they can't all be right. But somehow this doesn't seem to faze their adherents, who are often enthusiasts for several broad miracle cures simultaneously.
Category: Cancer
August 28, 2013
Azides have featured several times in the Things I Won't Work With series, starting with simple little things like, say, fluorine azide and going up to all kinds of ridiculous, gibbering, nitrogen-stuffed detonation bait. But for simplicity, it's hard to beat a good old metal azide compound, although if you're foolhardy enough to actually beat one of them it'll simply blow you up.
There's a new paper in Angewandte Chemie that illustrates this point in great detail. It provides the world with the preparation of all kinds of mercury azides, and any decent chemist will be wincing already. In general, the bigger and fluffier the metal counterions, the worse off you are with the explosive salts (perchlorates, fulminates, and the others in the sweaty-eyebrows category). Lithium perchlorate, for example, is no particular problem. Sodium azide can be scooped out with a spatula. Something like copper perchlorate, though, would be cause for grave concern, and a phrase like "mercury azide" is the last thing you want to hear, and it just might be the last thing you do.
As fate would have it, though, none of this chemistry is simple. You can get several crystalline forms of mercuric azide, for one thing. The paper tells you how to make small crystals of the alpha form, which is not too bad, as long as you keep it moist and in the dark, and never, ever, do anything with it. You can make larger crystals, too, by a different procedure, but heed the authors when they say: "This procedure is only recommended on a small scale, since crystalline α-Hg(N3)2 is very sensitive to impact and friction even if it is wet. Heavy detonations occur frequently if crystalline α-Hg(N3)2 is handled in dry state".
Ah, but now we come to the beta form. This, by contrast, is the unstable kind of mercury azide, as opposed to that spackle we were just discussing. These crystals are not as laid-back, and tend to blow up even if they're handled wet. Or even if they're not handled at all. Here, see if you've ever seen an experimental procedure quite like this one:
After a few minutes, the deposition of needle-like crystals starts at the interface between the nitrate and the azide layer (β-Hg(N3)2). After some time, larger crystals tend to sink down, during this period explosions frequently occur which leads to a mixing of the layers, resulting in the acceleration of crystal formation and the growth of a mat of fine needle-like crystals. . .
Hard to keep a good smooth liquid interface going when things keep blowing up in there, that's for sure. Explosions are definitely underappreciated as a mixing technique, but in this case, they are keeping you from forming any larger crystals, a development which the paper says, with feeling, "should be avoided by all means". But it's time to reveal something about this paper: all this mercury azide stuff is just the preparation of the starting material for the real synthesis. What the paper is really focused on is the azide salt of Millon's base [Hg2N+].
Now that is a crazy compound. Millon's base is a rather obscure species, unless you're really into mercury chemistry or really into blowing things up (and there's a substantial overlap between those two groups). A lot of the literature on it is rather old (it was discovered in the early 1800s), and is complicated by the fact that it usually comes along as part of a mixture of umpteen mercury species. But it really is a dimercury-nitrogen beast, and what it's been lacking all these years - apparently - is an azide counterion.
There are two crystalline forms of that one, too, and both preparations have their little idiosyncrasies. Both forms, needless to say, are hideously sensitive to friction, shock, and so on - there's no relief there. For the beta form, you take some of that mercuric diazide and concentrated aqueous ammonia, and heat them in an autoclave at 180C for three weeks. No, I didn't just have some sort of fit at the keyboard; that's what it says in the paper. I have to say, putting that stuff in an autoclave has roughly the same priority, for me, as putting it under my armpits, but that's why I don't do this kind of chemistry.
But the alpha form of the Millon's azide, now that one takes some patience. Read this procedure and see what it does for you:
Nitridodimercury bromide [Hg2N]Br (0.396g, 0.8mmol) is suspended in a saturated aqueous solution of sodium azide NaN3 (dest. ca. 3mL) at ambient temperature, resulting in an orange suspension which was stirred for ten minutes. The solution is stored at ambient temperature without stirring under exclusion of light. After one week, the colourless supernatant was removed by decantation or centrifugation and the orange residue was again suspended in a saturated aqueous solution of sodium azide NaN3. This procedure was repeated for 200 to 300 days, while the completion of the reaction was periodically monitored by PXRD, IR and Raman spectroscopy. . .
So you're looking at eight months of this, handling the damn stuff every Monday morning. The authors describe this procedure as "slightly less hazardous" than the other one, and I guess you have to take what you can get in this area. But the procedure goes on to say, rather unexpectedly, that "longer reaction times lead to partial decomposition", so don't go thinking that you're going to get a higher yield on the one-year anniversary or anything. What a way to spend the seasons! What might occur to a person, after months of azidomercurial grunt work. . .surely some alternate career would have been better? Farm hand at the wild animal ranch, maybe? Get up when the chickens would be getting up, if they'd made it. . .head out to the barn and slop the wolverines. . .hmm, forsythia's starting to bloom, time to neuter the hyenas soon. . .
No, no such luck. The hyenas will have to remain unspayed, because it's time to add fresh azide to the horrible mercury prep. Only three more months to go! Sheesh.
Category: Things I Won't Work With
August 27, 2013
Luke Timmerman has a good piece on a drug (Bexxar) that looked useful, had a lot of time, effort, and money spent on it, but still never made any real headway. GSK has announced that they're ceasing production, and if there are headlines about that, I've missed them. Apparently there were only a few dozen people in the entire US who got the drug at all last year.
When you look at the whole story, there’s no single reason for failure. There were regulatory delays, manufacturing snafus, strong competition, reimbursement challenges, and issues around physician referral patterns.
If this story sounds familiar, it should—there are some striking similarities to what happened more recently with Dendreon’s sipuleucel-T (Provenge). If there’s a lesson here, it’s that cool science and hard medical evidence aren’t enough. When companies fail to understand the markets they are entering, the results can be quite ugly, especially as insurers tighten the screws on reimbursement. If more companies fail to pay proper attention to these issues, you can count on more promising drugs like Bexxar ending up on the industry scrap heap.
Category: Business and Markets | Drug Development
Blogger Pete over at Fragment-Based Drug Discovery has a tale to tell about trying to get a paper published. He sent in a manuscript on alkane/water partition coefficients to the Journal of Chemical Information and Modeling, only to get back the "not sent out for review" response. That's the worst, the "We're not even going to consider this one" letter. And the odd thing is that, as he rightly put it, this does sound like a JCIM sort of paper, but the editor's response was that it was inappropriate for the journal, and that they had "limited interest" in QSAR/QSPR studies.
So off the paper went to the Journal of Computer-Aided Molecular Design. But as it was going to press, what should appear in JCIM but a paper on. . .alkane/water partition coefficients. There follows some speculation on how and why this happened, and if further details show up, I'll report on them.
But the whole "not sent out for review" category is worth thinking about. I'd guess that most papers that fall into that category truly deserve to be there - junk, junk that's written impossibly and impenetrably poorly, things that should have been sent to a completely different journal. These are the scientific equivalent of Teresa Nielsen Hayden's famous Slushkiller post, about the things that show up unsolicited at a publisher's office. If you're editing a science fiction magazine, you might be surprised to get lyric poetry submissions in another language, or biographical memoirs about growing up in Nebraska - but you'd only be surprised, apparently, if you'd never edited a science fiction magazine before (or any other kind).
But a journal editor can consign all sorts of papers to the outer darkness. At some titles, just getting a manuscript sent out to the referees is an accomplishment, because the usual response is "Stop wasting our time" (albeit not in those exact words, not usually). An author isn't going to be surprised in those cases, but getting that treatment at a less selective journal is more problematic.
Category: The Scientific Literature
August 26, 2013
I recently had an e-mail exchange with someone who wanted me to read one of the many books out there that claims that some particular food additive is poisoning everyone. I'm not linking to the stuff, so I'll call the book's author Dr. Cassandra, for short. We argued about data and mechanisms a bit, but my correspondent also brought up what he felt were many other conspiracies around food and health, and I couldn't agree with him on any of those, either. That led to me writing this to him:
Let me get philosophical: one of the big problems with this sort of thinking is deciding what to trust. If you decide that Most Of What You Think You Know Is Wrong, then you have some work ahead of you. If these various authorities and well-documented sources of primary material are faked, then what *isn't* faked? How do you know that the stuff you've decided to believe is on the level? My usual answer to someone who tries to convince me of the 9/11 stuff, etc., is to lower my voice and say "Well, yeah, but that's just what they want you to think". It's a universal answer. You can't falsify it.
Too often, what happens is that someone chooses to believe the things that fit their worldview, and dismisses the stuff that doesn't. That's human nature, but scientific inquiry is alien to human nature. If you start in with the conspiratorial stuff, then you end up skipping through the fields of data and sources, picking a daisy here and a cherry there, until you've made a wonderful centerpiece out of little bits from all over the place. And you can end up telling yourself, "See, this must be real. Look at this wonderful thing I've assembled, all the parts fit together so well - how can it be anything other than true?" But beautiful sculptures can be made from all kinds of found objects. If you start by assuming your conclusion - they're covering something up! - then you can get there any of a million ways.
So try this thought experiment: how do you know that (Dr. Cassandra) isn't just a plant? A false flag? Someone who's been put out there to make his beliefs look silly and under-researched (because believe me, he does)? Could someone in the pay of the Mighty Conspiracy do a better job of bringing its opposition into disrepute? That's the problem with conspiratorial thinking: the rabbit hole has no bottom to it. I refuse to dive in.
So my correspondent and I agreed to disagree. He thinks that eventually I'll see the truth of some of his beliefs, which I very much doubt. And I have little to no hope that he'll ever accept any of mine. The points made above have naturally been made by many others who've examined conspiratorial thinking, and I don't see much of a way around them. When you get to the Vast Overarching Conspiracy level of some of these schemes, you really do wonder how the believers manage to function. It's only a short step to the sorts of worldviews depicted in Donna Kossy's compendium Kooks: A Guide to the Outer Limits of Human Belief, which is worth a look if you've never encountered 100-proof paranoia before.
Category: Snake Oil
So Amgen's bid for Onyx looks like it's going through, and the reaction of John Carroll at FiercePharma was to tweet "Expect big layoffs soon". He took some flak for being such a downer, but he's right, as far as I can see. Amgen isn't buying Onyx for their research staff, or any of their people at all. As that Bloomberg story linked to above has it, "Amgen to Buy Onyx for $10.4 Billion to Gain Cancer Drug".
That's Kyprolis (carfilzomib), their proteasome inhibitor, and that's all they need from Onyx, who picked up the compound themselves when they acquired Proteolix a few years ago. So since I don't want to be a downer either, especially on Monday morning, I'd be interested to see if anyone can make another case. . .
Category: Business and Markets | Cancer
August 23, 2013
And after that mention of CEO pay, this sounds like a good time to link to this article from Nature Biotechnology. If you've ever been curious about why different companies pay out in stock options and/or restricted stock, this will satisfy your curiosity and more. A big part of the answer, you will not be surprised to hear, is the tax code, and if you're someone getting these kinds of compensation, you need to know some tax angles from your end, too.
And, of course, the type of award that works out best for the company doesn't always work out best for the grantee. Likewise, not every grantee will be best served by a single kind of award - it all depends on what you're trying to reward:
Although stock options continue to be a popular employee incentive device, in the past few years their advantages have been diminished through accounting and tax law changes, whereas their shortcomings have become more apparent in the biotech sector—in which a consistently growing stock price is far from assured, or even likely. As a consequence, biotech firms are moving away from an exclusive reliance on stock options and instead are using a mix of equity-based incentives, most commonly a combination of stock options and performance-based stock units.
From the perspective of a founder or other employee, the shift to a combination of stock options and some form of restricted stock or stock units should be welcome, making it less likely that the employee's awards will have no value at all. Unlike the corporate employer, an employee would prefer that restricted stock or stock units not be subject to performance conditions. . .
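That "no value at all" point is worth making concrete. Here's a toy payoff comparison, with entirely invented grant sizes, strike, and prices, that shows why restricted stock units keep some value when at-the-money options go underwater:

```python
# Toy payoff comparison: stock options vs. restricted stock units (RSUs).
# Illustrative numbers only -- grant sizes, strike, and prices are invented.

def option_value(shares, strike, price):
    """Intrinsic value of options at a given stock price."""
    return shares * max(price - strike, 0.0)

def rsu_value(shares, price):
    """RSUs are whole shares, so they track the price directly."""
    return shares * price

strike = 20.0                      # options granted at-the-money
for price in (10.0, 20.0, 40.0):
    opt = option_value(1000, strike, price)
    rsu = rsu_value(250, price)    # assuming a grant ratio of ~4 options per RSU
    print(f"price ${price:>5.2f}: options ${opt:>9.2f}, RSUs ${rsu:>9.2f}")
```

At $10 the options are worth nothing while the (smaller) RSU grant still holds $2,500; at $40 the option grant pulls ahead. That's the trade-off the quoted passage is describing.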
Definitely worth a look if you haven't thought about these details. After a good long stare, though, you may decide that the best course is to pay someone else to think about these things for you (!)
Category: Business and Markets
In the case of Microsoft's Steve Ballmer, the stock market appears to be saying "About minus 18 billion dollars". As Alex Tabarrok notes here, that sort of puts average CEO compensation in perspective. . .do we have some bigwigs in this business who could do as much for their shareholders by following Ballmer's example?
Category: Business and Markets
We chemists have always looked at the chemical machinery of living systems with a sense of awe. A billion years of ruthless pruning (work, or die) have left us with some bizarrely efficient molecular catalysts, the enzymes that casually make and break bonds with a grace and elegance that our own techniques have trouble even approaching. The systems around DNA replication are particularly interesting, since that's one of the parts you'd expect to be under the most selection pressure (every time a cell divides, things had better work).
But we're not content with just standing around envying the polymerase chain reaction and all the rest of the machinery. Over the years, we've tried to borrow whatever we can for our own purposes - these tools are so powerful that we can't resist finding ways to do organic chemistry with them. I've got a particular weakness for these sorts of ideas myself, and I keep a large folder of papers (electronic, these days) on the subject.
So I was interested to have a reader send along this work, which I'd missed when it came out on PLOSONE. It's from Pehr Harbury's group at Stanford, and it's in the DNA-linked-small-molecule category (which I've written about, in other cases, here and here). Here's a good look at the pluses and minuses of this idea:
However, with increasing library complexity, the task of identifying useful ligands (the ‘‘needles in the haystack’’) has become increasingly difficult. In favorable cases, a bulk selection for binding to a target can enrich a ligand from non-ligands by about 1000-fold. Given a starting library of 10^10 to 10^15 different compounds, an enriched ligand will be present at only 1 part in 10^7 to 1 part in 10^12. Confidently detecting such rare molecules is hard, even with the application of next-generation sequencing techniques. The problem is exacerbated when biologically-relevant selections with fold-enrichments much smaller than 1000-fold are utilized.
Ideally, it would be possible to evolve small-molecule ligands out of DNA-linked chemical libraries in exactly the same way that biopolymer ligands are evolved from nucleic acid and protein libraries. In vitro evolution techniques overcome the ‘‘needle in the haystack’’ problem because they utilize multiple rounds of selection, reproductive amplification and library re-synthesis. Repetition provides unbounded fold-enrichments, even for inherently noisy selections. However, repetition also requires populations that can self-replicate.
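The arithmetic behind that "unbounded fold-enrichments" line is worth spelling out: a per-round enrichment compounds geometrically over rounds. A quick sketch (the per-round factor and library size are just the round numbers from the quote above):

```python
# Repeated selection rounds compound: a per-round enrichment factor E
# applied over n rounds gives E**n overall.

def overall_enrichment(per_round, rounds):
    return per_round ** rounds

library_size = 10**12
hit_frequency = 1 / library_size            # a single hit: 1 in 10**12
after_one_round = hit_frequency * 1000      # 1e-9 -- still one in a billion

# Four rounds of the same noisy 1000-fold selection, though, give a
# 10**12-fold enrichment -- enough to bring that hit to the top of the pool.
print(overall_enrichment(1000, 4))          # 1000000000000
```

That's the whole argument for closing the replicate-select-amplify loop: one pass can't find the needle, but repetition can, provided the library can copy itself.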
That it does, and that's really the Holy Grail of evolution-linked organic synthesis - being able to harness the whole process. In this sort of system, we're talking about using the DNA itself as a physical prod for chemical reactivity. That's also been a hot field, and I've written about some examples from the Liu lab at Harvard here, here, and here. But in this case, the DNA chemistry is being done with all the other enzymatic machinery in place:
The DNA brings an incipient small molecule and suitable chemical building blocks into physical proximity and induces covalent bond formation between them. In so doing, the naked DNA functions as a gene: it orchestrates the assembly of a corresponding small molecule gene product. DNA genes that program highly fit small molecules can be enriched by selection, replicated by PCR, and then re-translated into DNA-linked chemical progeny. Whereas the Lerner-Brenner style DNA-linked small-molecule libraries are sterile and can only be subjected to selective pressure over one generation, DNA-programmed libraries produce many generations of offspring suitable for breeding.
The scheme below shows how this looks. You take a wide variety of DNA sequences, and have them each attached to some small-molecule handle (like a primary amine). You then partition these out into groups by using resins that are derivatized with oligonucleotide sequences, and you plate these out into 384-well format. While the DNA end is stuck to the resin, you do chemistry on the amine end (and the resin attachment lets you get away with stuff that would normally not work if the whole DNA-attached thing had to be in solution). You put a different reacting partner in each of the 384 wells, just like in the good ol' combichem split/pool days, just with DNA as the physical separation mechanism.
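The "DNA as routing instructions" idea can be sketched as a toy model. Everything here is invented for illustration (tiny scale, made-up building-block names; the real system uses oligo-derivatized resin and 384-well plates), but the logic is the paper's: each codon picks a well, and each well appends its building block:

```python
# Toy model of DNA-programmed split synthesis. Names and scale are
# illustrative only -- two coding positions with three "wells" each,
# instead of the paper's five positions and 384 wells.

def translate(gene, building_blocks):
    """Route a gene (a tuple of codon indices) through successive chemistry
    steps: at each position, the codon selects the well, and that well's
    building block is appended to the growing small molecule."""
    molecule = []
    for position, codon in enumerate(gene):
        molecule.append(building_blocks[position][codon])
    return molecule

blocks = [
    {0: "amineA", 1: "amineB", 2: "amineC"},   # step 1 reagents (made up)
    {0: "acidX", 1: "acidY", 2: "acidZ"},      # step 2 reagents (made up)
]

print(translate((1, 2), blocks))   # ['amineB', 'acidZ']
```

The point of the model: the DNA sequence fully determines the synthetic product, which is what lets selection on the molecule be read out (and amplified) through its attached gene.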
In this case, the group used 240-base-pair DNA sequences, two hundred seventeen billion of them. That sentence is where you really step off the edge into molecular biology, because without its tools, generating that many different species, efficiently and in usable form, is pretty much out of the question with current technology. That's five different coding sequences, in their scheme, with 384 different ones in each of the first four (designated A through D), and ten in the last one, E. How diverse was this, really? Get ready for more molecular biology tools:
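The diversity arithmetic checks out, and it's worth seeing how quickly it compounds (Avogadro's number is the only outside fact used here):

```python
# Library size from the coding scheme: 384 codons at each of positions
# A through D, and ten at position E.
library_size = 384**4 * 10
print(library_size)               # 217432719360 -- the ~217 billion quoted

# How deeply a typical 10-picomole selection aliquot samples that library:
molecules_in_10_pmol = 6.022e23 * 10e-12
mean_copies = molecules_in_10_pmol / library_size
print(round(mean_copies, 1))      # ~27.7 copies per gene, on average
```

That average of roughly 28 copies per gene is why the coverage question below matters: with perfectly even codon usage almost nothing would be missing, but skewed codon frequencies push a few percent of genes out of the aliquot entirely.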
We determined the sequence of 4.6 million distinct genes from the assembled library to characterize how well it covered ‘‘genetic space’’. Ninety-seven percent of the gene sequences occurred only once (the mean sequence count was 1.03), and the most abundant gene sequence occurred one hundred times. Every possible codon was observed at each coding position. Codon usage, however, deviated significantly from an expectation of random sampling with equal probability. The codon usage histograms followed a log-normal distribution, with one standard deviation in log-likelihood corresponding to two-to-three fold differences in codon frequency. Importantly, no correlation existed between codon identities at any pair of coding positions. Thus, the likelihood of any particular gene sequence can be well approximated by the product of the likelihoods of its constituent codons. Based on this approximation, 36% of all possible genes would be present at 100 copies or more in a 10 picomole aliquot of library material, 78% of the genes would be present at 10 copies or more, and 4% of the genes would be absent. A typical selection experiment (10 picomoles of starting material) would thus sample most of the attainable diversity.
The group had done something similar before with 80-codon DNA sequences, but this system has 1546, which is a different beast. But it seems to work pretty well. Control experiments showed that the hybridization specificity remained high, and that the micro/meso fluidic platform being used could return products with high yield. A test run also gave them confidence in the system: they set up a run with all the codons except one specific dropout (C37), and also prepared a "short gene", containing the C37 codon, but lacking the whole D area (200 base pairs instead of 240). They mixed that in with the drop-out library (in a ratio of 1 to 384) and split the mixture out onto a C-codon-attaching array of beads. They then did the chemical step, attaching one peptoid piece onto all of them except the C37 binding well - that one got biotin hydrazide instead. Running the lot of them past streptavidin took the ratio of the C37-containing ones from 1:384 to something over 35:1, an enhancement of at least 13,000-fold. (Subcloning and sequencing of 20 isolates showed they all had the C37 short gene in them, as you'd expect).
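The fold-enrichment figure from that spike-in experiment is just the ratio of ratios, which is easy to verify:

```python
# Fold-enrichment in the C37 spike-in test: the short gene went from
# 1:384 against the drop-out library to better than 35:1 after the
# streptavidin pull-down.
before = 1 / 384          # starting ratio of C37 gene to competitors
after = 35 / 1            # ratio after selection
fold = after / before
print(round(fold))        # 13440 -- the "at least 13,000-fold" quoted
```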
They then set up a three-step coupling of peptoid building blocks on a specific codon sequence, and this returned very good yields and specificities. (They used a fluorescein-tagged gene and digested the product with PDE1 before analyzing them at each step, which ate the DNA tags off of them to facilitate detection). The door, then, would now seem to be open:
Exploration of large chemical spaces for molecules with novel and desired activities will continue to be a useful approach in academic studies and pharmaceutical investigations. Towards this end, DNA-programmed combinatorial chemistry facilitates a more rapid and efficient search process over a larger chemical space than does conventional high-throughput screening. However, for DNA-programmed combinatorial chemistry to be widely adopted, a high-fidelity, robust and general translation system must be available. This paper demonstrates a solution to that challenge.
The parallel chemical translation process described above is flexible. The devices and procedures are modular and can be used to divide a degenerate DNA population into a number of distinct sub-pools ranging from 1 to 384 at each step. This coding capacity opens the door for a wealth of chemical options and for the inclusion of diversity elements with widely varying size, hydrophobicity, charge, rigidity, aromaticity, and heteroatom content, allowing the search for ligands in a ‘‘hypothesis-free’’ fashion. Alternatively, the capacity can be used to elaborate a variety of subtle changes to a known compound and exhaustively probe structure-activity relationships. In this case, some elements in a synthetic scheme can be diversified while others are conserved (for example, chemical elements known to have a particular structural or electrostatic constraint, modular chemical fragments that independently bind to a protein target, metal chelating functional groups, fluorophores). By facilitating the synthesis and testing of varied chemical collections, the tools and methods reported here should accelerate the application of ‘‘designer’’ small molecules to problems in basic science, industrial chemistry and medicine.
Anyone want to step through? If GSK is getting some of their DNA-coded screening to work (or at least telling us about the examples that did?), could this be a useful platform as well? Thoughts welcome in the comments.
Category: Chemical Biology | Chemical News | Drug Assays
August 22, 2013
Here's a new paper from Michael Shultz of Novartis, who is trying to cut through the mass of metrics for new compounds. I cannot resist quoting his opening paragraph, but I do not have a spare two hours to add all the links:
Approximately 15 years ago Lipinski et al. published their seminal work linking molecular properties with oral absorption.1 Since this ‘Big Bang’ of physical property analysis, the universe of parameters, rules and optimization metrics has been expanding at an ever increasing rate (Figure 1).2 Relationships with molecular weight (MW), lipophilicity,3 and 4 ionization state,5 pKa, molecular volume and total polar surface area have been examined.6 Aromatic rings,7 and 8 oxygen atoms, nitrogen atoms, sp3 carbon atoms,9 chiral atoms,9 non-hydrogen atoms, aromatic versus non-hydrogen atoms,10 aromatic atoms minus sp3 carbon atoms,6 and 11 hydrogen bond donors, hydrogen bond acceptors and rotatable bonds12 have been counted and correlated.13 In addition to the rules of five came the rules of 4/40014 and 3/75.15 Medicinal chemists can choose from composite parameters (or efficiency indices) such as ligand efficiency (LE),16 group efficiency (GE), lipophilic efficiency/lipophilic ligand efficiency (LipE17/LLE),18 ligand lipophilicity index (LLEAT),19 ligand efficiency dependent lipophilicity (LELP), fit quality scaled ligand efficiency (LE_scale),20 percentage efficiency index (PEI),21 size independent ligand efficiency (SILE), binding efficiency index (BEI) or surface binding efficiency index (SEI)22 and composite parameters are even now being used in combination.23 Efficiency of binding kinetics has recently been introduced.24 A new trend of anthropomorphizing molecular optimization has occurred as molecular ‘addictions’ and ‘obesity’ have been identified.25 To help medicinal chemists there are guideposts,21 rules of thumb,14 and 26 a property forecast index,27 graphical representations of properties28 such as efficiency maps, atlases,29 ChemGPS,30 traffic lights,31 radar plots,32 Craig plots,33 flower plots,34 egg plots,35 time series plots,36 oral bioavailability graphs,37 face diagrams,28 spider diagrams,38 the golden triangle39 and the golden ratio.40
He must have enjoyed writing that one, if not tracking down all the references. This paper is valuable right from the start just for having gathered all this into one place! But as you read on, you find that he's not too happy with many of these metrics - and since there's no way that they can all be equally correct, or equally useful, he sets himself the task of figuring out which ones we can discard. The last reference in the quoted section below is to the famous "Can a biologist fix a radio?" paper:
While individual composite parameters have been developed to address specific relationships between properties and structural features (e.g. solubility and aromatic ring count) the benefit may be outweighed by the contradictions that arise from utilizing several indices at once or the complexity of adopting and abandoning various metrics depending on the stage of molecular optimization. The average medicinal chemist can be overwhelmed by the ‘analysis fatigue’ that this plethora of new and contradictory tools, rules and visualizations now provide, especially when combined with the increasing number of safety, off-target, physicochemical property and ADME data acquired during optimization efforts. Decision making is impeded when evaluating information that is wrong or excessive and thus should be limited to the absolute minimum and most relevant available.
As Lazebnik described, sometimes the more facts we learn, the less we understand.
And he discards quite a few. All the equations that involve taking the log of potency and dividing by the heavy atom count (HAC), etc., are playing rather loose with the math:
To be valid, LE must remain constant for each heavy atom that changes potency 10-fold. This is not the case as a 15 HAC compound with a pIC50 of 3 does not have the same LE as a 16 HAC compound with a pIC50 of 4 (ΔpIC50 = 1, ΔHAC = 1, ΔLE = 0.07). A 10-fold change in potency per heavy atom does not result in constant LE as defined by Hopkins, nor will it result in constant SILE, FQ or LLEAT values. These metrics do not mathematically normalize size or potency because they violate the quotient rule of logarithms. To obey this rule and be a valid mathematical function, HAC would be subtracted from pIC50 and rendered independent of size and reference potency.
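That ΔLE = 0.07 falls straight out of Hopkins' definition, LE ≈ 1.37 × pIC50 / HAC (1.37 kcal/mol per log unit of potency at roughly 298 K):

```python
# Ligand efficiency per the Hopkins definition: binding energy per heavy
# atom, approximated as LE = 1.37 * pIC50 / HAC (kcal/mol per heavy atom).

def ligand_efficiency(pic50, hac):
    return 1.37 * pic50 / hac

le_15 = ligand_efficiency(3, 15)     # 0.274
le_16 = ligand_efficiency(4, 16)     # 0.3425
print(round(le_16 - le_15, 2))       # 0.07 -- the non-constancy Shultz flags
```

So two compounds that differ by exactly one heavy atom and exactly one log unit of potency do not have the same LE, which is the mathematical objection in the quoted passage.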
Note that he's not recommending that last operation as a guideline, either. Another conceptual problem with plain heavy atom counting is that it treats all atoms the same, but that's clearly an oversimplification. But dividing by some form of molecular weight is an oversimplification, too: a nitrogen differs from an oxygen by a lot more than that 1 mass unit. (This topic came up here a little while back). But oversimplified or not - heck, mathematically valid or not - the question is whether these things help out enough when used as metrics in the real world. And Shultz would argue that they don't. Keeping LE the same (or even raising it) is supposed to be the sign of a successful optimization, but in practice, LE usually degrades. His take on this is that "Since lower ligand efficiency is indicative of both higher and lower probabilities of success (two mutually exclusive states) LE can be invalidated by not correlating with successful optimization."
I think that's too much of a leap - because successful drug programs have had their LE go down during the process, that doesn't mean that this was a necessary condition, or that they should have been aiming for that. Perhaps things would have been even better if they hadn't gone down (although I realize that arguing from things that didn't happen doesn't have much logical force). Try looking at it this way: a large number of successful drug programs have had someone high up in management trying to kill them along the way, as have (obviously) most of the unsuccessful ones. That would mean that upper management decisions to kill a program are also indicative of both higher and lower probabilities of success, and can thus be invalidated, too. Actually, he might be on to something there.
Shultz, though, finds that he's not able to invalidate LipE (or LLE), variously known as ligand-lipophilicity efficiency or lipophilic ligand efficiency. That's p(IC50) - logP, which at least follows the way that logarithms of quotients are supposed to work. And it also has been shown to improve during known drug optimization campaigns. The paper has a thought experiment, on some hypothetical compounds, as well as some data from a tankyrase inhibitor series that seem to show that LipE behaves more rationally than other metrics (which sometimes start pointing in opposite directions).
I found the chart below to be quite interesting. It uses the cLogP data from Paul Leeson and Brian Springthorpe's original LLE paper (linked in the above paragraph) to show what change in potency you would expect when you change a hydrogen in your molecule to one of the groups shown if you're going to maintain a constant LipE value. So while hydrophobic groups tend to make things more potent, this puts a number on it. A t-butyl, for example, should make things about 50-fold more potent if it's going to pull its weight as a ball of grease. (Note that we're not talking about effects on PK and tox here, just sheer potency - if you play this game, though, you'd better be prepared to keep an eye on things downstream).
On the other end of the scale, a methoxy should, in theory, cut your potency roughly in half. If it doesn't, that's a good sign. A morpholine should be three or four times worse, and if it isn't, then it's found something at least marginally useful to do in your compound's binding site. What we're measuring here is the partitioning between your compound wanting to be in solution, and wanting to be in the binding site. More specifically, since logP is in the equation, we're looking at the difference in the partitioning of your compound between octanol and water, versus its partitioning between the target protein and water. I think we can all agree that we'd rather have compounds that bind because they like something about the active site, rather than just fleeing the solution phase.
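The logic of that chart is simple once you write it down: holding LipE constant means ΔpIC50 must equal ΔlogP, so the expected potency change from a substituent is 10^ΔclogP. The fragment ΔclogP values below are rough, illustrative contributions chosen to match the fold-changes discussed above, not numbers from the Leeson and Springthorpe paper itself:

```python
# Under constant LipE (= pIC50 - logP), a substituent has to "pay for" the
# lipophilicity it adds: expected fold-change in potency = 10**(delta clogP).
# The delta-clogP values here are rough illustrative fragment contributions,
# not taken from the original chart.

def expected_fold_change(delta_clogp):
    return 10 ** delta_clogp

print(round(expected_fold_change(1.7)))       # t-butyl: ~50-fold more potent
print(round(expected_fold_change(-0.3), 2))   # methoxy: potency roughly halved
```

Any group that beats its predicted fold-change has raised LipE, which is the sign that it's doing something specific in the binding site rather than just escaping water.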
So in light of this paper, I'm rethinking my ligand-efficiency metrics. I'm still grappling with how LipE performs down at the fragment end of the molecular weight scale, and would be glad to hear thoughts on that. But Shultz's paper, if it can get us to toss out a lot of the proposed metrics already in the literature, will have done us all a service.
Category: Drug Assays | Drug Development | In Silico | Pharmacokinetics
August 21, 2013
This is just what people working in R&D at Merck don't want to see. According to FiercePharma, a prominent analyst is urging the company to get its finances in line with its competitors. . .by cutting R&D.
Seamus Fernandez at Leerink Swann says that Merck should reduce their expenditures in that area by around a billion dollars, which is at least 8 times deeper than the new R&D head, Roger Perlmutter, has talked about. Here's the whole analysis, which includes this:
We believe a major restructuring at MRK is necessary; movement here likely would be well-received. As pressure builds on MRK mgmt to: (1) improve R&D productivity, (2) maintain top-tier operating margins, and (3) continue returning cash to shareholders, we believe a deep restructuring should be seriously considered in light of the relatively lackluster 2013 top-line performance, disappointing Ph III/ registrational pipeline evolution (odanacatib, suvorexant, Bridion U.S.), and overall industry challenges. We estimate that every $1B reduction of operating expenses would add $0.25/share to MRK's bottom line, and would bring MRK's absolute R&D spend closer to PFE's (MP) ~$6.5B but still be in line with several of its diversified competitors' spend at ~14% of sales. A 10% cut in overall operating expenses would equate to ~$2B of annual cost reductions.
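The $0.25-per-share figure is easy to sanity-check. The share count and tax rate below are round-number assumptions on my part, not figures from the analyst note:

```python
# Back-of-the-envelope check on "every $1B reduction of operating expenses
# would add $0.25/share". Assumes ~3.0 billion Merck shares outstanding and
# a ~25% effective tax rate -- both assumptions, not figures from the note.
cost_cut = 1.0e9
tax_rate = 0.25
shares = 3.0e9
eps_gain = cost_cut * (1 - tax_rate) / shares
print(round(eps_gain, 2))    # 0.25
```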
If you read the rest, you'll see that the reasons Fernandez has for optimism are all on the financial side of the company: how much cash the company has on hand, its opportunities to do things like sell off animal health, sell off consumer care, and of course its opportunities to cut costs. There's absolutely nothing in there about the company doing better because of anything that's coming along in the pipeline. No, all that drug stuff is in the negative category: doubts about the big IMPROVE-IT trial in cardiovascular, competition for the existing drugs, regulatory uncertainties, and so on. Nothing but trouble.
At this point, it would be easy for me to get up on the lab bench and make a rabble-rousing speech about how short-sighted all this is, how Merck is a research-driven company, not some sort of bank or insurance operation, and so on. I'm tempted. But these points, while definitely not invalid, don't address whether Fernandez might be right about Merck's current situation. He knows as well as anyone that the only reason Merck got to be this size is by discovering and selling valuable medicines, and he knows that this is still the company's core business. Those ideas about selling off animal health and consumer products? Those are supposed to bring in more money to discover drugs. If Merck doesn't do that, they're toast; that's the engine of the whole company.
OK, so why cut R&D if that's the whole reason the company exists? Here's where we get down to it. Fernandez's take is that Merck is spending too much and getting too little back for it. He's not suggesting chopping most of the R&D department to make the bottom line suddenly bloom (for a while). This is more of a gas-mileage problem. In this view, Merck's engine is R&D, for sure, but that engine is burning too much fuel (money) while covering too little distance in the process. To stick with the engine analogy, does it really need twelve cylinders? Does it have to be as heavy and humungous as it is? After all, others are burning similar amounts of fuel (or less) and making more progress.
What Fernandez is saying to Roger Perlmutter, as I see it, is "Throw us a bone, Roger. Show that you seriously realize that things have been going wrong at Merck, and that you understand that the company's gotten in the habit of spending too much money. Show us in the only way that you can, because just telling us that you're going to do things better and smarter isn't enough. Everyone says that, no matter how dumb they are. Even if you really can follow through on that better/smarter stuff, no one will see the results of it for years. Show us something that we can see happening right now."
The question then is whether this sort of cutting and re-engineering can be done without disrupting Merck's R&D even more, and now that is a tough question, and I'm glad I'm not on the hook to provide an answer. The problem is, a company can cut back like this, but still keep the same inefficiencies and bad processes that got it into trouble in the first place. That's what happens when a company lops off a whole division: "We still suck, but now we suck on a smaller scale". It's doing the same not-so-good stuff as it always did, but in fewer areas, and might therefore be even less likely to make anything of it. A company can also cut back in ways that might, objectively, be the right thing to do, but nonetheless end up disorganizing and demoralizing the remaining workers so much that things end up worse than before in that way, too.
These are the downside risks of taking the cut-back-your-expenditures advice, and they're very real. Not taking the advice has real risks, too, naturally. Running a company that size, or its R&D department, is not a low-pressure job with easy decisions. We'll see which way Perlmutter goes, and how he makes his case. Keep in mind, too, that these issues do not apply only to Merck. Not at all.
Category: Business and Markets
August 20, 2013
Here's a paper that asks whether GPCRs are still a source of new targets. As you might guess, the answer is "Yes, indeed". (Here's a background post on this area from a few years ago, and here's my most recent look at the area).
It's been a famously productive field, but the distribution is pretty skewed:
From a total of 1479 underlying targets for the action of 1663 drugs, 109 (7%) were GPCRs or GPCR related (e.g., receptor-activity modifying proteins or RAMPs). This immediately reveals an issue: 26% of drugs target GPCRs, but they account for only 7% of the underlying targets. The results are heavily skewed by certain receptors that have far more than their “fair share” of drugs. The most commonly targeted receptors are as follows: histamine H1 (77 occurrences), α1A adrenergic (73), muscarinic M1 (72), dopamine D2 (62), muscarinic M2 (60), 5HT2a (59), α2A adrenergic (56), and muscarinic M3 (55)—notably, these are all aminergic GPCRs. Even the calculation that the available drugs exert their effects via 109 GPCR or GPCR-related targets is almost certainly an overestimate since it includes a fair proportion where there are only a very small number of active agents, and they all have a pharmacological action that is “unknown”; in truth, we have probably yet to discover an agent with a compelling activity at the target in question, let alone one with exactly the right pharmacology and appropriately tuned pharmacokinetics (PK), pharmacodynamics (PD), and selectivity to give clinical efficacy for our disease of choice. A prime example of this would be the eight metabotropic (mGluR) receptors, many of which have only been “drugged” according to this analysis due to the availability of the endogenous ligand (L-glutamic acid) as an approved nutraceutical. There are also a considerable number of targets for which the only known agents are peptides, rather than small molecules. . .
Of course, since we're dealing with cell-surface receptors, peptides (and full-sized proteins) have a better shot at becoming drugs in this space.
Of the 437 drugs found to target GPCRs, 21 are classified as “biotech” (i.e., biopharmaceuticals) with the rest as “small molecules.” However, that definition seems rather generous given that the molecular weight (MW) of the “small molecules” extends as high as 1623. Using a fairly modest threshold of MW <600 suggests that ~387 are more truly small molecules and ~50 are non–small molecules, being roughly an 80:20 split. Pursuing the 20%, while not being novel targets/mechanisms, could still provide important new oral/small-molecule medications with the comfort of excellent existing clinical validation. . .
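The skew in those numbers is stark enough to be worth computing directly from the figures quoted above:

```python
# The GPCR skew, from the numbers in the quoted passages: 437 of 1663 drugs
# act through just 109 of 1479 underlying targets.
drugs_total, drugs_gpcr = 1663, 437
targets_total, targets_gpcr = 1479, 109

print(round(100 * drugs_gpcr / drugs_total))      # 26 -- % of drugs hitting GPCRs
print(round(100 * targets_gpcr / targets_total))  # 7  -- % of targets that are GPCRs
```

A quarter of approved drugs working through a fourteenth of the known target list: that's the over-concentration on the aminergic receptors in a nutshell.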
The paper goes on to mention many other possible modes for drug action - allosteric modulators, GPCR homo- and heterodimerization, other GPCR-protein interactions, inverse agonists and the like, alternative signaling pathways other than the canonical G-proteins, and more. It's safe to say that all this will keep us busy for a long time to come, although working up reliable assays for some of these things is no small matter.
Category: Biological News | Drug Assays
It's worth noting, on the business end of things, that we seem to be in a boom period for biotech/small pharma IPOs. I don't think anyone saw that coming, but these things take on momentum of their own. Hardly anyone went public for a few years once the financial crisis hit in 2007/2008. Then last year there were eleven new public companies, the most in quite a while. This year, though, there have been 29 (according to this piece in FierceBiotech), with eight of them since just the end of June.
That's pretty lively. And while some of this can be explained as a holdover from companies that would have gone public earlier, under less trying conditions, you'd have to think that we're getting near the bottom of the sack by now. Whatever gets pulled up at this point has a greater likelihood of having all kinds of stuff stuck to it, and it might not be in good enough condition for your portfolio to consume it. Soon we'll probably be in the part of the cycle where good companies, who would have happily launched themselves into the market a few months before, get whipsawed by a closing IPO window. If you think of a large flock of birds wheeling around in the sky, unable to quite decide which tree to land on, or whether to land at all, you'll have a pretty good mental picture of the market.
Category: Business and Markets
August 19, 2013
In the comments thread to this post, Munos has this to say:
Innovation cannot thrive upon law and order. Sooner or later, HR folks will need to come to grips with this. Innovators (the real ones) are rebels at heart. They are not interested in growing and nurturing existing markets because they want to obliterate and replace them with something better. They don't want competitive advantage from greater efficiency, because they want to change the game. They don't want to optimize, they want to disrupt and dominate the new markets they are creating. The most damaging legacy of the process-minded CEOs who brought us the innovation crisis has been to purge disrupters from the ranks of pharma. Yes, they are tough to manage, but every innovative company needs them, and must create a climate that allows them to thrive. . .
I wanted to bring that up to the front page, because I enjoy hearing things like this, and I hope that they're true.
+ TrackBacks (0) | Category: Who Discovers and Why
A reader sends along this account of some speakers at last year's investment symposium from Agora Financial. One of the speakers was Juan Enriquez, and I thought that readers here might be interested in his perspective.
First, the facts. According to Enriquez:
Today, it costs 100,000 times less than it once did to create a three-dimensional map of a disease-causing protein
There are about 300 times more of these disease proteins in databases now than in times past
The number of drug-like chemicals per researcher has increased 800 times
The cost to test a drug versus a protein has decreased ten-fold
The technology to conduct these tests has gotten much quicker
Now here’s Enriquez’s simple question:
"Given all these advances, why haven’t we cured cancer yet? Why haven’t we cured Alzheimer’s? Why haven’t we cured Parkinson’s?"
The answer likely lies in the bloated process and downright hostile-to-innovation climate for FDA drug approvals in this day and age...
According to Enriquez, this climate has gotten so bad that major pharmaceuticals companies have begun shifting their primary focus from R&D of new drugs to increased marketing of existing drugs — and mergers and acquisitions.
I have a problem with this point of view, assuming that it's been reported correctly. I'll interpret this as makes-a-good-speech exaggeration, but Enriquez himself has most certainly been around enough to realize that the advances that he speaks of are not, by themselves, enough to lead to a shower of new therapies. That's a theme that has come up on this site several times, as well it might. I continue to think that if you could climb in a time machine and go back to, say, 1980 with these kinds of numbers (genomes sequenced, genes annotated, proteins with solved structures, biochemical pathways identified, etc.), everyone would assume that we'd be further along, medically, than we really are by now. Surely that sort of detailed knowledge would have solved some of the major problems? More specifically, I become more sure every year that drug discovery groups of that era might be especially taken aback at how the new era of target-based molecular-biology-driven drug research has ended up working out: as a much harder proposition than many might have thought.
So it's a little disturbing to see the line taken above. In effect, it's saying that yes, all these advances should have been enough to release a flood of new therapies, which means that there must be something holding them back (in this case, apparently, the FDA). The thing is, the FDA probably has slowed things down - in fact, I'd say it almost certainly has. That's part of their job, insofar as the slowdowns are in the cause of safety.
And now we enter the arguing zone. On one side, you have the reductio ad absurdum argument that yes, we'd have a lot more things figured out if we could just go directly into humans with our drug candidates instead of into mice, so why don't we? (That's certainly true, as far as it goes. We would surely kill off a fair number of people doing things that way, as the price of progress, but more progress there would almost certainly be.) But no one - no one outside of North Korea, anyway - is seriously proposing this style of drug discovery. Someone who agrees with Enriquez's position would regard it as a ridiculous misperception of what they're calling for, designed to make them look stupid and heartless.
But I think that Enriquez's speech, as reported, is the ad absurdum in the other direction. The idea that the FDA is the whole problem is also an oversimplification. In most of these areas, the explosion of knowledge laid out above has not yet led to an explosion of understanding. You'd get the idea that there was this big region of unexplored stuff, and now we've pretty much explored it, so we should really be ready to get things done. But the reality, as I see it, is that there was this big region of unexplored stuff, and we set out to explore it, and found out that it was far bigger than we'd even dreamed. It's easy to get your scale of measurement wrong. It's quite similar to the way that humanity didn't realize just how large the Earth was, then how small it was compared to the solar system (and how off-center), and how non-special our sun was in the immensity of the galaxy, not to mention how many other galaxies there are and how far away they lie. Biology and biochemistry aren't quite on that scale of immensity, but they're plenty big enough.
Now, when I mentioned that we'd surely have killed off more people by doing drug research by the more direct routes, the reply is that we've been killing people off by moving too slowly as well. That's a valid argument. But under the current system, we choose to have people die passively, through mechanisms of disease that are already operating, while under the full-speed-ahead approaches, we might lower that number by instead killing off some others in a more active manner. It's typically human of us to choose the former strategy. The big questions are how many people would die in each category as we moved up and down the range between the two extremes, and what level of each casualty count we'd find "acceptable".
So while it's not crazy to say that we should be less risk-averse, I think it is silly to say that the FDA is the only (or even main) thing holding us back. I think that this has a tendency both to bring on unnecessary anger directed at the agency and to raise unfulfillable hopes in regards to what the industry can do in the near term. Neither of those seems useful to me.
Full disclosure - I've met Enriquez, three years ago at SciFoo. I'd be glad to give him a spot to amplify and extend his remarks if he'd like one.
+ TrackBacks (0) | Category: Drug Development | Drug Industry History | Regulatory Affairs
Here's a question that comes up once in a while in my e-mail. I've always worked for companies that are large enough to do all of their own high-throughput screening (with some exceptions for when we've tried out some technology that we don't have in-house, or not yet). But there are many smaller companies that contract out for some or all of their screening, and sometimes for some assay development as well beforehand.
So there are, naturally, plenty of third parties who will run screens for you, against their own compound collections or against something you bring them. A reader was just asking me if I had any favorites in this area myself, but I haven't had enough call to use these folks to have a useful opinion. So I think it would be worth hearing about experiences with these shops, good and bad. Keep the specific recommendations recent, if possible, but general advice and What Not to Do warnings are timeless. Any thoughts?
+ TrackBacks (0) | Category: Drug Assays
August 16, 2013
Structural biology needs no introduction for people doing drug discovery. This wasn't always so. Drugs were discovered back in the days when people used to argue about whether those "receptor" thingies were real objects (as opposed to useful conceptual shorthand), and before anyone had any idea of what an enzyme's active site might look like. And even today, there are targets, and whole classes of targets, for which we can't get enough structural information to help us out much.
But when you can get it, structure can be a wonderful thing. X-ray crystallography of proteins and protein-ligand complexes has revealed so much useful information that it's hard to know where to start. It's not a magic wand - you can't look at an empty binding site and just design something right at your desk that'll be a potent ligand right off the bat. And you can't look at a series of ligand-bound structures and say which one is the most potent, not in most situations, anyway. But you still learn things from X-ray structures that you could never have known otherwise.
It's not the only game in town, either. NMR structures are very useful, although the X-ray ones can be easier to get, especially in these days of automated synchrotron beamlines and powerful number-crunching. But what if your protein doesn't crystallize? And what if there are things happening in solution that you'd never pick up on from the crystallized form? You're not going to watch your protein rearrange into a new ligand-bound conformation with X-ray crystallography, that's for sure. No, even though NMR structures can be a pain to get, and have to be carefully interpreted, they'll also show you things you'd never have seen.
And there are more exotic methods. Earlier this summer, there was a startling report of a structure of the HIV surface proteins gp120 and gp41 obtained through cryogenic electron microscopy. This is a very important and very challenging field to work in. What you've got there is a membrane-bound protein-protein interaction, which is just the sort of thing that the other major structure-determination techniques can't handle well. At the same time, though, the number of important proteins involved in this sort of thing is almost beyond listing. Cryo-EM, since it observes the native proteins in their natural environment, without tags or stains, has a lot of potential, but it's been extremely hard to get the sort of resolution with it that's needed on such targets.
Joseph Sodroski's group at Harvard, longtime workers in this area, published their 6-angstrom-resolution structure of the protein complex in PNAS. But according to this new article in Science, the work has been an absolute lightning rod ever since it appeared. Many other structural biologists think that the paper is so flawed that it never should have seen print. No, I'm not exaggerating:
Several respected HIV/AIDS researchers are wowed by the work. But others—structural biologists in particular—assert that the paper is too good to be true and is more likely fantasy than fantastic. "That paper is complete rubbish," charges Richard Henderson, an electron microscopy pioneer at the MRC Laboratory of Molecular Biology in Cambridge, U.K. "It has no redeeming features whatsoever."
. . .Most of the structural biologists and HIV/AIDS researchers Science spoke with, including several reviewers, did not want to speak on the record because of their close relations with Sodroski or fear that they'd be seen as competitors griping—and some indeed are competitors. Two main criticisms emerged. Structural biologists are convinced that Sodroski's group, for technical reasons, could not have obtained a 6-Å resolution structure with the type of microscope they used. The second concern is even more disturbing: They solved the structure of a phantom molecule, not the trimer.
Cryo-EM is an art form. You have to freeze your samples in an aqueous system, but without making ice. The crystals of normal ice formation will do unsightly things to biological samples, on both the macro and micro levels, so you have to form "vitreous ice", a glassy amorphous form of frozen water, which is odd enough that until the 1980s many people considered it impossible. Once you've got your protein particles in this matrix, though, you can't just blast away at full power with your electron beam, because that will also tear things up. You have to take a huge number of runs at lower power, and analyze them through statistical techniques. The Sodroski HIV structure, for example, is the product of 670,000 single-particle images.
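To get a feel for why so many low-dose exposures are needed: averaging N independent noisy images of the same particle improves the signal-to-noise ratio by roughly the square root of N. Here's a minimal sketch of that scaling in NumPy - the "particle" is just a synthetic disk, not real micrograph data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "particle": a bright disk on a 32x32 field,
# standing in for one projection image of a protein.
y, x = np.mgrid[:32, :32]
signal = ((x - 16) ** 2 + (y - 16) ** 2 < 64).astype(float)

def snr_of_average(n_images, noise_sigma=5.0):
    """Average n noisy copies of the signal and return the empirical SNR."""
    stack = signal + rng.normal(0, noise_sigma, size=(n_images, 32, 32))
    avg = stack.mean(axis=0)
    residual_noise = avg - signal
    return signal.std() / residual_noise.std()

snr_1 = snr_of_average(1)
snr_100 = snr_of_average(100)

# 100x more images should give roughly a 10x better SNR
print(f"SNR with 1 image:    {snr_1:.3f}")
print(f"SNR with 100 images: {snr_100:.3f}")
```

With hundreds of thousands of particle images instead of a hundred, the same sqrt-of-N arithmetic is what lets a structure emerge from frames that are individually almost pure noise.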
But its critics say that it's also the product of wishful thinking:
The essential problem, they contend, is that Sodroski and Mao "aligned" their trimers to lower-resolution images published before, aiming to refine what was known. This is a popular cryo-EM technique but requires convincing evidence that the particles are there in the first place and rigorous tests to ensure that any improvements are real and not the result of simply finding a spurious agreement with random noise. "They should have done lots of controls that they didn't do," (Sriram) Subramaniam asserts. In an oft-cited experiment that aligns 1000 computer-generated images of white noise to a picture of Albert Einstein sticking out his tongue, the resulting image still clearly shows the famous physicist. "You get a beautiful picture of Albert Einstein out of nothing," Henderson says. "That's exactly what Sodroski and Mao have done. They've taken a previously published structure and put atoms in and gone down into a hole." Sodroski and Mao declined to address specific criticisms about their studies.
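That Einstein-from-noise effect is easy to reproduce computationally. Here's a hedged sketch - entirely synthetic, with a made-up template and pure-noise "images" rather than anything from a real cryo-EM workflow - showing how aligning random noise to a reference and averaging the best matches regenerates the reference:

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference "template", standing in for a previously published structure.
y, x = np.mgrid[:16, :16]
template = (np.abs(x - 8) + np.abs(y - 8) < 5).astype(float)
template -= template.mean()

# 1000 images of pure white noise: no particle in them at all.
noise = rng.normal(size=(1000, 16, 16))

def best_shift(img):
    """Shift the image (over a small search window) to maximize its
    correlation with the template, as naive alignment would."""
    best, best_score = img, -np.inf
    for dy in range(-3, 4):
        for dx in range(-3, 4):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            score = (shifted * template).sum()
            if score > best_score:
                best, best_score = shifted, score
    return best, best_score

# "Align" every noise image, keep the best-correlating half, average them.
aligned = [best_shift(img) for img in noise]
aligned.sort(key=lambda pair: -pair[1])
reconstruction = np.mean([img for img, _ in aligned[:500]], axis=0)

# The average of aligned pure noise correlates strongly with the template:
# the "structure" has been conjured out of nothing.
corr = np.corrcoef(reconstruction.ravel(), template.ravel())[0, 1]
print(f"Correlation of reconstruction with template: {corr:.2f}")
```

The selection-and-alignment step systematically picks out whatever random fluctuations happen to match the reference, which is exactly why the critics insist on controls before believing an alignment-refined map.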
Well, they decline to answer them in response to a news item in Science. They've indicated a willingness to take on all comers in the peer-reviewed literature, but otherwise, in print, they're doing the we-stand-by-our-results-no-comment thing. Sodroski himself, with his level of experience in the field, seems ready to defend this paper vigorously, but there seem to be plenty of others willing to attack. We'll have to see how this plays out in the coming months - I'll update as things develop.
+ TrackBacks (0) | Category: Analytical Chemistry | Biological News | In Silico | Infectious Diseases
August 15, 2013
I haven't written much about Mannkind recently. This has been a long, long, expensive saga to develop an inhaled-insulin delivery system (Afrezza), an idea that all by itself seems to have swallowed several billion dollars and never given anything back yet. (That link above will send you to some of the story, and this one will tell you something about the disastrous failure of the only inhaled insulin to reach the market so far).
In 2011, Mannkind looked as if they were circling the drain. But (as has been the case many times before), more money was heaved into what might still turn out to be an incinerator, and they kept going. Just in the last few days, they've released another batch of Phase III data, which looked positive. You can see from the year-to-date stock chart that people have been anticipating this, which might account for the way that MNKD hasn't exactly taken off on the news. The stock jumped at the open yesterday, then spent the rest of the day wandering down, and opened today right back where it was before the news came out.
People might be worried about possible effects on lung function, which show up in the data (FEV1 as well as a side effect of coughing). But there are potentially even bigger concerns in the numbers for HbA1c and fasting glucose. A closer look at the data shows that Mannkind's product may not have clearly established itself versus the injected-insulin competition. As that FiercePharma story says, this might not keep the product from being approved, but it could give it a rough time in the marketplace (and give Mannkind a rough time finding a big partner).
I wonder if there are any investors - other than Al Mann - who have stuck with this company all the way? If so, I wonder what effect that's had on their well-being? It has been a long, bizarre ride, and no one knows how many more curves and washed-out bridges might still be out there.
+ TrackBacks (0) | Category: Clinical Trials | Diabetes and Obesity
Here's a publication from Aileron Therapeutics on their stapled-peptide efforts against MDM2/p53 for cancer. (I wrote about that target here, so you can check out the links in that post for background). This compound (ATSP-7041) goes after both MDM2 and MDMX, activating the suppressed p53 pathway, and it seems to do a good job of it. The company's been talking about these results at conferences, but this is the official publication of all that data.
Stapled peptides as a class of potential drugs have been the subject of controversy, but this one is heading towards the clinic, by all accounts. There are several other compounds out there in the same MDM2 space, though, so it'll be interesting to see how they all fare in the real world. And it's also worth noting that a good number of the people on this PNAS paper may well have been let go by Aileron in the last few months. . .
+ TrackBacks (0) | Category: Cancer
A longtime reader sent along this article from the journal Technological Forecasting and Social Change, which I'll freely admit never having spent much time with before. It's from a team of European researchers, and it's titled "Big Pharma, little science? A bibliometric perspective on Big Pharma's R&D decline".
What they've done is examine the publication record for fifteen of the largest drug companies from 1995 to 2009. They start off by going into the reasons why this approach has to be done carefully, since publications from industrial labs are produced (and not produced) for a variety of different reasons. But in the end:
Given all these limitations, we conclude that the analysis of publications does not in itself reflect the dynamics of Big Pharma's R&D. However, at the high level of aggregation we conduct this study (based on about 10,000 publications per year in total, with around 150 to 1500 publications per firm annually) it does raise interesting questions on R&D trends and firm strategies which then can be discussed in light of complementary quantitative evidence such as the trends revealed in studies using a variety of other metrics such as patents, as well as statements made by firms in statutory filings and reports to investors.
So what did they find? In the 350 most-represented journals, publications from the big companies made up about 4% of the total content over those years (which comes out to over 10,000 papers). But this number has been dropping slightly, but steadily, over the period. There are now about 9% fewer publications from Big Pharma than there were at the beginning of the period. But this effect might largely be explained by mergers and acquisitions over the same period - in every case, the new firm seems to publish fewer papers than the old ones did as a whole.
And here are the subject categories where those papers get published. The green nodes are topics such as pharmacology and molecular biology, and the blue ones are organic chemistry, medicinal chemistry, etc. These account for the bulk of the papers, along with clinical medicine.
The number of authors per publication has been steadily increasing (in fact, even faster than the baseline for the journals as a whole), and the number of organizations per paper has been creeping up as well, also slightly faster than the baseline. The authors interpret this as an increase in collaboration in general, and note that it's even more pronounced in areas where Big Pharma's publication rate has grown from a small starting point, which (plausibly) they assign to bringing in outside expertise.
One striking result the paper picks up on is that the European labs have been in decline from a publication standpoint, but this seems to be mostly due to the UK, Switzerland, and France. Germany has held up better. Anyone who's been watching the industry since 1995 can assign names to the companies who have moved and closed certain research sites, which surely accounts for much of this effect. The influence of the US-based labs is clear:
Although in most of this analysis we adopt a Europe versus USA comparative perspective, a more careful analysis of the data reveals that European pharmaceutical companies are still remarkably national (or bi-national as a result of mergers in the case of AstraZeneca and Sanofi-Aventis). Outside their home countries, European firms have more publications from US-based labs than all their non-domestic European labs (i.e. Europe excluding the ‘home country’ of the firm). Such is the extent of the national base for collaborations that when co-authorships are mapped into organisational networks there are striking similarities to the natural geographic distribution of countries. . .with Big Pharma playing a notable role spanning the bibliometric equivalent of the ‘Atlantic’.
Here's one of the main conclusions from the trends the authors have picked up:
The move away from Open Science (sharing of knowledge through scientific conferences and publications) is compatible and consistent with the increasing importance of Open Innovation (increased sharing of knowledge — but not necessarily in the public domain). More specifically, Big Pharma is not merely retreating from publication activities but in doing so it is likely to substitute more general dissemination of research findings in publications for more exclusive direct sharing of knowledge with collaboration partners. Hence, the reduction in publication activities – next to R&D cuts and lab closures – is indicative of a shift in Big Pharma's knowledge sharing and dissemination strategies.
Putting this view in a broader historical perspective, one can interpret the retreat of Big Pharma from Open Science, as the recognition that science (unlike specific technological capabilities) was never a core competence of pharmaceutical firms and that publication activity required a lot of effort, often without generating the sort of value expected by shareholders. When there are alternative ways to share knowledge with partners, e.g. via Open Innovation agreements, these may be attractive. Indeed an associated benefit of this process may be that Big Pharma can shield itself from scrutiny in the public domain by shifting and distributing risk exposure to public research organisations and small biotech firms.
Whether the retreat from R&D and the focus on system integration are a desirable development depends on the belief in the capacities of Big Pharma to coordinate and integrate these activities for the public good. At this stage, one can only speculate. . .
+ TrackBacks (0) | Category: Academia (vs. Industry) | Drug Industry History | The Scientific Literature
August 14, 2013
So reports FiercePharma, quoting a story in the 21st Century Business Herald and the Shanghai Daily. A former Novartis sales rep says that she was "ordered" to bribe doctors to meet sales quotas. As Tracy Staton at Fierce puts it:
With Chinese authorities actively looking for any suggestion of corruption or bribery, we're likely to see more whistleblowers come forward and official investigations follow. Though no one wants to admit it, payments to doctors and hospitals have been commonplace in China for years. The BBC reported this week that bribes are "routinely paid" by big drugmakers in China, citing 5 pharma reps working in China. One of those reps, however, said such payments are "rare," and "only very few people" get money from pharma.
The government previously tolerated the practice--or encouraged it, even, by putting doctors on paltry salaries. Now, officials are targeting foreign drugmakers for it, perhaps to make examples of them, perhaps to twist their arms for lower prices. Probably both.
+ TrackBacks (0) | Category: Business and Markets | The Dark Side
In the spirit of this article about Regeneron, here's a profile in Forbes of the company's George Yancopoulos and Leonard Schleifer. There are several interesting things in there, such as these lessons from Roy Vagelos (when he became Regeneron's chairman after retiring from Merck):
Lesson one: Stop betting on drugs when you won’t have any clues they work until you finish clinical trials. (That ruled out expanding into neuroscience–and is one of the main reasons other companies are abandoning ailments like Alzheimer’s.) Lesson two: Stop focusing only on the early stages of drug discovery and ignoring the later stages of human testing. It’s not enough to get it perfect in a petri dish. Regeneron became focused on mitigating the two reasons that drugs fail: Either the biology of the targeted disease is not understood or the drug does something that isn’t expected and causes side effects.
They're not the only ones thinking this way, of course, but if you're not, you're likely to run into big (and expensive) trouble.
+ TrackBacks (0) | Category: Drug Development | Drug Industry History
The technique of using engineered T cells against cancerous cells may be about to explode even more than it has already. One of the hardest parts of getting this process scaled up has been the need to extract each patient's own T cells and reprogram them. But in a new report in Nature Biotechnology, a team at Sloan-Kettering shows that they can raise cells of this type from stem cells, which were themselves derived from T lymphocytes from another healthy donor. As The Scientist puts it:
Sadelain’s team isolated T cells from the peripheral blood of a healthy female donor and reprogrammed them into stem cells. The researchers then used disabled retroviruses to transfer to the stem cells the gene that codes for a chimeric antigen receptor (CAR) for the antigen CD19, a protein expressed by a different type of immune cell—B cells—that can turn malignant in some types of cancer, such as leukemia. The receptor for CD19 allows the T cells to track down and kill the rogue B cells. Finally, the researchers induced the CAR-modified stem cells to re-acquire many of their original T cell properties, and then replicated the cells 1,000-fold.
“By combining the CAR technology with the iPS technology, we can make T cells that recognize X, Y, or Z,” said Sadelain. “There’s flexibility here for redirecting their specificity towards anything that you want.”
You'll note the qualifications in that extract. The cells that are produced in this manner aren't quite the same as the ones you'd get by re-engineering a person's own T-cells. We may have to call them "T-like" cells or something, but in a mouse lymphoma model, they most certainly seem to do the job that you want them to. It's going to be harder to get these to the point of trying them out in humans, since they're a new variety of cell entirely, but (on the other hand) the patients you'd try this in are not long for this world and are, in many cases, understandably willing to try whatever might work.
Time to pull the camera back a bit. It's early yet, but these engineered T-cell approaches are very impressive. This work, if it holds up, will make them a great deal easier to implement. No doubt, at this moment, there are Great Specific Antigen Searches underway to see what other varieties of cancer might respond to this technique. And this, remember, is not the only immunological approach that's showing promise, although it must be the most dramatic.
So. . .we have to consider a real possibility that the whole cancer-therapy landscape could be reshaped over the next decade or two. Immunology has the potential to disrupt the whole field, which is fine by me, since it could certainly use some disruption, given the state of the art. Will we look back, though, and see an era where small-molecule therapies gave people an extra month here, an extra month there, followed by one where harnessing the immune system meant sweeping many forms of cancer off the board entirely? Speed the day, I'd say - but if you're working on those small-molecule therapies, you should keep up with these developments. It's not time to consider another line of research, not yet. But the chances of having to do this, at some point, are not zero. Not any more.
+ TrackBacks (0) | Category: Biological News | Cancer
If you haven't seen this, which goes into some very odd images from a paper in the ACS journal Nano Letters, then have a look. One's first impression is that this is a ridiculously crude Photoshop job, but an investigation appears to be underway to see if that's the case. . .
Update: the paper has now been withdrawn. The chemistry blogs get results!
+ TrackBacks (0) | Category: The Scientific Literature
August 13, 2013
I had a very interesting email the other day, and my reply to it started getting so long that I thought I'd just turn it into a blog post. Here's the question:
How long can we expect to keep finding new drugs?
By way of analogy, consider software development. In general, it's pretty hard to think of a computer-based task that you couldn't write a program to do, at least in principle. It may be expensive, or may be unreasonably slow, but physical possibility implies that a program exists to accomplish it.
Engineering is similar. If it's physically possible to do something, I can, in principle, build a machine to do it.
But it doesn't seem obvious that the same holds true for drug development. Something being physically possible (removing plaque from arteries, killing all cancerous cells, etc.) doesn't seem like it would guarantee that a drug will exist to accomplish it. No matter how much we'd like a drug for Alzheimer's, it's possible that there simply isn't one.
Is this accurate? Or is the language of chemistry expressive enough that if you can imagine a chemical solution to something, it (in principle) exists? (I don't really have a hard and fast definition of 'drug' here. Obviously all bets are off if your 'drug' is complicated enough to act like a living thing.)
And if it is accurate, what does that say about the long-term prospects for the drug industry? Is there any risk of "running out" of new drugs? Is drug discovery destined to be a stepping-stone until more advanced medical techniques are available?
That's an interesting philosophical point, and one that had never occurred to me in quite that way. I think that's because programming is much more of a branch of mathematics. If you've got a Universal Turing Machine and enough tape to run through it, then you can, in theory, run any program that ever could be run. And any process that can be broken down into handling ones and zeros can be the subject of a program, so the Church-Turing thesis would say that yes, you can calculate it.
But biochemistry is most definitely a different thing, and this is where a lot of people who come into it from the math/CS/engineering side run into trouble. There's a famous (infamous) essay called "Can A Biologist Fix A Radio" that illustrates the point well. The author actually has some good arguments, and some legitimate complaints about the way biochemistry/molecular biology has been approached. But I think that his thesis breaks down eventually, and I've been thinking on and off for years about just where that happens and how to explain what makes things go haywire. My best guess is algorithmic complexity. It's very hard to reduce the behavior of biochemical systems to mathematical formalism. The whole point of formal notation is to express things in the most compact and information-rich way possible, but trying to compress biochemistry in this manner doesn't give you much of an advantage, at least not in the ways we've tried to do it so far.
To get back to the question at hand, let's get philosophical. I'd say that at the most macro level, there are solutions to all the medical problems. After all, we have the example of people who don't have multiple sclerosis, who don't have malaria, who don't have diabetes or pancreatic cancer or what have you. We know that there are biochemical states where these things do not exist; the problem is then to get an individual patient's state back to that situation. Note that this argument does not apply to things like life extension, limb regeneration, and so on: we don't know if humans are capable of these things or not yet, even if there may be some good arguments to be made in their favor. But we know that there are human brains without Alzheimer's.
To move down a level from this, though, the next question is whether there are ways to put a patient's cells and organs back into a disease-free state. In some cases, I think that the answer has to be, for all practical purposes, "No". I tend to think that the later stages of Alzheimer's (for example) are in fact incurable. Neurons are dead and damaged, what was contained in them and in their arrangement is gone, and any repair system can only go so far. Too much information has been lost and too much entropy has been let in. I would like to be wrong about this, but I don't think I am.
But for less severe states and diseases, you can imagine various interventions - chemical, surgical, genetic - that could restore things. So the question here becomes whether there are drug-like solutions. The answer is tricky. If you look at a biochemical mechanism and can see that there's a particular pathway involving small molecules, then certainly, you can say that there could be a molecule to be found as a treatment, even if we haven't found it yet. But the first part of that last sentence has to be unpacked.
Take diabetes. Type I diabetes is proximately caused by lack of insulin, so the solution is to take insulin. And that works, although it's certainly not a cure, since you have to take insulin for the rest of your life, and it's impossible to take it in a way that perfectly mimics the way your body would administer it, etc. A cure would be to have working beta-cells again that respond just the way they're supposed to, and that's less likely to be achieved through a drug therapy. (Although you could imagine some small molecule that affects a certain class of stem cell, causing it to start the program to differentiate into a fully-formed beta cell, and so on). You'd also want to know why the original population of cells died in the first place, and how to keep that from happening again, which might also take you to some immunological and cell-cycle pathways that could be modulated by drug molecules. But all of these avenues might just as easily take you into genetically modified cloned cell lines and surgical implantation, too, rather than anything involving small-molecule chemistry.
Here's another level of complexity, then: insulin is certainly a drug, but it's not a small molecule of the kind I'd be making. Is there a small molecule that can replace it? You'd do very well with that indeed, but the answer (I think) is "probably not". If you look at the receptor proteins that insulin binds to, the recognition surfaces that are used are probably larger than small molecules can mimic. No one's ever found a small molecule insulin mimetic, and I don't think anyone is likely to. (On the other hand, if you're trying to disrupt a protein-protein interaction, you have more hope, although that's still an extremely difficult target. We can disrupt things a lot more easily than we can make them work). Even if you found a small-molecule-insulin, you'd be faced with the problem of dosing it appropriately, which is no small challenge for a tightly and continuously regulated system like that one. (It's no small challenge for administering insulin itself, either).
And even for mechanisms that do involve small-molecule signaling, like the G-protein coupled receptors, there are still things to worry about. Take schizophrenia. You can definitely see problems with neural systems in the brain when you study that disease, and these neurons respond to, among other things, small-molecule neurotransmitters that the body makes and uses itself - dopamine, serotonin, acetylcholine and others. There are a certain number of receptors for each of those, and although we don't have all the combinations yet, I could imagine, on a philosophical level, that we could eventually have selective drugs that are agonists, antagonists, partial agonists, inverse agonists, what have you at all the subtypes. We have quite a few of them now, for some of the families. And I can even imagine that we could eventually have most or all of the combinations: a molecule that's a dopamine D2 agonist and a muscarinic M4 antagonist, all in one, and so on and so on. That's a lot more of a stretch, to be honest, but I'll stipulate that it's possible.
So you have them all. Now, which ones do you give to help a schizophrenic? We don't know. We have guesses and theories, but most of them are surely wrong. Every biochemical theory about schizophrenia is either wrong or incomplete. We don't know what goes wrong, or why, or how, or what might be done to bend things back in the right direction. It might be that we're in the same area as Alzheimer's: perhaps once a person's brain has developed in such a way that it slips into schizophrenia, there is no way at all to rewire things, in the same way that we can't ungrow a tree in order to change the shape of its canopy. I've no idea, and we're going to know a lot more about the brain by the time we can answer that one.
So one problem with answering this question is that it's bounded not so much by chemistry as by biology. Lots and lots of biology, most of it unknown. But thinking in terms of sheer chemistry is interesting, too. Consider "The Library of Babel", the famous story by Jorge Luis Borges. It takes place in some sort of universe that is no more (and no less) than a vast library containing every possible book that can be produced with a 25-character set of letters and punctuation marks. This is, as a bit of reflection will show, a very, very large number, one large enough to contain everything that can possibly be written down. And all the slight variations. And all the misprints. And all the scrambled coded versions of everything, and so on and so on. (W. v. O. Quine extended this idea to binary coding, which brings you back to computability).
Now think about the universe of drug-like molecules. It is also very large, although it is absolutely insignificant compared to the terrifying Library of Babel. (It's worth noting that the Library contains all of the molecules that can ever exist, coded in SMILES strings - that thought just occurred to me at this very moment, and gives me the shivers). The universe of proteins works that way, too - an alphabet of twenty-odd letters for amino acids gives you the exact same situation as the Library, and if you imagine some hideous notation for coding in all the folding variants and post-translational modifications, all the proteins are written down as well.
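The scale comparison is easy to make concrete. Here's a back-of-the-envelope sketch using Borges's stated book format (410 pages, 40 lines per page, 80 characters per line, a 25-symbol alphabet) and the commonly quoted ~10^60 estimate for drug-like chemical space - that last figure is an outside assumption, not something from this post:

```python
import math

# Borges's specification: 410 pages x 40 lines x 80 characters per book,
# each character drawn from a 25-symbol alphabet.
chars_per_book = 410 * 40 * 80            # 1,312,000 characters per book

# The number of distinct books is 25^1,312,000; work in log10 to keep it sane.
log10_books = chars_per_book * math.log10(25)

# A commonly cited estimate of drug-like chemical space is ~10^60 molecules.
log10_druglike = 60

print(f"Library of Babel: ~10^{log10_books:,.0f} distinct books")
print(f"Drug-like space:  ~10^{log10_druglike} molecules")
```

The library comes out around 10^1,834,097 books, which is why even a universe of 10^60 molecules is "absolutely insignificant" next to it.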
These, then, encompass every chemical compound up to some arbitrary size, and the original question is, is this enough? Are there questions for which none of these words are the answer? That takes you into even colder and deeper philosophical waters. Wittgenstein (among many others) wondered the same thing about our own human languages, and seems to have decided that there are indeed things that cannot be expressed, and that this marks the boundary of philosophy itself. Famously, his Tractatus ends with the line "Wovon man nicht sprechen kann, darüber muss man schweigen": whereof we cannot speak, we must pass over in silence.
We're not at that point in the language of chemistry and pharmacology yet, and it's going to be a long, long time before we ever might be. Just the fact, though, that computability seems like such a more reasonable proposition in computer science than druggability does in biochemistry tells you a great deal about how different the two fields are.
Update: On the subject of computability, I'm not sure how I missed the chance to bring Gödel's Incompleteness Theorem into this, just to make it a complete stewpot of math and philosophy. But the comments to this post point out that even if you can write a program, you cannot be sure whether it will ever finish the calculation. This Halting Problem is one of the first things ever to be proved formally undecidable, and the issues it raises are very close to those explored by Gödel. But as I understand it, this is decidable for a machine with a finite amount of memory, running a deterministic program. The problem is, though, that it still might take longer than the expected lifetime of the universe to "halt", which leaves you, for, uh, practical purposes, in pretty much the same place as before. This is getting pretty far afield from questions of druggability, though. I think.
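That finite-memory claim can be sketched directly: a deterministic machine with finitely many possible states must either halt or eventually revisit a state, and a revisit means it loops forever. So watching for a repeated state decides halting - in at most as many steps as there are states, which is exactly where the lifetime-of-the-universe caveat comes in. A toy illustration (the two example "machines" are made up for the sketch):

```python
def halts(step, state, is_halted):
    """Decide halting for a deterministic, finite-state machine.

    step: function mapping a state to the next state.
    is_halted: predicate marking halting states.
    Works because revisiting any state guarantees an infinite loop.
    """
    seen = set()
    while not is_halted(state):
        if state in seen:      # state repeated: it will cycle forever
            return False
        seen.add(state)
        state = step(state)
    return True

# A 3-bit counter starting at 5 that halts when it wraps around to 0:
print(halts(lambda s: (s + 1) % 8, 5, lambda s: s == 0))  # True
# A machine that flips between states 0 and 1 and never reaches 2:
print(halts(lambda s: 1 - s, 0, lambda s: s == 2))        # False
```

For a machine with n bits of memory the `seen` set can grow to 2^n entries, which is the practical catch: decidable in principle, hopeless in the worst case.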
Category: Drug Development | Drug Industry History | In Silico
Now Sanofi is tangled up in trouble in China. The last few days have brought news of a wide-ranging investigation into payments to hospitals and medical workers, similar to what GlaxoSmithKline has been accused of.
And I don't have much reason to doubt either story, because (as this BBC story details) payments of this sort are rife. I would also note that, according to the AP, the Chinese government "is investigating production costs at 60 Chinese and foreign pharmaceutical manufacturers, according to state media, possibly as a prelude to revising state-imposed price caps on key medications."
A system where everyone is in violation of the law has a lot of advantages - if you're the government. Retribution, when it's needed, is always at hand, because all you have to do is threaten to enforce what's already on the books. And lest someone think that I'm just beating away at the Chinese situation, the same applies to the US (on what I hope is a lower level). Here's economist Tyler Cowen, from the Marginal Revolution blog, on that very subject:
Faced with the evidence of a non-intentional crime, most prosecutors, of course, would use their discretion and not threaten imprisonment. Evidence and discretion, however, are precisely the point. Today, no one is innocent and thus our freedom is maintained only by the high cost of evidence and the prosecutor’s discretion.
The GSK and Sanofi allegations are, of course, all about intentional acts. But prosecuting them is very much up to the discretion of the Chinese authorities. If they're trying to root out corruption in their health care system, more power to them, because that's a worthy cause. But if they're just putting the squeeze on people long enough to bargain with them, only to let things return to the status quo ante after concessions have been extracted, then I have another opinion. Cynically, that's just what I expect to happen. After all, one might need to charge these companies with bribery again at some point. The Chinese authorities - authorities in general, all over the world - are not in the habit of putting down useful weapons and walking away from them.
Category: Business and Markets | The Dark Side
August 12, 2013
I've referenced this Matthew Herper piece on the cost of drug development several times over the last few years. It's the one where he totaled up pharma company R&D expenditures (from their own financial statements) and then just divided that by the number of drugs produced. Crude, but effective - and what it said was that some companies were spending ridiculous, unsustainable amounts of money for what they were getting back.
Now he's updated his analysis, looking at a much longer list of companies (98 of them!) over the past ten years. Here's the list, in a separate post. Abbott is at the top, but that's misleading, since they spent R&D money on medical devices and the like, whose approvals don't show up in the denominator.
But that's not the case for #2, Sanofi: 6 drugs approved during that time span, at a cost, on their books, of ten billion dollars per drug. Then you have (as some of you will have guessed) AstraZeneca - four drugs at 9.5 billion per. Roche, Pfizer, Wyeth, Lilly, Bayer, Novartis and Takeda round out the top ten, and even by that point we're still looking at six billion a whack. One large company that stands out, though, is Bristol-Myers Squibb, coming in at #22, 3.3 billion per drug. The bottom part of the list is mostly smaller companies, often with one approval in the past ten years, and that one done reasonably cheaply. But three others that stand out as having spent significant amounts of money, while getting something back for it, are Genzyme, Shire, and Regeneron. Genzyme, of course, has now been subsumed in that blazing bonfire of R&D cash known as Sanofi, so that takes care of that.
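Herper's method really is nothing more than division, which is part of its appeal. A sketch with illustrative round numbers consistent with the figures quoted above - not his actual dataset, and the R&D totals and the BMS approval count here are assumptions made to match the per-drug figures:

```python
# Crude cost-per-drug arithmetic: total R&D spend over the period
# divided by the number of new drug approvals in that period.
# (rd_spend_billions, approvals) - illustrative numbers only.
companies = {
    "Sanofi":               (60.0, 6),   # -> ~$10B per drug
    "AstraZeneca":          (38.0, 4),   # -> ~$9.5B per drug
    "Bristol-Myers Squibb": (19.8, 6),   # -> ~$3.3B per drug
}

for name, (rd_billions, approvals) in companies.items():
    print(f"{name}: ${rd_billions / approvals:.1f}B per approved drug")
```

The crudeness is the point: whatever the accounting quibbles, dividing stated R&D spend by approvals puts a floor under the argument that some companies are paying unsustainable amounts per drug.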
Sixty-six of the 98 companies studied launched only one drug this decade. The costs borne by these companies can be taken as a rough estimate of what it takes to develop a single drug. The median cost per drug for these singletons was $350 million. But for companies that approve more drugs, the cost per drug goes up – way up – until it hits $5.5 billion for companies that have brought to market between eight and 13 medicines over a decade.
And he's right on target with the reason why: the one-approval companies on the list were, for the most part, lucky the first time out. They don't have failures on their books yet. But the larger organizations have had plenty of those to go along with the occasional successes. You can look at this situation more than one way - if the single-drug companies are an indicator of what it costs to get one drug discovered and approved, then the median figure is about $350 million. But keep in mind that these smaller companies tend to go after a different subset of potential drugs. They're a bit more likely to pick things with a shorter, more defined clinical path, even if there isn't as big a market at the end, in order to have a better story for their investors.
Looking at what a single successful drug costs, though, isn't a very good way to prepare for running a drug company. Remember, the only small companies on this list are the ones that have succeeded, and many, many more of them spent all their money on their one shot and didn't make it. That's what's reflected in the dollars-per-drug figures for the larger organizations, that and the various penalties for being a huge organization. As Herper says:
Size has a cost. The data support the idea that large companies may spend more per drug than small ones. Companies that spent more than $20 billion in R&D over the decade spent $6.3 billion per new drug, compared to $2.8 billion for those that had budgets of between $5 billion and $10 billion. Some CEOs, notably Christopher Viehbacher at Sanofi, have faced low R&D productivity in part by cutting the budget. This may make sense in light of this data. But it is worth noting that the bigger firms brought twice as many drugs to market. It still could be that the difference between these two groups is due to smaller companies not bearing the full financial weight of the risk of failure.
There are other factors that kick these numbers around a bit. As Herper points out, there's a tax advantage for R&D expenditures, so there's no incentive to under-report them (but there's also an IRS to keep you from going wild over-reporting them, too). And some of the small companies on the list picked up their successes by taking on failed programs from larger outfits, letting them spend a chunk of R&D cash on the drugs beforehand. But overall, the picture is just about as grim as you'd have figured, if not a good deal more so. Our best hope is that this is a snapshot of the past, and not a look into the future. Because we can't go on like this.
Category: Drug Development | Drug Industry History
The New York Times had a rather confusing story the other day about the PTEN gene, autism, and cancer. Unfortunately, it turned into a good example of how not to explain a subject like this, missing (or waiting far too long to introduce) a number of key concepts. Things like "one gene can be responsible for a lot of different things in a human phenotype", and "genes can have a lot of different mutations, which can also do different things", and "autism's genetic signature is complex and not well worked out, not least because it's such a wide-ranging diagnosis", and (perhaps most importantly) "people with autism are not doomed to get cancer".
Let me refer you to Emily Willingham at Forbes, who does a fine job of straightening things out here. I fear that what can happen at the Times (and other media outlets as well) is that when a reporter scrambles a science piece, there's no one else on the staff who's capable of noticing it. So it just runs as is.
Category: Cancer | The Central Nervous System
August 9, 2013
Here's an interview with Liu Xuebin, formerly of GlaxoSmithKline in China. That prospect should perk up the ears of anyone who's been following the company's various problems and scandals in that country.
Liu Xuebin recalls working 12-hour shifts and most weekends for months, under pressure to announce research results that would distinguish his GlaxoSmithKline Plc (GSK) lab in China as a force in multiple sclerosis research.
It paid off -- for a while. Nature Medicine published findings about a potential new MS treatment approach in January 2010 and months later Liu was promoted to associate director of Glaxo’s global center for neuro-inflammation research in Shanghai. Two months ago, his career unraveled. An internal review found data in the paper was misrepresented. Liu, 45, who stands by the study, was suspended from duty on June 8 and quit two days later.
Liu was the first author on the disputed paper, but he says that he stands by it, and opposed a retraction (only he and one other author, out of 18, did so). He had been at the NIH for several years before being hired back to Shanghai by Glaxo, which turned out to be something of a change:
“This was my first job in industry and there was a very different culture,” Liu said behind thick, rimless glasses and dressed in a short-sleeve checked shirt tucked neatly into his belted trousers. “I was also not experienced with compliance back then, and we didn’t pay enough attention to things such as recording of reports from our collaborators.”
There was also a culture in which Glaxo scientists were grouped into competitive teams, known as discovery performance units, which vied internally for funds every three years, he said. Those who failed to meet certain targets risked being disbanded.
What I find odd is Liu's emphasis on publishing, and publishing first. That seems like a very academic mindset - I have to tell you, over my time in industry, rarely have I ever felt a sense of urgency to publish my results in a journal. And even those exceptions have been for other reasons, usually the "If we're going to write this stuff up, now's the time" sort. Never have I felt that we were racing to get something into, say, Nature Medicine before someone else did. Getting something patented before someone else, into the clinic before someone else? Oh, yes indeed. But not into some journal.
But neither have I been part of a far-flung research site, on which a lot of money had been spent, trying to show that it was all worthwhile. Maybe that's the difference. Even so, if the results that the Shanghai group got were really important for an approach to multiple sclerosis therapy, that's all the more reason why the findings should have spoken for themselves inside the company (and been the subject of immediate further development, too). We don't have to get Nature Medicine (or whoever) to validate things for us: "Oh, wow, that stuff must be real, the journal accepted our paper". A company doesn't demonstrate that it finds something valuable by sending it out to a big-name journal, at least not at first: it does that by spending more time and money on the idea.
But Liu doesn't talk the way that I would expect in this article, and I feel sure that the Bloomberg reporter on this piece didn't pick up on it. There's no "We delivered a new MS program, we validated a whole new group of drug targets, we identified a high-profile clinical candidate that went immediately into development". That's how someone in drug R&D would put it. Not "We were racing to publish our results". It's all quite odd.
Category: Drug Development | The Dark Side
August 8, 2013
Chemistry Blog has more on the incident picked up first at ChemBark and noted here yesterday. This rapidly-becoming-famous case has the Supporting Information file of a paper published at Organometallics seemingly instructing a co-author to "make up" an elemental analysis to put in the manuscript.
Now the editor of the journal (John Gladysz of Texas A&M) has responded to Chemistry Blog as follows:
Wednesday 07 August
Dear Friends of Organometallics,
Chemical Abstracts alerted us to the statement you mention, which was overlooked during the peer review process, on Monday 05 August. At that time, the manuscript was pulled from the print publication queue. The author has explained to us that the statement pertains to a compound that was "downgraded" from something being isolated to a proposed intermediate. Hence, we have left the ASAP manuscript on the web for now. We are requiring that the author submit originals of the microanalysis data before putting the manuscript back in the print publication queue. Many readers have commented that the statement reflects poorly on the moral or ethical character of the author, but the broad "retribution" that some would seek is not our purview. As Editors, our "powers" are limited to appropriate precautionary measures involving future submissions by such authors to Organometallics, the details of which would be confidential (ACS Ethical Guidelines, http://pubs.acs.org/page/policy/ethics/index.html). Our decision to keep the supporting information on the web, at least for the time being, is one of transparency and honesty toward the chemical community. Other stakeholders can contemplate a fuller range of responses. Some unedited opinions from the community are available in the comments section of a blog posting: http://blog.chembark.com/2013/08/06/a-disturbing-note-in-a-recent-si-file/#comments
If you have any criticisms of the actions described above, please do not hesitate to share them with me. Thanks much for being a reader of Organometallics, and best wishes. . .
This is the first report of the corresponding author, Reto Dorta, responding about this issue (several other people have tried to contact him, with no apparent success). So much for the theory, advanced by several people in the comments section at ChemBark, that "make up" was being used in the British-English sense of "prepare". Gladysz's letter gets across his feelings about the matter pretty clearly, I'd say.
Category: The Dark Side | The Scientific Literature
Fragment-based screening comes up here fairly often (and if you're interested in the field, you should also have Practical Fragments on your reading list). One of the complaints both inside and outside the fragment world is that there are a lot of primary hits that fall into flat/aromatic chemical space (I know that those two don't overlap perfectly, but you know the sort of things I mean). The early fragment libraries were heavy in that sort of chemical matter, and the sort of collections you can buy still tend to be.
So people have talked about bringing in natural-product-like structures, and diversity-oriented-synthesis structures and other chemistries that make more three-dimensional systems. The commercial suppliers have been catching up with this trend, too, although some definitions of "three-dimensional" may not match yours. (Does a biphenyl derivative count, or is that what you're trying to get away from?)
The UK-based 3D Fragment Consortium has a paper out now in Drug Discovery Today that brings together a lot of references to work in this field. Even if you don't do fragment-based work, I think you'll find it interesting, because many of the same issues apply to larger molecules as well. How much return do you get for putting chiral centers into your molecules, on average? What about molecules with lots of saturated atoms that are still rather squashed and shapeless, versus ones full of aromatic carbons that carve out 3D space surprisingly well? Do different collections of these various molecular types really have differences in screening hit rates, and do these vary by the target class you're screening against? How much are properties (solubility, in particular) shifting these numbers around? And so on.
The consortium's site is worth checking out as well for more on their activities. One interesting bit of information is that the teams ended up crossing off over 90% of the commercially available fragments due to flat structures, which sounds about right. And that takes them where you'd expect it to:
We have concluded that bespoke synthesis, rather than expansion through acquisition of currently available commercial fragment-sized compounds is the most appropriate way to develop the library to attain the desired profile. . .The need to synthesise novel molecules that expand biologically relevant chemical space demonstrates the significant role that academic synthetic chemistry can have in facilitating target evaluation and generating the most appropriate start points for drug discovery programs. Several groups are devising new and innovative methodologies (i.e. methyl activation, cascade reactions and enzymatic functionalisation) and techniques (e.g. flow and photochemistry) that can be harnessed to facilitate expansion of drug discovery-relevant chemical space.
And as long as they stay away from the frequent hitters/PAINS, they should end up with a good collection. I look forward to future publications from the group to see how things work out!
Category: Analytical Chemistry | Chemical News | Drug Assays | In Silico
August 7, 2013
Bruce Booth (of Atlas Venture) has a provocative post up at Forbes on what he would do if he were the R&D head of a big drug company. He runs up his flag pretty quickly:
I don’t believe that we will cure the Pharma industry of its productivity ills through smarter “operational excellence” approaches. Tweaking the stage gates, subtly changing attrition curves, prioritizing projects more effectively, reinvigorating phenotypic screens, doing more of X and less of Y – these are all fine and good, and important levers, but they don’t hit the key issue – which is the ossified, risk-avoiding, “analysis-paralysis” culture of the modern Pharma R&D organization.
He notes that the big companies have all been experimenting with ways to get more new thinking and innovation into their R&D (alliances with academia, moving people to the magic environs of Cambridge (US or UK), and so on). But he's pretty skeptical about any of this working, because all of this tends to take place out on the edges. And what's in the middle? The big corporate campus, which he says "has become necrotic in many companies". What to do with it? He has several suggestions, but here's a big one. Instead of spending five or ten per cent of the R&D budget on out-there collaborations, why not, he says, go for broke:
Taken further, bringing the periphery right into the core is worth considering. This is just a thought experiment, and certainly difficult to do in practice, but imagine turning a 5000-person R&D campus into a vibrant biotech park. Disaggregate the research portfolio to create a couple dozen therapeutically-focused “biotech” firms, with their own CEOs, responsible for a 3-5 year plan and with a budget that maps to that plan. Each could have its own Board and internal/external advisors, and flexibility to engage free market service providers outside the biotech park. Invite new venture-backed biotechs and CROs to move into the newly rebranded biotech park, incentivized with free lab space, discounted leases, access to subsidized research capabilities, or even unencumbered matching grants. Put some of the new spin-outs from their direct academic initiatives into the mix. But don’t put strings on those new externally-derived companies like the typical Pharma incubator; these will constrain the growth of these new companies. Focus this big initiative on one simple benefit: strategic proximity to a different culture.
His second big recommendation is "Get the rest of the company out of research's way". And by that, he especially means the commercial part of the organization:
One immediate solution would be to kick Commercial input out of decision-making in Research. Or, more practically, at least reduce it dramatically. Let them know that Research will hand them high quality post-PoC Phase 3-ready programs addressing important medical needs. Remove the market research gates and project NPV assessment models from critical decision-making points. Ignore the commercially-defined “in” vs “out” disease states that limit Research teams’ degrees of freedom. Let the science and medicine guide early program identification and progress. . .If you don’t trust the intellect of your Research leaders, then replace them. But second-guessing, micro-managing, and over-analyzing doesn’t aid in the exploration of innovation.
His last suggestion is to shake up the Board of Directors, and whatever Scientific Advisory Board the company has:
Too often Pharma defaults to not engaging the outside because “they know their programs best” or for fear of sharing confidential information that might leak to its competition. Reality is the latter is the least of their worries, and I’ve yet to hear this as being a source of profound competitive intelligence leakage. A far worse outcome is unchallenged “group think” about the merits (or demerits) of a program and its development strategy. Importantly, I’m not talking about specific Key Opinion Leader engagement on projects, as most Pharma companies do this effectively already. I’m referring to a senior, strategic, experienced advisory function from true practitioners in the field to help the R&D leadership team get a fresh perspective.
This is part of the "get some outside thinking" that is the thrust of his whole article. I can certainly see where he's coming from, and I think that this sort of thing might be exactly what some companies need. But what are the odds of (a) their realizing that and (b) anything substantial being done about it? I'm not all that optimistic - and, to be sure, Booth's article also mentions that some of these ideas might well be unworkable in practice.
I think that's because there's another effect that all of Bruce's recommendations have: they decrease the power and influence of upper management. Break up your R&D department, let in outside thinking, get your people to strike out pursuing their own ideas. . .all of those cut into the duties of Senior Executive Vice Presidents of Strategic Portfolio Planning, you know. Those are the sorts of people who will have to sign off on such changes, or who will have a chance to block them or slow their implementation. You'll have to sneak up on them, and there might not be enough time to do that in some of the more critical cases.
Another problem is what the investors would do if you tried some of the more radical ideas. As the last part of the post points out, we have a real problem in this business with our relationship with Wall Street. The sorts of people who want quarter-by-quarter earnings forecasts would absolutely freak if you told them that you were tearing the company up into a pile of biotechs. (And by that, I mean tearing it up for real, not creating centers-of-innovation-excellence or whatever the latest re-org chart might call it). It's hard to think of a good way out of that one, too, for a large public company.
Now, there are people out there who have enough nerve and enough vision to try some things in this line, and once in a while you see it happen. But inertial forces are very strong indeed. With some organizations, it might be less work to just start over, rather than to spend all that effort tearing down the things you want to get rid of. For all I know, this is what (say) AstraZeneca has in mind with its shakeup and moving everyone to Cambridge. But what systems and attitudes are going to be packed up and moved over along with all the boxes of lab equipment?
Category: Drug Industry History | Who Discovers and Why
A reader sends this new literature citation along, from Organometallics. He directed my attention to the Supplementary Information file, page 12. And what do we find there?
. . .Solvent was then removed to leave a yellow residue in the vial, the remaining clear, yellow solution was concentrated to a volume of about 1ml, and diethyl ether was added in a dropwise manner to the stirred solution to precipitate a yellow solid. The vial was centrifuged so the supernatant solvent could be decanted off by Pasteur pipette. The yellow solid was washed twice more with ether and the dried completely under high vacuum to give 99mg (93% yield) of product.
Emma, please insert NMR data here! where are they? and for this compound, just make up an elemental analysis...
And don't forget to proofread the manuscript, either, while you're at it. Oops.
Update: I see that Chembark is on this one, and has gone as far as contacting the corresponding author, whose day has gotten quite a bit longer. . .
Category: Analytical Chemistry | The Scientific Literature
August 6, 2013
Some of you may enjoy See Arr Oh's "Open Letter to Biotech Recruiters". Or then again, you may find that you don't enjoy it one tiny bit. Either way, it's worth a read, and some thought.
Category: How To Get a Pharma Job
Here's a provocative post over at Chemjobber's blog, taking off from a letter to C&E News. James Collman (emeritus at Stanford) wrote in about a recent article on Chinese scientists returning to that country to take academic positions. He mentions, as a "widely known but seldom discussed" problem, that large research grants in China require an illegal kickback, in cash, to someone at the granting agency.
Having never applied for a grant in China, I have no testimony to offer here. Some readers may, though, be able to shed some light on this from their own experiences. I will say, however, that I do not find this unbelievable at all.
And for anyone who wants to pop up in the comments section and accuse me of blind anti-Chinese bias, the reason I find this plausible is because of the way politics worked back where I grew up in Arkansas. We had a number of officials in my part of the state whose career trajectories ended up with an encounter with federal prosecutors because of this same attitude. No substantial sum of money could change hands, these folks seemed to think, without some of it sticking to theirs along the way. Road and construction projects were particularly favored for this kind of thing, but it certainly didn't, and doesn't, stop there.
And lest someone pop up in the comments to accuse me of blind anti-Arkansas bias (which hasn't happened yet, although you never know), I adduce a long list of politicians and officials from other US states, with ex-governor Rod Blagojevich of Illinois coming to mind immediately. But one could just as easily reel off names from Rhode Island, Louisiana, Connecticut, Ohio, New York, Arizona, Massachusetts and many another state beside. The only differences between them, and between them and China (or between China and dozens and dozens of other countries), are how common this behavior might be, on what scale it is practiced, and how likely it is to be uncovered or punished. Differences in degree, in other words, not in kind.
And Chemjobber's commenters waste no time in mentioning the "overhead" system built into academic grants in the US. Universities have a standard rake that they take off the top of every grant that comes in, as most of you will know. Lest you think that it's the smaller and hungrier schools that do this the most, the overhead percentage is famously highest at some of the most prestigious places. This goes for administration (a roomy category), paying the salaries of faculty who have tenure but bring in no grants themselves, paying the salaries of entire departments who bring in precious little grant money themselves (because there's precious little to be given in their subjects), and so on. Not all of these are illegitimate uses, by any means, but I think a lot of people outside of academia might still be struck by how much money changes hands, and by what percentage of each hundred thousand that Professor X pulls in for research actually ends up going to Professor X's research. In their defense, though, I will say that these overhead arrangements in the US are made explicit, although they're not exactly advertised, and are used by the universities themselves, rather than quietly lining the pockets of someone at a granting agency.
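To make that percentage concrete, here's a quick sketch of how an indirect-cost rate splits a grant. The 60% rate below is purely illustrative (actual negotiated rates vary widely by institution), and the point is that the rate is applied on top of the direct research costs, so the overhead share of the total award is smaller than the rate itself:

```python
# Illustrative sketch of how US indirect-cost ("overhead") rates work.
# A negotiated rate is applied on top of direct research costs, so a
# 60% rate does NOT mean 60% of the award goes to overhead.

def grant_split(direct_costs, indirect_rate):
    """Return (total award, overhead dollars, overhead share of total)."""
    overhead = direct_costs * indirect_rate
    total = direct_costs + overhead
    return total, overhead, overhead / total

# Professor X budgets $100,000 of actual research; the university's
# hypothetical negotiated rate is 60% of direct costs.
total, overhead, share = grant_split(100_000, 0.60)
print(total)            # 160000.0 awarded in all
print(overhead)         # 60000.0 goes to the university
print(round(share, 3))  # 0.375 -> 37.5% of the total award is overhead
```

Run it the other way and you see why the rates draw attention: of each $160,000 that comes in under these assumed numbers, only $100,000 ends up as research.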
At any rate, if anyone knows more about these accusations concerning China, please let us know in the comments. And if anyone finds them unbelievable prima facie, let us know about that, too. That would be nearly as interesting.
Update: it's been pointed out to me that there are very specific regulations in the US about using overhead funds for salaries (and many other restrictions, besides). I take the point; I've never had to wade through that paperwork. But I wonder - if this money is going into some other (approved) pile at the University, does that not somehow, some way, follow on through the various budgetary piles to free up money for those other uses?
Category: The Dark Side
August 5, 2013
Here's more on the problems with non-reproducible results in the literature (see here for previous blog entries on this topic). Various reports over the last few years indicate that about half of the attention-getting papers can't actually be replicated by other research groups, and the NIH seems to be getting worried about that:
The growing problem is threatening the reputation of the US National Institutes of Health (NIH) based in Bethesda, Maryland, which funds many of the studies in question. Senior NIH officials are now considering adding requirements to grant applications to make experimental validations routine for certain types of science, such as the foundational work that leads to costly clinical trials. As the NIH pursues such top-down changes, one company is taking a bottom-up approach, targeting scientists directly to see if they are willing to verify their experiments. . .
. . .Last year, the NIH convened two workshops that examined the issue of reproducibility, and last October, the agency’s leaders and others published a call for higher standards in the reporting of animal studies in grant applications and journal publications. At a minimum, they wrote, studies should report on whether and how animals were randomized, whether investigators were blind to the treatment, how sample sizes were estimated and how data were handled.
The article says that the NIH is considering adding some sort of independent verification step for some studies - those that point towards clinical trials or new modes of treatment, most likely. Tying funding (or renewed funding) to that seems to make some people happy, and others, well:
The very idea of a validation requirement makes some scientists queasy. “It’s a disaster,” says Peter Sorger, a systems biologist at Harvard Medical School in Boston, Massachusetts. He says that frontier science often relies on ideas, tools and protocols that do not exist in run-of-the-mill labs, let alone in companies that have been contracted to perform verification. “It is unbelievably difficult to reproduce cutting-edge science,” he says.
But others say that independent validation is a must to counteract the pressure to publish positive results and the lack of incentives to publish negative ones. Iorns doubts that tougher reporting requirements will make any real impact, and thinks that it would be better to have regular validations of results, either through random audits or selecting the highest-profile papers.
I understand the point that Sorger is trying to make. Some of this stuff really is extremely tricky, even when it's real. But at some point, reproducibility has to be a feature of any new scientific discovery. Otherwise, well, we throw it aside, right? And I appreciate that there's often a lot of grunt work involved in getting some finicky, evanescent result to actually appear on command, but that's work that has to be done by someone before a discovery has value.
For new drug ideas, especially, those duties have traditionally landed on the biopharma companies themselves - you'll note that the majority of reports about trouble with reproducing papers come from inside the industry. And it's a lot of work to bring these things along to the point where they can hit their marks every time, biologically and chemically. Academic labs don't spend too much time trying to replicate each other's studies; they're too busy working on their own things. When a new technique catches on, it spreads from lab to lab, but target-type discoveries, something that leads to a potential human therapy, often end up in the hands of those of us who are hoping to be able to eventually sell them. We have a big interest in making sure they work.
Here's some of the grunt work that I was talking about:
On 30 July, Science Exchange launched a programme with reagent supplier antibodies-online.com, based in Aachen, Germany, to independently validate research antibodies. These are used, for example, to probe gene function in biomedical experiments, but their effects are notoriously variable. “Having a third party validate every batch would be a fabulous thing,” says Peter Park, a computational biologist at Harvard Medical School. He notes that the consortium behind ENCODE — a project aimed at identifying all the functional elements in the human genome — tested more than 200 antibodies targeting modifications to proteins called histones and found that more than 25% failed to target the advertised modification.
I have no trouble believing that. Checking antibodies, at least, is relatively straightforward, but that's because they're merely tools to find the things that point towards the things that might be new therapies. It's a good place to start, though. Note that in this case, too, there are commercial considerations at work, which do help to focus things and move them along. They're not the magic answer to everything, but market forces sure do have their place.
The big question, at all these levels, is who's going to do the follow-up work and who's going to pay for it. It's a question of incentives: venture capital firms want to be sure that they're launching a company whose big idea is real. The NIH wants to be sure that they're funding things that actually work and advance the state of knowledge. Drug companies want to be sure that the new ideas they want to work on are actually based in reality. From what I can see, the misalignment comes in the academic labs. It's not that researchers are indifferent to whether their new discoveries are real, of course - it's just that by the time all that's worked out, they may have moved on to something else, and it might all just get filed away as Just One Of Those Things. You know, cutting-edge science is hard to reproduce, just like that guy from Harvard was saying a few paragraphs ago.
So it would help, I think, to have some rewards for producing work that turned out to be solid enough to be replicated. That might slow down the rush to publish a little bit, to everyone's benefit.
Category: Academia (vs. Industry)
August 2, 2013
The Baran group at Scripps has a whopper of a total synthesis out in Science. They have a route to the natural product ingenol, which is isolated from a Euphorbia species, a genus that produces a lot of funky diterpenoids. A synthetic ester of the compound has recently been approved to treat actinic keratosis, a precancerous skin condition brought on by exposure to sunlight.
The synthesis is 14 steps long, but that certainly doesn't qualify it for the "whopper" designation that I used. There are far, far longer total syntheses in the literature, but as organic chemists are well aware, a longer synthesis is not a better one. The idea is to make a compound as quickly and elegantly as possible, and for a compound like ingenol, 14 steps is pretty darn quick.
I'll forgo the opportunity for chem-geekery on the details of the synthesis itself (here's a write-up at Chemistry World). It is, of course, a very nice approach to the compound, starting from the readily available natural product (+)-3-carene, which is a major fraction of turpentine. There's a pinacol rearrangement as a key step, and from this post at the Baran group blog, you can see that it was a beast. Most of 2012 seems to have been spent on that one reaction, and that's just what high-level total synthesis is like: you have to be prepared to spend months and months beating on reactions in every tiny, picky variation that you can imagine might help.
Let me speak metaphorically, for those outside the field or who have never had the experience. Total synthesis of a complex natural product is like. . .it's like assembling a huge balloon sculpture, all twists and turns, knots and bulges, only half of the balloons are rubber and half of them are made of blown glass. And you can't just reach in and grab the thing, either, and they don't give you any pliers or glue. What you get is a huge pile of miscellaneous stuff - bamboo poles, cricket bats, spiral-wound copper tubing, balsa-wood dowels, and several barrels of even more mixed-up junk: croquet balls, doughnuts, wadded-up aluminum foil, wobbly Frisbees, and so on.
The balloon sculpture is your molecule. The piles of junk are the available chemical methods you use to assemble it. Gradually, you work out that if you brace this part over here in a cradle of used fence posts, held together with turkey twine, you can poke this part over here into it in a way that makes it stick if you just use that right-angled metal doohickey to hold it from the right while you hit the top end of it with a thrown tennis ball at the right angle. Step by step, this is how you proceed. Some of the steps are pretty obvious, and work more or less the way you pictured them, using things that are on top of one of the junk piles. Others require you to rummage through the whole damn collection, whittling parts down and tying stuff together to assemble some tool that you don't have, maybe something that no one has ever made at all.
What I like most about this new synthesis is that it's done on a real scale. LEO Pharmaceuticals is the company that sells the ingenol gel, and they're interested in seeing if there's something better. That post from Baran's group shows people holding flasks with grams of material in them. Mind you, that's what you need to get all these reactions figured out; I can only imagine how much material they must have burned off trying to get some of these steps optimized. But now that it's worked out, real quantities of analogs can be produced. Everyone who does total synthesis talks about making analogs for testing, but the follow-through is sometimes lacking. This one looks like it'll be more robust. Congratulations to everyone involved - with any luck, you'll never have to do something like this again, unless it's by choice!
Update: here's more from Carmen Drahl at C&E News.
Category: Chemical News | Natural Products
August 1, 2013
This has just shown up in the comments section, so I wanted to note it out here on the front page. The New Jersey state workforce directory lists Merck as notifying them that they plan to eliminate up to 113 jobs in Kenilworth, with an effective date of October 1.
I don't know what the follow-through is on notices like this, or the legal consequences thereof. And there's certainly nothing in there about what sorts of reductions these might be. But it's worth noting that the company has at least filed the paperwork - has anyone in Kenilworth heard more?
Category: Business and Markets
Everyone in biomedical research is familiar with "knockout" mice, animals that have had a particular gene silenced during their development. This can be a powerful way of figuring out what that gene's product actually does, although there are always other factors at work. The biggest one is how other proteins and pathways can sometimes compensate for the loss, a process that often doesn't have a chance to kick in when you come right into an adult animal and block a pathway through other means. In some other cases, a gene knockout turns out to be embryonic-lethal, but can be tolerated in an adult animal, once some key development pathway has run its course.
There have been a lot of knockout mice over the years. Targeted genetic studies have described functions for thousands of mouse genes. But when you think about it, there have surely been many of these whose phenotypes have not really been noticed or studied in the right amount of detail. Effects can be subtle, and there's an awful lot to look for. That's the motivation behind the Sanger Institute Mouse Genetics Project, which has a new paper out here. They're part of the even larger International Mouse Phenotyping Consortium, which is co-ordinating efforts like this across several sites.
Update: here's an overview of the work being done. For generating knockout animals, you have the International Knockout Mouse Consortium at an international level - the International Mouse Phenotyping Consortium, mentioned above, is the phenotyping arm of the effort. In the US, the NIH-funded Knockout Mouse Project (KOMP) is a major effort, and in Europe you have the European Conditional Mouse Mutagenesis Program (EUCOMM), which has evolved into EUCOMMTOOLS. Then in Canada you have NorCOMM, and TIGM at Texas A&M.
I like the way that last link's abstract starts: "Nearly 10 years after the completion of the human genome project, and the report of a complete sequence of the mouse genome, it is salutary to reflect that we remain remarkably ignorant of the function of most genes in the mammalian genome." That's absolutely right, and these mouse efforts are an attempt to address that directly. The latest paper describes the viability of 489 mutants, and a more complete analysis of 250 of them - still only a tiny fraction of what's out there, but enough to give you a look behind the curtain.
29% of the mutants were lethal and 13% were subviable, producing only a fraction of the expected number of embryos. That's pretty much in line with earlier estimates, so that figure will probably hold up. As for fertility, a bit over 5% of the homozygous crosses were infertile - and in almost all cases, the trouble was in the males. (All the heterozygotes could produce offspring).
The full phenotypic analysis on the first 250 mutants is quite interesting (and can be found at the Sanger Mouse Portal site). Most of these are genes with some known function, but 34 of them have not had anything assigned to them until now. These animals were assessed through blood chemistry, gene expression profiling, dietary and infectious disease challenges, behavioral tests, necropsy and histopathology, etc. Among the most common changes were body weight and fat/lean ratios (mostly on the underweight side), but there were many others. (That body weight observation is, in most cases, almost certainly not a primary effect. Reproductive and musculoskeletal defects were the most common categories that were likely to be front-line problems).
What stands out is that the unassigned genes seemed to produce noticeable phenotypic changes at the same rate as the known ones, and that even the studied genes turned up effects that hadn't been realized. As the paper says, these results "reveal our collective inability to predict phenotypes based on sequence or expression pattern alone." About 35% of the mutants (of all kinds) showed no detectable phenotypic changes, so these are either nonessential genes or had phenotypes that escaped the screens. The team looked at heterozygotes in cases where the homozygotes were lethal or nearly so (90 lines so far), and haploinsufficiency (problems due to only one working copy of a gene) was a common effect, seen in over 40% of those mutants.
Genes with some closely related paralog were found to be less likely to be essential, but those producing a protein known to be part of a protein complex were more likely to be so. Both of those results make sense. But a big question is how well these results will translate to understanding of human disease, and that's still an open issue. Clearly, many things will be directly applicable, but some care will be needed:
The data set reported here includes 59 orthologs of known human disease genes. We compared our data with human disease features described in OMIM. Approximately half (27) of these mutants exhibited phenotypes that were broadly consistent with the human phenotype. However, many additional phenotypes were detected in the mouse mutants suggesting additional features that might also occur in patients that have hitherto not been reported. Interestingly, a large proportion of genes underlying recessive disorders in humans are homozygous lethal in mice (17 of 37 genes), possibly because the human mutations are not as disruptive as the mouse alleles.
As this work goes on, we're going to learn a lot about mammalian genetics that has been hidden. The search for similar effects in humans will be going on simultaneously, informed by the mouse results. Doing all this is going to keep a lot of people busy for a long time - but understanding what comes out is going to be an even longer-term occupation. Something to look forward to!
Category: Biological News