Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany as a Humboldt Fellow during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
I've had a few people send along this article, on the possible toxicological effects of the herbicide glyphosate, wondering what I make of it as a medicinal chemist. It's getting a lot of play in some venues, particularly the news-from-Mother-Nature outlets. After spending some time reading this paper over, and looking through the literature, I've come to a conclusion: it is, unfortunately, a load of crap.
The authors believe that glyphosate is responsible for pretty much every chronic illness in humans, and a list of such is recited several times during the course of the long, rambling manuscript. Their thesis is that the compound is an inhibitor of the metabolizing CYP enzymes, of the biosynthesis of aromatic amino acids by gut bacteria, and of sulfate transport. But the evidence given for these assertions, and their connection with disease, while it might look alarming and convincing to someone who has never done research or read a scientific paper, is a spiderweb of "might", "could", "is possibly", "associated with", and so on. The minute you look at the actual evidence, things disappear.
Here's an example - let's go right to the central thesis that glyphosate inhibits CYP enzymes in the liver. Here's a quote from the paper itself:
A study conducted in 1998 demonstrated that glyphosate inhibits cytochrome P450 enzymes in plants. CYP71s are a class of CYP enzymes which play a role in detoxification of benzene compounds. An inhibitory effect on CYP71B1l extracted from the plant, Thlaspi arvensae, was demonstrated through an experiment involving a reconstituted system containing E. coli bacterial membranes expressing a fusion protein of CYP71B fused with a cytochrome P450 reductase. The fusion protein was assayed for activity level in hydrolyzing a benzo(a)pyrene, in the presence of various concentrations of glyphosate. At 15 microM concentration of glyphosate, enzyme activity was reduced by a factor of four, and by 35 microM concentration enzyme activity was completely eliminated. The mechanism of inhibition involved binding of the nitrogen group in glyphosate to the haem pocket in the enzyme.
A more compelling study demonstrating an effect in mammals as well as in plants involved giving rats glyphosate intragastrically for two weeks. A decrease in the hepatic level of cytochrome P450 activity was observed. As we will see later, CYP enzymes play many important roles in the liver. It is plausible that glyphosate could serve as a source for carcinogenic nitrosamine exposure in humans, leading to hepatic carcinoma. N-nitrosylation of glyphosate occurs in soils treated with sodium nitrite, and plant uptake of the nitrosylated product has been demonstrated. Preneoplastic and neoplastic lesions in the liver of female Wistar rats exposed to carcinogenic nitrosamines showed reduced levels of several CYP enzymes involved with detoxification of xenobiotics, including NADPH-cytochrome P450 reductase and various glutathione transferases. Hence this becomes a plausible mechanism by which glyphosate might reduce the bioavailability of CYP enzymes in the liver.
Glyphosate is an organophosphate. Inhibition of CYP enzyme activity in human hepatic cells is a well-established property of organophosphates commonly used as pesticides. In , it was demonstrated that organophosphates upregulate the nuclear receptor, constitutive androstane receptor (CAR), a key regulator of CYP activity. This resulted in increased synthesis of CYP2 mRNA, which they proposed may be a compensation for inhibition of CYP enzyme activity by the toxin. CYP2 plays an important role in detoxifying xenobiotics.
Now, that presumably sounds extremely detailed and impressive if you don't know any toxicology. What you wouldn't know from reading through all of it is that their reference 121 actually tested glyphosate against human CYP enzymes. In fact, you wouldn't know that anyone has ever actually done such an experiment, because all the evidence adduced in the paper is indirect - this species does that, so humans might do this, and this might be that, because this other thing over here has been shown that it could be something else. But the direct evidence is available, and is not cited - in fact, it's explicitly ignored. Reference 121 showed that glyphosate was inactive against all human CYP isoforms except 2C9, where it had an IC50 of 3.7 micromolar. You would also not know from this new paper that there is no way that ingested glyphosate could possibly reach levels in humans sufficient to inhibit CYP2C9 at that potency.
I'm not going to spend more time demolishing every point this way; this one is representative. This paper is a tissue of assertions and allegations, a tendentious brief for the prosecution that never should have been published in such a form in any scientific journal. Ah, but it's published in the online journal Entropy, from the MDPI people. And what on earth does this subject have to do with entropy, you may well ask? The authors managed to work that into the abstract, saying that glyphosate's alleged effects are an example of "exogenous semiotic entropy". And what the hell is that, you may well ask? Why, it's a made-up phrase making its first appearance, that's what it is.
But really, all you need to know is that MDPI is the same family of "journals" that published the (in)famous Andrulis "Gyres are the key to everything!" paper. And then made all kinds of implausible noises about layers of peer review afterwards. No, this is one of the real problems with sleazy "open-access" journals. They give the whole idea of open-access publishing a black eye, and they open the floodgates to whatever ridiculous crap comes in, which then gets "peer reviewed" and "published" in an "actual scientific journal", where it can fool the credulous and mislead the uninformed.
I'm in Madison, Wisconsin, where I'll be giving the Organic Chemistry McElvain Seminar later on today. The title of my talk, which I'm not sure if I'll live up to or not, is "Medicinal Chemistry: Getting Old, Or Just Starting to Grow Up?". It's at 3:30 in the Seminar Hall, room 1315, if you're passing through (!)
There's been a lot of rumbling recently about the price of new cancer drugs (see this article for a very typical reaction). It's a topic that's come up around here many times, as would be only natural - scrolling back in this category will turn up a whole list of posts.
All this adds up to a giant pushback against the astronomical drug prices that are becoming commonplace. It seems that price tags of $100,000 or above are becoming the norm. Of 12 cancer drugs approved in 2012, 11 cost more than that. As more drugs are offered at that level and their sponsors get away with it, it seems to set a floor that emboldens drug companies to push the envelope. They are badly misjudging the brewing anger.
The industry’s standard defense has been to run warm-hearted stories about the wonders of biomedical innovation, and to point out that drugs represent only 10% of healthcare costs. Both arguments miss the point. Everyone loves biomedical innovation, but the industry’s annual output of 25 to 35 new drugs is a lousy return for its $135 billion R&D spending. . .
That's a real problem. We in the industry concentrate on our end of it, where we wonder how we can spend this much for our discovery efforts and survive. But there are several sides to the issue. From one angle, as long as we can jack up the prices high enough on what does get through, we can (in theory) stay in business. That's not going to happen. There are limits to what we can charge, and we're starting to bang up against them, in the way that a Martingale player at a roulette table learns why casinos have betting limits at the tables. It's not a fun barrier to bump into.
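That roulette analogy is worth making concrete. Here's a minimal toy simulation of my own (not anything from the article) of a Martingale bettor who doubles after every loss: the table limit, like a ceiling on prices, eventually makes the required bet impossible to place, and the accumulated losses are locked in.

```python
import random

def martingale(bankroll, base_bet, table_limit, p_win=18/37, rounds=1000):
    """Simulate a Martingale (double-after-each-loss) bettor on European
    roulette. Returns the final bankroll; the table limit is what
    ultimately breaks the doubling scheme."""
    bet = base_bet
    for _ in range(rounds):
        if bet > bankroll or bet > table_limit:
            break  # can't place the required doubled bet: losses locked in
        if random.random() < p_win:
            bankroll += bet
            bet = base_bet  # win recovers the streak; reset to the base bet
        else:
            bankroll -= bet
            bet *= 2       # loss: double and try again
    return bankroll
```

With a $1 base bet and a $100 table limit, seven straight losses cost $127 and the eighth bet would have to be $128 - over the limit, so the bettor simply eats the loss. That's the wall the industry is hitting with prices.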
And there's the problem Munos brings up, which is one that investors have been getting antsy about for some time: return on capital. The huge amounts of money going out the door are (at least in some cases) not sustainable. But we're not spending our money as if there were a problem:
Perhaps the mood would be different if the industry was a model of efficiency, but this is hardly the case. Examples of massive waste are on display everywhere: Pfizer wants to flatten a 750,000-square-foot facility in Groton, CT, and won’t entertain proposals for alternative uses. Lilly writes off over $100 million for a half-built insulin plant in Virginia, only to restart the project a few years later in Indiana. AstraZeneca shutters its R&D labs at Alderley Park and goes on to spend $500 million on a new facility in Cambridge.
Munos is right. We have enough trouble already without asking for more. Don't we?
That Lamar Smith proposal I wrote about earlier this morning can be summarized as "Why don't you people just work on the good stuff?" And I thought it might be a good time to link back to a personal experience I had with just that worldview. As you'll see from that story, all they wanted was for us to meet the goals that we put down on our research goals forms. I was told, face to face, that the idea was that this would make us put our efforts into the projects that were most likely to succeed. Who could object to that? Right?
But since we here in the drug industry are so focused on making money, y'know, you'd think that we would have even more incentives to make sure that we're only working on the things that are likely to pay off. And we can't do it. Committees vet proposals, managers look over progress reports, presentations are reviewed and data are sifted, all to that end, because picking the wrong project can sink you good and proper, while picking the right one can keep you going for years to come. But we fail all the time. A good 90% of the projects that make it into the clinic never make it out the other end, and the attrition even before getting into man is fierce indeed. We back the wrong horses for the best reasons available, and sometimes we back the right ones for reasons that end up evaporating along the way. This is the best we can do, the state of the art, and it's not very good at all.
And that's in applied research, with definite targets and endpoints in mind the whole way through. Now picture what it's like in the basic research end of things, which is where a lot of NSF and NIH money is (and should be) going. It is simply not possible to say where a lot of these things are going, and which ones will bear fruit. If you require everyone to sign forms saying that Yes, This Project Has Immediate Economic and National Security Impact, then the best you can hope for is to make everyone lie to you.
Update: a terrific point from the comments section: "(This) argument was often made when firms were reducing costs by shutting down particular pieces of R&D. The general idea was that the firm would stop doing the things that were unlikely to work, and focus more on the things that would work, and hence improve financial returns on R&D. This argument is implausible because successful R&D is wildly profitable. Financial returns are only dragged down by the things that don't work. Therefore, any company that could REALLY distinguish with any precision between winners and losers on a prospective basis should double or triple its R&D investment, and not cut it."
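The commenter's arithmetic can be spelled out in a few lines. These numbers are entirely invented for illustration, but they show the shape of the argument: if you could really pick winners prospectively, the rational move is to expand R&D, not shrink it.

```python
def portfolio_return(n_projects, cost_per_project, p_success, payoff_multiple):
    """Expected net return of an R&D portfolio in which each success pays
    back payoff_multiple times its own cost and each failure pays zero."""
    spend = n_projects * cost_per_project
    expected_payoff = n_projects * p_success * payoff_multiple * cost_per_project
    return expected_payoff - spend

# Made-up numbers: ten $100M projects, a 10% success rate, 15x payoff on a win
blind = portfolio_return(10, 100, 0.10, 15)   # fund everything
picked = portfolio_return(1, 100, 1.00, 15)   # perfect foresight: fund only the winner
```

On these (invented) figures, even the unselective portfolio nets +$500M, and perfect foresight nets +$1400M on a tenth of the spend. A company with that kind of foresight would be raising its R&D budget, which is exactly why the cost-cutting version of the claim doesn't hold together.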
This is a bad idea: Representative Lamar Smith (R-TX) is circulating a draft of a bill to change the way the National Science Foundation reviews grant applications. Science magazine obtained a copy of the current version, and it would require the NSF to certify that all research it funds is:
1) "…in the interests of the United States to advance the national health, prosperity, or welfare, and to secure the national defense by promoting the progress of science;
2) "… the finest quality, is groundbreaking, and answers questions or solves problems that are of utmost importance to society at large; and
3) "…not duplicative of other research projects being funded by the Foundation or other Federal science agencies."
If we could fund things this way, we would be living in a different world entirely. Research, though, does not and cannot follow these guidelines. A lot of stuff gets looked into that doesn't work out, and a lot of things that do work out don't look like they're ever going to be of much use for anything. We are not smart enough to put bets down on only the really important stuff up front - and by "we", I mean the entire scientific community, and the director of the NSF, and even Representative Lamar Smith.
Useless and even bizarre things get funded under the current system, of that I have no doubt. But telling everyone that all research has to be certified as good for something is silly grandstanding. What will happen is that people will rewrite their grant applications in order to make them look attractive under whatever rules apply - which, naturally, is how it's always worked. So I'm not saying that Rep. Smith's proposal would Destroy Science in America. That would take a lot more work. No, what I'm saying is that Rep. Smith's view of the world is flawed. He seems to believe that legislation of this sort is the answer to large, difficult problems (witness his work on the Stop Online Piracy Act). As such, he would seem to be exactly the sort of person that I wish could be barred from serving as an elected official.
If I were Lamar Smith, I would probably be thinking of a bill that I could introduce to that effect (the Stop Overreaching Legislators Act?). But I'm not the sort of person who thinks that the world can be fixed up by passing the right laws and signing the right papers. I'm more in line with Mark Twain, when he said that no one's life, liberty, or property was safe while the legislature was in session.
A couple of years back, I wrote about the egregious research fraud case of Diederik Stapel. Here's an extraordinary follow-up in the New York Times Magazine, which will give you the shivers. Here, try this part out:
In one experiment conducted with undergraduates recruited from his class, Stapel asked subjects to rate their individual attractiveness after they were flashed an image of either an attractive female face or a very unattractive one. The hypothesis was that subjects exposed to the attractive image would — through an automatic comparison — rate themselves as less attractive than subjects exposed to the other image.
The experiment — and others like it — didn’t give Stapel the desired results, he said. He had the choice of abandoning the work or redoing the experiment. But he had already spent a lot of time on the research and was convinced his hypothesis was valid. “I said — you know what, I am going to create the data set,” he told me. . .
. . .Doing the analysis, Stapel at first ended up getting a bigger difference between the two conditions than was ideal. He went back and tweaked the numbers again. It took a few hours of trial and error, spread out over a few days, to get the data just right.
He said he felt both terrible and relieved. The results were published in The Journal of Personality and Social Psychology in 2004. “I realized — hey, we can do this,” he told me.
And that's just what he did, for the next several years, leading to scores of publications and presentations on things he had just made up. In light of that Nature editorial statement I mentioned yesterday, this part seems worth thinking on:
. . . The field of psychology was indicted, too, with a finding that Stapel’s fraud went undetected for so long because of “a general culture of careless, selective and uncritical handling of research and data.” If Stapel was solely to blame for making stuff up, the report stated, his peers, journal editors and reviewers of the field’s top journals were to blame for letting him get away with it. The committees identified several practices as “sloppy science” — misuse of statistics, ignoring of data that do not conform to a desired hypothesis and the pursuit of a compelling story no matter how scientifically unsupported it may be.
The adjective “sloppy” seems charitable. . .
It may well be. The temptation to spice up the results is always there, in any branch of science, and it's our responsibility to resist it. That means not only resisting the opportunities to fool others, but also resisting fooling ourselves, because who would know better what we'd really like to hear? Reporting only the time the idea worked, not the other times when it didn't. Finding ways to explain away the data that would invalidate your hypothesis, while giving the shaky stuff in your favor the benefit of the doubt. N-of-1 experiments taken as facts. No, not many people will go as far as Diederik Stapel (or could, even if they wanted to - he was quite talented at fakery). Unfortunately, things go on all the time that differ from his case in degree, but not in kind.
I wanted to mention a project of Prof. Phil Baran of Scripps and his co-authors, Yoshihiro Ishihara and Ana Montero. It's called the Portable Chemist's Consultant, and it's available for iPads here. And here's a web-based look at its features. Baran was good enough to send me an evaluation copy, so I've had a chance to look through it in detail.
It's clearly based on his course in heterocyclic chemistry, and the chapters on pyridines and other heterocycles read like very well-thought-out review articles. But they also take advantage of the iPad's interface, in that specific transformations are shown in detail (with color and animation), and each of these can be expanded to a wider presentation and a thorough list of references (which are linked in their turn). The "Consumer Reports" style tables of recommended synthetic methods at the end of each section seem very useful, too, although they might need some notation for how much experimental support there is for each combination. For an overview of these topics, though, I doubt if anyone could do this better; I became a more literate heterocyclic chemist just by flipping through things. (Here's a video clip of some of these features in action).
So, do I have any reservations? A few. One of the bigger ones (which I'm told that Baran and his team are addressing) might sound trivial: I'm not sure about the title. As it stands, "The Portable Heterocyclic Chemistry Consultant" would be a much more accurate one, because there are large swaths of chemistry that fall within its current subtitle ("A Survival Guide for Discovery, Process, and Radiolabeling") which are not even touched on. For example, scale-up chemistry is mentioned on the cover, but in the current version of the book I didn't really see anything that was of particular relevance to actual scale-up work (things like the feasibility of solvent switching, heat transfer effects and reaction thermodynamics, run-to-run variability and potential purification methods, reagent sourcing, etc.) For medicinal chemists, I can say that the focus is completely on just the synthetic organic end of things; there's nothing on the behavior of any of the heterocyclic systems in vivo (pharmacokinetic trends, routes of metabolism, known toxicity problems, and so on). There's also nothing on spectral characterization, or any analytical chemistry of any sort, and I found no mention of radiolabeling (although I'd be glad to be corrected on that).
So for these reasons, it's a very academic work, but a very good one of its type. And Prof. Baran tells me that it's being revised constantly (at no charge to previous purchasers), and that these sorts of topics are in the works for later versions. If this book is indeed one of those gifts that keeps on giving, then it's a bargain as it stands, but (at the same time) I think that potential buyers should be aware of what they're getting in the current version.
My second reservation is technological. The book is only available on the iPad, and I'm not completely sure that this is a good idea. There's no way that it could be as useful in print, but a web-based interface would still be fine. (Managing ownership and sales is a lot easier in Apple's ecosystem, to be sure). And I'm not sure how many organic chemists own iPads yet. Baran himself seemed a bit surprised when he found out that I don't own one myself (I borrowed a colleague's to have a look). The most common reaction I've had when I tell people about the "PCC" is to say that they don't own an iPad, either, and to ask if there's any other way they can read it. Another problem is that the people who do have iPads certainly don't take them to the lab bench, which is where a work like this would be most useful. On the other hand, plain old computers are ubiquitous at the bench, thanks to electronic lab notebooks and the like.
All this said, though, if you do own an iPad and need to know about heterocyclic chemistry, you should have a look at this work immediately. If not, well, it's well worth keeping an eye on - these are early days.
Earlier this year, I wrote about a method to do NMR experiments at the cellular level or below. A new paper uses this same phenomenon (nitrogen-vacancy defects near the surface of diamond crystals) to do magnetic imaging of individual bacteria.
It's well known that many bacteria have "magnetosome" structures that allow them to sense and react to magnetic fields. If you let them wander over the surface of one of these altered diamond crystals, you can use the single-atom unpaired electrons as sensors. This team (several groups at Harvard and at Berkeley) were able to get sub-cellular resolution, and correlate that with real-time optical images of the bacteria (Magnetospirillum magneticum). It's very odd to see images of single bacteria with their field strengths looking like little bar magnets, but there they are. What we'll find by looking at magnetic fields inside individual cells, I have absolutely no idea, but I hope for all kinds of interesting and baffling things. I wonder what you'd get when mammalian cells take up magnetic nanoparticles, for example?
In other news, it's already late April, and things are already far enough along for me to talk about something on the blog as having happened "earlier this year". Sheesh.
This has to be a good thing. From the latest issue of Nature comes news of an initiative to generate more reproducible papers:
From next month, Nature and the Nature research journals will introduce editorial measures to address the problem by improving the consistency and quality of reporting in life-sciences articles. To ease the interpretation and improve the reliability of published results we will more systematically ensure that key methodological details are reported, and we will give more space to methods sections. We will examine statistics more closely and encourage authors to be transparent, for example by including their raw data. . .
. . .We recognize that there is no single way to conduct an experimental study. Exploratory investigations cannot be done with the same level of statistical rigour as hypothesis-testing studies. Few academic laboratories have the means to perform the level of validation required, for example, to translate a finding from the laboratory to the clinic. However, that should not stand in the way of a full report of how a study was designed, conducted and analysed that will allow reviewers and readers to adequately interpret and build on the results.
I hope that Science, the Cell journals at Elsevier, and other leading outlets for such results will follow through with something similar. In this time of online supplementary info and basically unlimited storage ability, there's no reason not to disclose as much information as possible in a scientific publication. And the emphasis on statistical rigor and possible sources of error is just what's needed as well. Let's see who follows suit first, and congratulate them. And let's see who fails to respond, and treat them appropriately, too.
A lot of people (and I'm one of them) have been throwing the word "epigenetic" around a lot. But what does it actually mean - or what is it supposed to mean? That's the subject of a despairing piece from Mark Ptashne of Sloan-Kettering in a recent PNAS. He noted this article in the journal, one of their "core concepts" series, and probably sat down that evening to write his rebuttal.
When we talk about the readout of genes - transcription - we are, he emphasizes, talking about processes that we have learned many details about. The RNA Polymerase II complex is very well conserved among living organisms, as well it should be, and its motions along strands of DNA have been shown to be very strongly affected by the presence and absence of protein transcription factors that bind to particular DNA regions. "All this is basic molecular biology, people", he does not quite say, although you can pick up the thought waves pretty clearly.
So far, so good. But here's where, conceptually, things start going into the ditch:
Patterns of gene expression underlying development can be very complex indeed. But the underlying mechanism by which, for example, a transcription activator activates transcription of a gene is well understood: only simple binding interactions are required. These binding interactions position the regulator near the gene to be regulated, and in a second binding reaction, the relevant enzymes, etc., are brought to the gene. The process is called recruitment. Two aspects are especially important in the current context: specificity and memory.
Specificity, naturally, is determined by the location of regulatory sequences within the genome. If you shuffle those around deliberately, you can make a variety of regulators work on a variety of genes in a mix-and-match fashion (and indeed, doing this is the daily bread of molecular biologists around the globe). As for memory, the point is that you have to keep recruiting the relevant enzymes if you want to keep transcribing; these aren't switches that flip on or off forever. And now we get to the bacon-burning part:
Curiously, the picture I have just sketched is absent from the Core Concepts article. Rather, it is said, chemical modifications to DNA (e.g., methylation) and to histones— the components of nucleosomes around which DNA is wrapped in higher organisms—drive gene regulation. This obviously cannot be true because the enzymes that impose such modifications lack the essential specificity: All nucleosomes, for example, “look alike,” and so these enzymes would have no way, on their own, of specifying which genes to regulate under any given set of conditions. . .
. . .Histone modifications are called “epigenetic” in the Core Concepts article, a word that for years has implied memory . . . This is odd: It is true that some of these modifications are involved in the process of transcription per se—facilitating removal and replacement of nucleosomes as the gene is transcribed, for example. And some are needed for certain forms of repression. But all attempts to show that such modifications are “copied along with the DNA,” as the article states, have, to my knowledge, failed. Just as transcription per se is not “remembered” without continual recruitment, so nucleosome modifications decay as enzymes remove them (the way phosphatases remove phosphates put in place on proteins by kinases), or as nucleosomes, which turn over rapidly compared with the duration of a cell cycle, are replaced. For example, it is simply not true that once put in place such modifications can, as stated in the Core Concepts article, “lock down forever” expression of a gene.
Now it does happen, Ptashne points out, that some developmental genes, once activated by a transcription factor, do seem to stay on for longer periods of time. But this takes place via feedback loops - the original gene, once activated, produces the transcription factor that causes another gene to be read off, and one of its products is actually the original transcription factor for the first gene, which then causes the second to be read off again, and so on, pinging back and forth. But "epigenetic" has been used in the past to imply memory, and modifying histones is not a process with enough memory in it, he says, to warrant the term. They are ". . .parts of a response, not a cause, and there is no convincing evidence they are self-perpetuating".
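Ptashne's distinction between a self-sustaining feedback loop and an unrenewed histone mark can be caricatured in a few lines of code. This is a toy model of my own, not anything from his article: two genes whose products keep activating each other hold their expression state indefinitely, while a modification that nothing re-copies simply fades away.

```python
def loop_vs_mark(steps=50, decay=0.8):
    """Toy contrast between the two kinds of 'memory'.
    Genes A and B form a positive feedback loop: each one's product is
    the transcription factor that turns the other on next step (the
    ping-pong Ptashne describes). Meanwhile a histone mark that no
    enzyme re-deposits decays by a constant factor per step, standing in
    for mark-removal and nucleosome turnover. Rates are invented."""
    a, b = 1, 0      # gene A starts 'on', gene B 'off'
    mark = 1.0       # histone modification, never renewed
    for _ in range(steps):
        a, b = b, a      # A's product activates B, B's product activates A
        mark *= decay    # the unrecruited mark just erodes
    return a + b, mark   # is the loop still expressing? how much mark is left?
```

After fifty steps the feedback loop is still firing (one of the pair is always on), while the mark has decayed to essentially nothing, which is the whole point: persistence lives in the recruitment circuitry, not in the modification itself.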
What we have here, as Strother Martin told us many years ago, is a failure to communicate. The biologists who have been using the word "epigenetic" in its original sense (which Ptashne and others would tell you is not only the original sense, but the accurate and true one), have seen its meaning abruptly hijacked. (The Wikipedia entry on epigenetics is actually quite good on this point, or at least it was this morning). A large crowd that previously paid little attention to these matters now uses "epigenetic" to mean "something that affects transcription by messing with histone proteins". And as if that weren't bad enough, articles like the one that set off this response have completed the circle of confusion by claiming that these changes are somehow equivalent to genetics itself, a parallel universe of permanent changes separate from the DNA sequence.
I sympathize with him. But I think that this battle is better fought on the second point than the first, because the first one may already be lost. There may already be too many people who think of "epigenetic" as meaning something to do with changes in expression via histones, nucleosomes, and general DNA unwinding/presentation factors. There really does need to be a word to describe that suite of effects, and this (for better or worse) now seems as if it might be it. But the second part, the assumption that these are necessarily permanent, instead of mostly being another layer of temporary transcriptional control, that does need to be straightened out, and I think that it might still be possible.
The University of Chicago Press has sent along a copy of a new book by DePaul professor Ted Anton, The Longevity Seekers. It's a history of the last thirty years or so of advances in understanding the biochemical pathways of aging. As you'd imagine, much of it focuses on sirtuins, but many other discoveries get put into context as well. There are also thoughts on what this whole story tells us about medical research, the uses of model animal systems, about the public's reaction to new discoveries, and what would happen if (or when) someone actually succeeds in lengthening human lifespan. (That last part is an under-thought topic among people doing research in the field, in my experience, at least in print).
Readers will be interested to note that Anton uses posts and comments on this blog as source material in some places, when he talks about the reaction in the scientific community to various twists and turns in the story. (You'll be relieved to hear that he's also directly interviewed almost all the major players in the field, as well!) If you're looking for a guide to how the longevity field got to where it is today and how everything fits together so far, this should get you up to speed.
Here's something that's been sort of a dream of medicinal chemists and pharmacologists, and now can begin to be realized: single-cell pharmacokinetics. For those outside the field, you should know that we spend a lot of time on our drug candidates, evaluating whether they're actually getting to where we want them to. And there's a lot to unpack in that statement: the compound (if it's an oral dose) has to get out of the gut and into the bloodstream, survive the versatile shredding machine of the liver (which is where all the blood from the gut goes first), and get out into the general circulation.
But all destinations are not equal. Tissues with greater blood flow are always going to see more of any compound, for starters. Compounds can (and often do) stick to various blood components preferentially (albumin, red blood cells themselves, etc.), and ride around that way, which can be beneficial, problematic, or a complete non-issue, depending on how the med-chem gods feel about you that week. The brain is famously protected from the riff-raff in the blood supply, so if you want to get into the CNS, you have more to think about. If your compound is rather greasy, it may find other things it likes to stick to rather than hang around in solution anywhere.
And we haven't even talked about the cellular level yet. Is your target on the outside of the cells, or do you have to get in? If you do, you might find your compounds being pumped right back out. There are ongoing nasty arguments about compounds being pumped in in the first place, too, as opposed to just soaking through the membranes. The inside of a cell is a strange place, too, once you're there. The various organelles and structures all have their own affinities for different sorts of compounds, and if you need to get into the mitochondria or the nucleus, you've got another membrane barrier to cross.
At this point, things really start to get fuzzy. It's only been in recent years that it's been possible to follow the traffic of individual species inside a cell, and it's still not trivial, by any means. Some of the techniques used to do it (fluorescent tags of various kinds) also can disturb the very systems you're trying to study. This latest paper uses such a fluorescent label, so you have to keep that in mind, but it's still quite impressive. The authors took a poly(ADP-ribose) polymerase 1 (PARP1) inhibitor (part of a class that has had all kinds of trouble in the clinic, despite a lot of biological rationale), attached a fluorescent tag, and watched in real time as it coursed through the vasculature of a tumor (on a time scale of seconds), soaked out into the extracellular space (minutes), and was taken up into the cells themselves (within an hour). Looking more deeply, they could see the compound accumulating in the nucleus (where PARP1 is located), so all indications are that it really does reach its target, and in sufficient amounts to have an effect.
But since it doesn't, there must be something about PARP1 and tumor biology that we're not quite grasping. Inhibiting DNA repair by this mechanism doesn't seem to be the death blow that we'd hoped for, but we now know that that's the place to figure out the failure of these inhibitors. Blaming some problems of delivery and distribution won't cut it.
Here's a fine piece from Matthew Herper over at Forbes on an IBM/Roche collaboration in gene sequencing. IBM had an interesting technology platform in the area, which they modestly called the "DNA transistor". For a while, it was going to be the Next Big Thing in the field (and the material at that last link was apparently written during that period). But sequencing is a very competitive area, with a lot of action in it these days, and, well. . .things haven't worked out.
Today Roche announced that they're pulling out of the collaboration, and Herper has some thoughts about what that tells us. His thoughts on the sequencing business are well worth a look, but I was particularly struck by this one:
Biotech is not tech. You’d think that when a company like IBM moves into a new field in biology, its vast technical expertise and innovativeness would give it an advantage. Sometimes, maybe, it does: with its supercomputer Watson, IBM actually does seem to be developing a technology that could change the way medicine is practiced, someday. But more often than not the opposite is true. Tech companies like IBM, Microsoft, and Google actually have dismal records of moving into medicine. Biology is simply not like semiconductors or software engineering, even when it involves semiconductors or software engineering.
And I'm not sure how much of the Watson business is hype, either, when it comes to biomedicine (a nonzero amount, at any rate). But Herper's point is an important one, and it's one that's been discussed many times on this site as well. This post is a good catch-all for them - it links back to the locus classicus of such thinking, the famous "Can A Biologist Fix a Radio?" article, as well as to more recent forays like Andy Grove (ex-Intel) and his call for drug discovery to be more like chip design. (Here's another post on these points).
One of the big mistakes that people make is in thinking that "technology" is a single category of transferrable expertise. That's closely tied to another big (and common) mistake, that of thinking that the progress in computing power and electronics in general is the way that all technological progress works. (That, to me, sums up my problems with Ray Kurzweil). The evolution of microprocessing has indeed been amazing. Every field that can be improved by having more and faster computational power has been touched by it, and will continue to be. But if computation is not your rate-limiting step, then there's a limit to how much work Moore's Law can do for you.
And computational power is not the rate-limiting step in drug discovery or in biomedical research in general. We do not have polynomial-time algorithms for predictive toxicology, or for models of human drug efficacy. We hardly have any algorithms at all. Anyone who feels like remedying this lack (and making a few billion dollars doing so) is welcome to step right up.
Note: it's been pointed out in the comments that cost-per-base of DNA sequencing has been dropping at an even faster than Moore's Law rate. So there is technological innovation going on in the biomedical field, outside of sheer computational power, but I'd still say that understanding is the real rate limiter. . .
There's a possible new area for drug discovery that's coming from a very unexpected source: enzymes that don't do anything. About ten years ago, when the human genome was getting its first good combing-through, one of the first enzyme categories to get the full treatment was the kinases. But about ten per cent of them, on closer inspection, seemed to lack one or more key catalytic residues, leaving them with no known way to be active. They were dubbed (with much puzzlement) "pseudokinases", with their functions, if any, unknown.
As time went on and sequences piled up, the same situation was found for a number of other enzyme categories. One family in particular, the sulfotransferases, seems to have at least half of its putative members inactivated, which doesn't make a lot of sense, because these things also seem to be under selection pressure. So they're doing something, but what?
Answers are starting to come in. Here's a paper from last year, on some of the possibilities, and this article from Science is an excellent survey of the field. It turns out that many of these seem to have a regulatory function, often on their enzymatically active relations. Some of these pseudoenzymes retain the ability to bind their original substrates, and those events may also have a regulatory function in their downstream protein interactions. So these things may be a whole class of drug targets that we haven't screened for - and in fact may be a set of proteins that we're already hitting with some of our ligands, but with no idea that we're doing so. I doubt if anyone in drug discovery has ever bothered counterscreening against any of them, but it looks like that should change. Update: I stand corrected. See the comment thread for more.
This illustrates a few principles worth keeping in mind: first, that if something is under selection pressure, it surely has a function, even if you can't figure out how or why. (A corollary is that if some sequence doesn't seem to be under such constraints, it probably doesn't have much of a function at all, but as those links show, this is a contentious topic). Next, we should always keep in mind that we don't really know as much about cell biology as we think we do; there are lots of surprises and overlooked things waiting for us. And finally, any of those that appear to have (or retain) small-molecule binding sites are very much worth the attention of medicinal chemists, because so many other possible targets have nothing of the kind, and are a lot harder to deal with.
From Nature comes this news of an effort to go back to oncology clinical trials and look at the outliers: the people who actually showed great responses to otherwise failed drugs.
By all rights, Gerald Batist’s patient should have died nine years ago. Her pancreatic cancer failed to flinch in the face of the standard arsenal — surgery, radiation, chemotherapy — and Batist, an oncologist at McGill University in Montreal, Canada, estimated that she had one year to live. With treatment options dwindling, he enrolled her in a clinical trial of a hot new class of drugs called farnesyltransferase inhibitors. Animal tests had suggested that the drugs had the potential to defeat some of the deadliest cancers, and pharmaceutical firms were racing to be the first to bring such compounds to market.
But the drugs flopped in clinical trials. Companies abandoned the inhibitors — one of the biggest heartbreaks in cancer research over the past decade. For Batist’s patient, however, the drugs were anything but disappointing. Her tumours were resolved; now, a decade later, she remains cancer free. And Batist hopes that he may soon find out why.
That's a perfect example, because pancreatic cancer has a well-deserved reputation as one of the most intractable tumor types, and the farnesylation inhibitors were indeed a titanic bust after much anticipation. So that combination - a terrible prognosis and an ineffective class of compounds - shouldn't have led to anything, but it certainly seems to have in that case. If there was something odd about the combination of mutations in this patient that made her respond, could there be others that would as well? It looks as if that sort of thing could work:
Early n-of-1 successes have bolstered expectations. When David Solit, a cancer researcher also at Memorial Sloan-Kettering, encountered an exceptional responder in a failed clinical trial of the drug everolimus against bladder cancer, he decided to sequence her tumour. Among the 17,136 mutations his team found, two stood out — mutations in each of these genes had been shown to make cancer growth more dependent on the cellular pathway that everolimus shut down. A further search revealed one of these genes — called TSC1 — was mutated in about 8% of 109 patients in their sample, a finding that could resurrect the notion of using everolimus to treat bladder cancer, this time in a trial of patients with TSC1 mutations.
So we are indeed heading to that dissection of cancer into its component diseases, which are uncounted thousands of cellular phenotypes, all leading to unconstrained growth. It's going to be quite a slog through the sequencing jungle along the way, though, which is why I don't share the optimism of people like Andy von Eschenbach and others who talk about vast changes in cancer therapy being just about to happen. These n-of-1 studies, for example, will be of direct benefit to very few people, the ones who happen to have rare and odd tumor types (that looked like more common ones at first). But tracking these things down is still worthwhile, because eventually we'll want to have all these things tracked down. Every one of them. And that's going to take quite a while, which means we'd better get started on the ones that we know how to do.
And even then, there's going to be an even tougher challenge: the apparently common situation of multiple tumor cell types in what looks (without sequencing) like a single cancer. How to deal with these, in what order, and in what combinations - now that'll be hard. But not impossible, and "not impossible" is enough to go on. Like Francis Bacon's "New Atlantis", what we have before us is the task of understanding ". . .the knowledge of causes, and secret motions of things; and the enlarging of the bounds of human empire, to the effecting of all things possible". Just don't put a deadline on it!
Over at NextMove software, they have an analysis of what kinds of reactions are being run most often inside a large drug company. Using the company's electronic notebook database and their own software, they can get a real-world picture of what people spend their time on at the bench.
The number one reaction is Buchwald-Hartwig amination. And that seems reasonable to me; I sure see a lot of those being run myself. The number two reaction is reduction of nitro groups to amines, which surprises me a bit. There certainly are quite a few of those - the fellow just down the bench from me was cursing at one just the other day - but I wouldn't have pegged it as number two overall. Number three was the good old Williamson ether synthesis, and only then do we get to the reaction that I would have thought would beat out either of these, N-acylation. After that comes sulfonamide formation, and that one is also a bit of a surprise. Not that there aren't a lot of sulfonamides around, far from it, but I was under the impression that a lot of organizations gave them the semi-official fish-eye, due to higher-than-average rates of trouble (PK and so on) down the line.
My first thought was that there might have been some big and/or recent projects that skewed the numbers around a bit. These sorts of data sets are always going to be lumpy, in the same way that compound collections tend to be (and for the same reasons). The majority of compounds (and reactions) pile up when a great big series of active compounds comes along with Structure X made via Reaction Scheme Y. But that, in a way, is the point: different organizations might have a slightly different rank-ordering, but it seems a safe bet that the same eight or ten reactions would always make up most of the list. (My candidate for number 6, the next one down on the above list: Suzuki coupling).
There's also a pie chart of the general reaction types that are run most often. The biggest category is heteroatom alkylation and arylation, followed by acylation in general. By the time you've covered those two, you've got half the reactions in the database. Next up is C-C bond formations (there are those Suzukis, I'll bet) and reductions. (Interestingly, oxidations are much further down the list). That same trend was noted in an earlier analysis of this sort, and nitro-to-amine reactions were thought to be the main reason for it, as seems to be the case here. There's at least one more study of this sort that I'm aware of, and it came to similar conclusions.
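For anyone who wants to play with this kind of tally themselves, here's a toy version of the counting, with made-up reaction labels and counts (nothing from a real ELN, obviously - the names and numbers are purely illustrative):

```python
from collections import Counter

# Hypothetical reaction-type labels, as if pulled from an ELN export.
reactions = (
    ["N-alkylation"] * 30 + ["O-alkylation"] * 12 + ["N-acylation"] * 25
    + ["Suzuki coupling"] * 15 + ["nitro reduction"] * 10 + ["oxidation"] * 3
)

counts = Counter(reactions)
total = sum(counts.values())

# Rank by frequency and show the running share, which is how you see
# a couple of categories covering half the database.
running = 0
for name, n in counts.most_common():
    running += n
    print(f"{name:16s} {n:3d}  cumulative {running / total:5.1%}")
```

With these invented numbers, the top two categories alone cover well over half the entries, which is the same lumpiness the real analyses report.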
One of the things that might occur to an academic chemist looking over these data is that none of these are exactly the most exciting reactions in the world. That's true, and that's the point. We don't want exciting chemistry, because "exciting" means that it has a significant chance of not working. Our reactions are dull as the proverbial ditchwater (and often about the same color), because the excitement of not knowing whether something is going to pan out or not is deferred a bit down the line. Just getting the primary assay data back on the compounds you just made is often an exercise in finger-crossing. Then waiting to see if your lead compound made it through two-week tox, now that's exciting. Or the first bit of Phase I PK data, when the drug candidate goes into a person's mouth for the first time. Or, even more, the initial Phase II numbers, when you find out if it might actually do something for somebody who's sick. Now those have all the excitement that you could want, and often quite a bit more. With that sort of unavoidable background, the chemistry needs to be as steady and reliable as it can get.
Things are a bit. . .unusual around here today. I'm home; my company (and others in Cambridge) called about 6 AM to tell all employees to stay put. What with mass transit shut down and everyone off the streets, I can see the point! And truth be told, I feel a bit odd, knowing that the gunfire, etc. last night started a few blocks from where I work. This is all happening miles to the east of where I live, but it still looks like a good day to stay off the roads. . .
Just as a quick example of how odd molecular recognition can be, have a look at this paper from Chemical Communications. It's not particularly remarkable, but it's a good example of what's possible. The authors used a commercial phage display library (this one, I think) to run about a billion different 12-mer peptides past the simple aromatic hydrocarbon naphthalene (immobilized on a surface via 2-naphthylamine). The usual phage-library techniques (several rounds of infection into E. coli followed by more selectivity testing against bound naphthalene and against control surfaces with no ligand) gave a specific 12-mer peptide. It's HFTFPQQQPPRP, for those who'd like to make some. Note: I typo-ed that sequence the first time around, giving it only one phenylalanine, unhelpfully.
Now, an oligopeptide isn't the first thing you'd imagine being a selective binder to a simple aromatic hydrocarbon, but this one not only binds naphthalene, but it has good selectivity versus benzene (34-fold), while anthracene and pyrene weren't bound at all. From the sequence above, those of you who are peptide geeks will have already figured out roughly how it does it: the phenylalanines are pi-stacking, while the proline(s) make a beta-turn structure. Guessing that up front would still not have helped you sort through the possibilities, it's safe to say, since that still leaves you with quite a few.
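To put that 34-fold selectivity in energetic terms: if the fold difference reflects a ratio of dissociation constants, it corresponds to only about 2 kcal/mol of binding free energy at room temperature. A quick sketch (the function name and the 298 K assumption are mine):

```python
import math

R = 1.987e-3   # gas constant in kcal/(mol*K)
T = 298.0      # assumed room temperature, K

def ddG_from_fold_selectivity(fold):
    """Free-energy difference implied by a ratio of Kd values: ddG = RT ln(fold)."""
    return R * T * math.log(fold)

# 34-fold naphthalene-over-benzene selectivity
print(f"{ddG_from_fold_selectivity(34):.2f} kcal/mol")  # ~2.1 kcal/mol
```

Which is roughly what one well-placed pi-stacking interaction or a couple of good contacts can buy you - so the selectivity, striking as it sounds, is chemically quite plausible.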
But the starting phage library itself doesn't cover all that much diversity. Consider 20 amino acids at twelve positions: 4.096 times ten to the fifteenth. The commercial library covers less than one millionth of the possible oligopeptide space, and we're completely ignoring disulfide bridges. To apply the well-known description from the Hitchhiker's Guide to the Galaxy, chemical space is big. "Really big. You just won't believe how vastly, hugely, mindbogglingly big it is. . ."
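The arithmetic there is easy to check. Here's a minimal sketch, assuming a library diversity of about a billion clones (the figure quoted above for the commercial library):

```python
AMINO_ACIDS = 20
POSITIONS = 12

total_space = AMINO_ACIDS ** POSITIONS   # every possible 12-mer peptide
library_size = 1e9                       # assumed diversity of the phage library

coverage = library_size / total_space
print(f"Possible 12-mers:  {total_space:.3e}")   # ~4.096e+15
print(f"Library coverage:  {coverage:.2e}")      # well under one millionth
```

So even a billion-member library samples less than a millionth of 12-mer peptide space, and that's before worrying about disulfides or anything longer than twelve residues.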
I've linked to some very skeptical takes on the ENCODE project, the effort that supposedly identified 80% of our DNA sequence as functional to some degree. I should present evidence for the other side as it comes along, though, and some may have just arrived.
Two recent papers in Cell tell the story. The first proposes "super-enhancers" as regulators of gene transcription. (Here's a brief summary of both). These are clusters of known enhancer sequences, which seem to recruit piles of transcription factors, and act differently from the single-enhancer model. The authors show evidence that these are involved in cell differentiation, and could well provide one of the key systems for determining eventual cellular identity from pluripotent stem cells.
Interest in further understanding the importance of Mediator in ESCs led us to further investigate enhancers bound by the master transcription factors and Mediator in these cells. We found that much of enhancer-associated Mediator occupies exceptionally large enhancer domains and that these domains are associated with genes that play prominent roles in ESC biology. These large domains, or super-enhancers, were found to contain high levels of the key ESC transcription factors Oct4, Sox2, Nanog, Klf4, and Esrrb to stimulate higher transcriptional activity than typical enhancers and to be exceptionally sensitive to reduced levels of Mediator. Super-enhancers were found in a wide variety of differentiated cell types, again associated with key cell-type-specific genes known to play prominent roles in control of their gene expression program
On one level, this is quite interesting, because cellular differentiation is a process that we really need to know a lot more about (the medical applications are enormous). But as a medicinal chemist, this sort of news sort of makes me purse my lips, because we have enough trouble dealing with the good old fashioned transcription factors (whose complexes of proteins were already large enough, thank you). What role there might be for therapeutic intervention in these super-complexes, I couldn't say.
The second paper has more on this concept. They find that these "super-enhancers" are also important in tumor cells (which would make perfect sense), and that they tie into two other big stories in the field, the epigenetic regulator BRD4 and the multifunctional protein cMyc:
Here, we investigate how inhibition of the widely expressed transcriptional coactivator BRD4 leads to selective inhibition of the MYC oncogene in multiple myeloma (MM). BRD4 and Mediator were found to co-occupy thousands of enhancers associated with active genes. They also co-occupied a small set of exceptionally large super-enhancers associated with genes that feature prominently in MM biology, including the MYC oncogene. Treatment of MM tumor cells with the BET-bromodomain inhibitor JQ1 led to preferential loss of BRD4 at super-enhancers and consequent transcription elongation defects that preferentially impacted genes with super-enhancers, including MYC. Super-enhancers were found at key oncogenic drivers in many other tumor cells.
About 3% of the enhancers found in the multiple myeloma cell line turned out to be tenfold-larger super-enhancer complexes, which bring in about ten times as much BRD4. It's been recently discovered that small-molecule ligands for BRD4 have a large effect on the cMyc pathway, and now we may know one of the ways that happens. So that might be part of the answer to the question I posed above: how do you target these things with drugs? Find one of the proteins that it has to recruit in large numbers, and mess up its activity at a small-molecule binding site. And if these giant complexes are even more sensitive to disruptions in these key proteins than usual (as the paper hypothesizes), then so much the better.
It's fortunate that chromatin-remodeling proteins such as BRD4 are (at least in some cases) filling that role, because they have pretty well-defined binding pockets that we can target. Direct targeting of cMyc, by contrast, has been quite difficult indeed (here's a new paper with some background on what's been accomplished so far).
Now, to the level of my cell biology expertise, the evidence that these papers have looks reasonably good. I'm certainly willing to believe that there are levels of transcriptional control beyond those that we've realized so far, weary sighs of a chemist aside. But I'll be interested to see the arguments over this concept play out. For example, if these very long stretches of DNA turn out indeed to be so important, how sensitive are they to mutation? One of the key objections to the ENCODE consortium's interpretation of their data is that much of what they're calling "functional" DNA seems to have little trouble drifting along and picking up random mutations. It will be worth applying this analysis to these super-regulators, but I haven't seen that done yet.
There's another paper in the Nature Chemical Biology special issue that I wanted to mention, this one on "Translational Synthetic Chemistry". I can't say that I like the title, which seems to me to have a problem with reification (treating as a real, distinct thing something that isn't necessarily a thing at all). I'm not so sure that there is a separate thing called "Translational Synthetic Chemistry", and I'm a bit worried that it might become a catch phrase all its own, which I think might lead to grief.
But that said, I still enjoyed the article. The authors are from H3 Biomedicine in Cambridge, which as I understand it is an offshoot of the Broad Institute and has several Schreiber-trained chemists on board. That means Diversity-Oriented Synthesis, of course, which is an area that I've expressed reservations about before. But the paper also discusses the use of natural product scaffolds as starting materials for new chemical libraries (a topic that's come up here and here), and the synthesis of diverse fragment collections beyond what we usually see. "Fragments versus DOS" has been set up before as a sort of cage match, but I don't think that has to be the case. And "Natural products versus DOS" has also been taken as a showdown, but I'm not so sure about that, either. These aren't either/or cases, and I don't think that the issues are illuminated by pretending that they are.
The authors end up calling for more new compound libraries, made by more new synthetic techniques, and assayed by newer and better high-throughput screens. Coming out against such recommendations makes a person feel as if they're standing up to make objections against motherhood and apple pie. And it's not that I think that these are bad ideas, but I just wonder if they're sufficient. Chemical space, as we were discussing the other day, is vast - crazily, incomprehensibly vast. Trying to blast off into it at random (which is what the pure DOS approaches have always seemed like to me) looks like something that a person could do for a century or two without seeing much return.
So if there are ways to increase the odds, I'm all for them. Natural-product-like molecules look like as good a way as any to do this, since they at least have the track record of evolution on their side. Things that are in roughly these kinds of chemical space, but which living organisms haven't gotten around to making, are still part of a wildly huge chemical space, but one that might have somewhat higher hit rates in screening. So Paul Hergenrother at Illinois might have the right idea when he uses natural products themselves as starting materials and makes new compound libraries from them.
So, who else is doing something like that? And what other methods do we have to make "natural-product-like" structures? Suggestions are welcome, and I'll assemble them and any ideas I have into another post.
I'm still trying to figure out if anyone I know personally was injured during yesterday's bombing of the Boston Marathon. So far, it's just been a couple of close calls. As it happened, I was out of town yesterday, and only saw the news in the early evening.
What sort of explosive chemistry was used might provide some clues about the people who did this - different groups have different ideas about what makes the best catastrophe. What sort of thinking allows a human being to go ahead with an act like this - bombing a festive crowd of innocent spectators and families on a spring afternoon - is beyond my comprehension, though.
I'll be traveling today, so I probably won't have another post up. My condolences to everyone affected by this act, and for those who perpetrated it, honi soit qui mal y pense, in the sense of "Evil to those who think evil".
Well, although this is only the 15th of the month, today I'll already break my monthly traffic record for the site. That's thanks to a link from xkcd's "What If" page, which led to a couple of mentions on Reddit, which led to a front-page link at ycombinator's Hacker News, and who knows what else. Update: such as a mention on Twitter from Adam Savage of Mythbusters! Many thanks to everyone who's stopped by, and especially to those who have been linking!
Here's another look at the vast universe of things that no chemist has ever made. Estimates of the number of compounds with molecular weights under 500 run as high as ten to the sixtieth, which is an incomprehensibly huge number. We're not going to be able to put any sort of dent in that figure even if we convert the whole mass of the solar system into compound sample vials, so the problem remains: what's out there in that territory, and how do we best approach it?
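Just to show how hopeless that vial-filling project would be, here's the back-of-the-envelope version, assuming a solar-system mass of roughly 2 × 10^30 kg (it's nearly all Sun) and a single molecule of each compound at MW 500 (both assumptions mine, and generous - one molecule is not much of a sample):

```python
AVOGADRO = 6.022e23        # molecules per mole
SOLAR_SYSTEM_KG = 2.0e30   # rough total mass, dominated by the Sun
MW = 500.0                 # g/mol, the upper end of the size range quoted

# Total molecules of MW 500 you could make from the whole solar system
molecules = SOLAR_SYSTEM_KG * 1000 / MW * AVOGADRO

fraction = molecules / 1e60
print(f"Molecules available:    {molecules:.1e}")   # ~2.4e54
print(f"Fraction of 1e60 space: {fraction:.1e}")    # a few parts per million
```

Even turning every gram of the solar system into product, one molecule per compound, you'd cover only a few millionths of that 10^60 estimate. Hence "not any sort of dent".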
Well, numbers of that magnitude are going to need some serious computation paring-down before we can take a crack at them, and that's what this latest paper tries to do. I'll refer interested readers to it (and to its supplementary information) for the details, but in brief, it takes a seed structure or two, adds atoms to them, goes through rounds of mutations and parings (according to filters that can be set for functional groups, properties, etc.) and then sends the whole set back around for more. This is going to rapidly explode in size, naturally, so at each stage the program picks a maximally diverse subset to go on with and discards the rest.
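The general shape of that loop is easy to sketch, even if the real thing works on molecular graphs with chemistry-aware mutation operators. Here's a toy version with bitstrings standing in for molecules, a bit-count rule standing in for property filters, and Hamming distance standing in for structural dissimilarity (all placeholders of mine, not the paper's actual method):

```python
import random

random.seed(0)
N_BITS = 32   # toy "fingerprint" length

def mutate(fp):
    """Flip one random bit - a stand-in for adding or changing an atom."""
    return fp ^ (1 << random.randrange(N_BITS))

def hamming(a, b):
    return bin(a ^ b).count("1")

def passes_filters(fp):
    """Placeholder property filter (e.g. functional-group or MW rules)."""
    return bin(fp).count("1") <= N_BITS // 2

def max_diverse_subset(pool, k):
    """Greedy max-min picking: repeatedly add the structure farthest from those chosen."""
    pool = list(dict.fromkeys(pool))   # dedupe, keep order
    chosen = [pool[0]]
    while len(chosen) < min(k, len(pool)):
        rest = [fp for fp in pool if fp not in chosen]
        chosen.append(max(rest, key=lambda fp: min(hamming(fp, c) for c in chosen)))
    return chosen

pool = [0b1010]                      # seed structure(s)
for _ in range(5):                   # rounds of mutation and paring
    grown = [mutate(fp) for fp in pool for _ in range(10)]
    survivors = [fp for fp in grown if passes_filters(fp)]
    pool = max_diverse_subset(pool + survivors, k=20)

print(len(pool), "diverse candidates after 5 rounds")
```

The point of the greedy diversity step is the same as in the paper: without it, each round multiplies the pool tenfold and the enumeration explodes; with it, you carry forward a fixed-size, maximally spread-out sample of the space.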
Here are some of the compounds that come out, just to give you the idea. And they're right; I never would have thought of some of these, and I hope some of them never cross my mind again. I presume that this set has been run with rather permissive structural filters, because there are things there that (1) I don't know how to make, and (2) I'm not sure if anyone else knows how to make yet, and (3) I'm not sure how stable and isolable they'd be even if anyone did. My first reaction is that there sure are a lot of acetals, ketals, hemithioketals and so on in this set, but I'm sure that's an artifact of some sort. Any selection of a set of 10^60 compounds is an artifact of some sort.
So my next question is, what might people use such a program for? Ideas that they wouldn't have come up with, something to stir the imagination? Synthetic challenges to try for, to realize some of these compounds? The authors point out that neither nature nor man has ever really taken advantage of chemical diversity, not compared to what's possible. And that's true, but the possible numbers of compounds are still so terrifying that I wonder what we'll accomplish with drops in the bucket. (There's another paper that bears on this that I'll comment on later this week; this theme will return shortly!)
Nano-everything has been the rule for several years now, to judge from press releases and poster abstracts. But here's an article in Nature Reviews Drug Discovery that's wondering what, exactly, "nanomedicine" has offered so far:
. . .Indeed, by some quantitative measures, the field is flourishing; over the past decade there has been an explosive growth in associated publications, patents, clinical trials and industry activity. For example, a search of the worldwide patent literature using 'nanoparticle and drug' resulted in over 30,000 hits. . .
New biomedical technologies have often undergone a similar life cycle. Initially, exciting pioneering studies result in a huge surge of enthusiasm in academia and in the commercial arena. Then, some of the problems and limitations inherent in the technology emerge, the initial enthusiasm is deflated, and many players leave the field. A few enthusiasts persist and eventually the technology finds its appropriate place in research as well as in clinical and commercial applications. It seems possible that nanomedicine is now verging on the phase of disillusionment.
That's exactly the cycle, and what's never clear is how steep the peaks and valleys are. Some of those deflationary cycles go so deep as to take out the entire field (which may or may not be rediscovered years later). That's not going to happen with nanoparticle drug delivery, but it's certainly not going to make everyone rich overnight. As the article goes on to detail, these formulations are expensive to make (and have tricky quality control issues), and they're not magic bullets for drug delivery across membranes, either. So far, the record of the ones that have made it to market is mixed:
. . .Addressing these challenges would be strongly justified if major benefits were to accrue to patients. But is this happening? Although we do not know the potential benefits of the nanomedicines currently under development, we can examine the early-generation nanoparticle drugs that entered the clinic in the 1990s and 2000s, such as the liposomal agents Doxil and Ambisome and the protein–drug nanocomplex Abraxane. These agents are far more costly than their parent drugs (doxorubicin, amphotericin B and paclitaxel, respectively). Furthermore, these nanomedicines made their mark in the clinic primarily by reducing toxicity rather than improving efficacy. . .
The big question is whether these toxicity reductions warrant the increased prices, and the answer isn't always obvious. The current generation of nanoformulations is a different beast, in many cases, and it's too early to say how they'll work out in the real world. But if you Google the words "drug nanoparticles revolution", you get page after page of stuff, and clearly not all of it is going to perform as hoped. Funding seems to be cresting (or to have crested) for this sort of thing for now, and I think that the whole field will have to prove itself some more before it climbs back up again.
Now here's one that I didn't know about: a reader sends along word that the former clinical candidate GW501516 is enjoying some popularity on the black market among cyclists and other athletes.
I remember that compound well from the days when I did PPAR nuclear receptor research. It's the very model of a PPAR-delta ligand - GlaxoSmithKline had it in the clinic for some time, until it slowly disappeared from their roster. In 2007, the Evans lab at the Salk Institute published a paper suggesting that the compound increased endurance, and that sent it right into the athletic underworld. I have no idea if it does what its users want, but I do know that I wouldn't touch the stuff. The PPAR compounds have a very, very wide range of effects, and unraveling those proved to be very difficult indeed. Long-term effects of a compound like this one are unknown - all we know is that GSK dropped it from the clinic, and that could well have been for tox. Taking this stuff to gain some time in a bicycle race is sheer foolhardiness.
Here's an excellent look back by venture capitalist Bruce Booth at one of the companies his firm funded. But this isn't one of those we-exited-with-a-thirtyfold-return stories. On-Q-ity, a diagnostic play, has unfortunately just folded.
There were several reasons for this, but I'd guess that the ones below really, really didn't help:
. . .By mid-2010, only six months after the Series A came together, it was clear that the DNA repair biomarkers were going to be tough, as an early trial failed to reproduce the nice Kaplan-Meier curves of the original academic work. By late 2010/early 2011, two more larger trials read out negatively so we decided to terminate that effort. But unfortunately those trials and the biomarker lab work required to support them consumed 60%+ of the capital in the Series A round.
Not much had gone into the CTC platform in that first year and so early in 2011 the company refocused exclusively on CTCs and streamlined the team, but the clock was ticking. As we dug in to the status of the CTC platform, it was very clear that lots more work needed to be done – the paper descriptions of what it was supposed to deliver didn’t map to the platform’s actual robustness (or lack thereof) at that time. Antibodies that were supposedly functional turned out not to work, and several other things like this. . .
This looks like yet another example of something that never worked as well in the real world as it did in the publications. Bruce himself has blogged about this problem, which shows you that it's lying in wait for everyone trying to make something out of new discoveries. I recommend the whole post, especially for anyone working at a small startup or thinking about doing so. It shows you some things to stay alert for, and there are many.
The advent of real X-ray structures for receptors means that many experimental approaches can now be tried that earlier would have been (most likely) foolhardy. My first research in the industry was on dopamine receptors, which I followed up with a stint on muscarinics, and we really did try to be rational drug designers. But that meant homology models and single-point mutations, and neither of those was always as helpful as you'd like. OK, fine: neither of them was very helpful at all, when you got right down to it. We kept trying to understand how our compounds were binding, but outside of the obvious GPCR features - gotta have a basic amine down there - we didn't get very far.
That's not to say that we didn't make potent, selective compounds. We certainly did, although you'll note that I'm not using the word "drug". For many of them, even the phrase "plausible clinical candidate" is difficult to get out with a straight face, potent and selective though they may have been. We made all these compounds, though, the old-fashioned way: straight SAR, add this on and take that away, fill out the table. Structural biology insights didn't really drive things much.
So when the transmembrane receptor X-ray structures began to show up, my first thought was whether or not they would have helped in that earlier effort, or whether they still had enough rough edges that they might have just helped to mislead us into thinking that we had things more figured out than we did. There's a report, though, in the latest J. Med. Chem. that puts such structures to a pretty good test: can you use them to do fragment-based drug discovery?
Apparently so, at least up to the point described. This is the most complete example yet reported of FBDD on a G-protein coupled receptor (beta-1 adrenergic). Given the prominence of receptors as drug targets, the late advent of fragment work in this field should tell you something about how important it is to have good structural information for a fragment campaign. I'm not sure if I've ever heard of one being successful without it - people say that it can be done, but I certainly wouldn't want to be the person doing it. That's not to say that X-ray structures are some sort of magic wand (this review should disabuse a person of that notion) - just that they're "necessary, but not sufficient" for getting a fragment program moving at reasonable speed. Otherwise, the amount of fumbling around at the edge of assay detection limits would be hard to take.
The beta-adrenergic receptor is the one with the most X-ray data available, with several different varieties of agonists and antagonists solved. So if any GPCR is going to get the fragment treatment, this would be the one. (There's also been a recent report of a fragment found for an adenosine receptor, which was largely arrived at through virtual screening). In this case, the initial screening was done via SPR (itself a very non-trivial technique for this sort of thing), followed by high-concentration radioligand assays, and eventual X-ray structure. They found a series of arylpiperazines, which are thoroughly believable as GPCR hits, although they don't have much of a history at the adrenergic receptor itself. The compounds are probably antagonists, mainly because they aren't making enough interactions to flip the switch to agonist (at least not yet).
This paper only takes things up to this point, which is still a lot farther than anyone would have imagined a few years ago. My guess is that FBDD is still not ready for the spotlight in this field, though. This paper is from Miles Congreve and the folks at Heptares, world experts in GPCR crystallography, and presumably represents something pretty close to the state of the art. It's a proof-of-concept piece, but until the structures of more difficult receptors are available with more regularity, I don't think we'll see too much fragment work in the area. I'd be happy to be wrong about that.
There's a comment made by CellBio to the recent post on phenotypic screening that I wanted to highlight, because I think it's an important point:
In drug discovery, we need fewer biologists dedicated to their biology, and more pharmacologists dedicated to testing the value of compounds.
He's not the first one to bemoan the decline of classic pharmacology. What we're talking about are the different answers to the (apparently) simple question, "What does this compound do?" The answer you're most likely to hear is something like "It's a such-and-such nanomolar Whateverase IV inhibitor". But the question could also be answered by saying what the compound does in cells, or in whole animals. It rarely is, though. We're so wedded to mechanisms and targets that we organize our thinking that way, and not always to our benefit.
In the case of the compound above, its cell activity may well be a consequence of its activity against Whateverase IV. If you have some idea of that protein's place in cellular processes, you might be fairly confident. But you can't really be sure about it. Do you have enzyme assays counterscreening against Whateverases I through V? How about the other enzymes with similar active sites? What, exactly, do all those things do in the cell if you are hitting them? Most of the time - all of the time, to be honest - we don't know the answers in enough detail.
So when people ask "What's this compound do?", what they're really asking, most of the time, is "What does it do against the target that it was made for?" A better question would be "What does it do against everything it's been tested against?" But the most relevant questions for drug discovery are "What does it do to cells - and to animals? Or to humans?"
Update: Wavefunction, in the comments, mentions this famous article on the subject, which I was thinking of in the first paragraph after the quote above, but couldn't put my finger on. Thanks!
I have affection for some reagents, and have taken a dislike to others. That might be seen as odd, because if there's anything that can't return your feelings, it's a chemical reagent. But after some years in the lab, you associate some compounds (and some reactions) with good events, and others with spectacularly bad ones, so it's a natural response.
Today, for example, I'm breaking out some potassium hexamethyldisilazide, known in the trade (for obvious reasons) as K-HMDS. I'm in need of a strong base, and this one has worked for me in a couple of tight spots over the years, which makes me very friendly towards it. The first of those was back in grad school. It was, in retrospect, one of the first times I ever figured out what was going wrong with a reaction from first principles. Knowledge being power and all that, I was then able to come up with a fix, switching my base away from the lithium reagents I'd been using to KHMDS. I can still remember looking at the TLC plate in disbelief, having suddenly seen the yield go from flat zero to over 90%. I'll always be loyal after an experience like that.
There are others. As I've mentioned, I'll always love copper sulfate, just because of its color and because it was one of the first chemical reagents I ever owned as a boy. There are a couple of carbohydrate derivatives (such as good ol' "diacetone glucose") that, unlike some of their cousins, always treated me well during my PhD work, and I'm happy to see them on the rare occasions I have use for them. And as usual with the human brain, there are certain chemical smells that I immediately associate, nostalgically, with old labs. I'm not even sure what some of these are, but they're immediately recognizable, and my first thought is "Now that's chemistry".
But there's a flip side. There are reagents that have done nothing but waste my time and chew up my starting materials, and it's hard for me to warm up to them after that. I'm not sure if anyone likes trimethyl phosphite - it has a smell that seems as if it would work its way through a concrete block - but I spent too much time trying to use it (unsuccessfully) for a tricky way out of a problem back in grad school, and I now associate its odor with frustration. I can tell that it's not just that it has a bad odor in general - ethyl vinyl ether is nobody's cologne, either, but that one makes me think of the summer of 1984 and a bunch of Claisen rearrangements I was running, and I don't mind that at all. Mercuric oxide is colorful, so you'd think I might like it, but aside from it being toxic, I had some painful experiences with it in some old desulfurization reactions, and it'll never recover with me. And the so-called "higher-order" cuprates, made with copper cyanide - I'm not sure if anyone uses those any more, but I swore years ago to never touch one of those evil things again, and I've stuck to that.
My lists aren't always that absolute. As mentioned here, I went through a period where I absolutely could not take tosyl chloride, but not having to work with kilos of the stuff has gradually allowed it to move back into what's at least neutral territory. For me, that reagent is like running into someone from your old school that you didn't always care for at the time, but with whom you now seem to have at least some common ground in which to share memories.
So my shelves are full of friends and enemies. And now I'm off to see if my old pal, KHMDS, can come through for me again!
And since that last post was about sirtuins, here's a new paper in press at J. Med. Chem. from the Sirtris folks (or the Sirtris folks that were, depending on who's making the move down to PA). They report a number of potent new sirtuin inhibitor compounds, which certainly do look drug-like, and there are several X-ray structures of them bound to SIRT3. It seems that they're mostly SIRT1/2/3 pan-inhibitors; if they have selective compounds, they're not publishing on them yet.
I should also note, after this morning's post, that the activities of these compounds were characterized by a modified mass spec assay! I would expect sirtuin researchers in other labs to gladly take up some of these compounds for their own uses. . .
Note: I should make it clear that these are more compounds produced via the DNA-encoded library technology, and that they represent yet another chemotype from this work.
You know, mass spectrometry has been gradually taking over the world. Well, maybe not your world, but mine (and that of a lot of biopharma/biophysical researchers). There are just so many things that you can do with modern instrumentation that the assays and techniques just keep on coming.
This paper from a recent Angewandte Chemie is a good example. They're looking at post-translational modifications of proteins, which has always been a big field, and shows no signs of getting any smaller. The specific example here is SIRT1, an old friend to readers of this site, and the MALDI-based assay reported is a nice alternative to the fluorescence-based assays in that area, which have (notoriously) been shown to cause artifacts. The mass spec can directly detect deacetylation of a 16-mer histone H4 peptide - no labels needed.
The authors then screened a library of about 5500 natural product compounds (5 compounds per well in 384-well plates). As they showed, though, the hit rates observed would support higher pool numbers, and they successfully tested mixtures of up to 30 compounds at a time. Several structures were found to be micromolar inhibitors of the deacetylation reaction. None of these look very interesting or important per se, although some of them may find use as tool compounds. But the levels of detection and the throughput make me think that this might be a very useful technique for screening a fragment library.
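To put the pooling arithmetic in perspective, here's a quick back-of-the-envelope sketch (the well and plate counts are my own figures, not from the paper, and they ignore wells set aside for controls):

```python
import math

def wells_needed(n_compounds, pool_size):
    """Assay wells required to cover the whole library at a given pool size."""
    return math.ceil(n_compounds / pool_size)

def plates_needed(n_wells, wells_per_plate=384):
    """Plates required, ignoring wells reserved for controls."""
    return math.ceil(n_wells / wells_per_plate)

library = 5500  # roughly the size of the natural-product collection screened

for pool in (5, 30):  # the pool sizes mentioned above
    w = wells_needed(library, pool)
    print(f"pool of {pool}: {w} wells, {plates_needed(w)} plate(s)")
```

At five compounds per well, you're looking at about three 384-well plates; at thirty per well, the whole library fits on a single plate, which is where the throughput of a label-free readout really starts to pay off.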
Interestingly, they were also able to run the assay in the other direction, looking at acetylation of the histone protein, and discovered a new inhibitor of that process as well. These results prompted the authors to speculate that their assay conditions would be useful for a whole range of protein-modifying targets, and they may well be right.
So if this is such a good idea, why hasn't it been done before? The answer is that it has, especially if you go beyond the "open literature" and into the patents. Here, for example, is a 2009 application from Sirtris (who else?) on deacetylation/acetylation mass spec assays. And here's a paper (PDF) from 2009 (also in Angewandte) that used shorter peptides (6-mers) to profile enzymes of this type as well. There are many other assays of this sort that have been reported, or worked out inside various biopharma companies for their own uses. But this latest paper serves to show people (or remind them) that you can do such things on realistic substrates, with good reproducibility and throughput, and without having to think for a moment about coupled assays, scintillation plates, fluorescence windows, tagged proteins, and all the other typical details. Other things being equal, the more label-free your assay conditions, the better off you are. And other things are getting closer to equal all the time.
Yep, these all tie together. Have a look at this post at Retraction Watch for the details. It's about Colin Purrington, who has a web site on designing posters for conferences. I hadn't seen it before, but it's attained quite a bit of popularity (as it should; it seems to be full of sound advice). Purrington himself has put a lot of work into it, and has decided to protect his copyright.
That means that you have to police these things. I do a little of that myself, when I come across cheapo content-scraping blog sites that are just ripping off my posts, one after the other. What's silly about that is that I almost always grant permission to reprint things if someone goes to the trouble of asking. Colin Purrington seems to have had his hands full with people helping themselves to his work, and the latest example was from the Consortium for Plant Biotechnology Research. He sent them a please-take-this-down notice, and his notices apparently lean towards the colorful. It included a request for the head, on a platter, of whoever it was that decided to rip him off without attribution. He did offer to pay for shipping.
That didn't go over too well. He's received one of those the-sky-shall-fall-upon-you letters from CPBR's expensive lawyers, quoting copyright law to him and accusing him of taking his information from them. (There are archives of Purrington's material going back to 1997, so that should be fun to dispose of). And he was also informed that the staff took his head/platter request as a physical threat, worth contacting authorities about if repeated.
I'm sure there will be more to this story. But so far, I think that we can conclude that no matter how expensive your legal counsel, you're going to have to pay them even more if you expect them to exhibit a sense of humor.
I wanted to mention another paper from Nature Chemical Biology's recent special issue, this one on the best ways to run phenotypic screens. This area has been making a comeback in recent years (as discussed around here before), so articles like this are very useful to help people get up to speed - similar to that article on fragment-based screening pitfalls I mentioned last week.
The author, Ulrike Eggert at King's College (London), has this observation about the cultural factors at work here:
Although my degrees are in chemistry, I have worked mostly in academic biology environments and have been immersed in the cultures of both disciplines. In my experience, biologists are very happy to use chemical tools if they are available (even if they are not optimal) but are less enthusiastic about engaging in small-molecule discovery. One of the reasons for this is that the academic funding culture in biology has focused almost entirely on hypothesis-driven research and has traditionally been dismissive of screening programs, which were considered to be nonintellectual fishing expeditions. With a growing appreciation for the value of interdisciplinary science and the serious need for new tools and approaches, this culture is slowly changing. Another reason is that some early phenotypic screens were perceived to have been only partial successes, resulting in 'low-quality' (for example, low-potency and nonselective) chemical probes.
These observations are right on target. The reaction of some academic biologists to screening programs reminds me of the reaction of some chemists to the "reaction discovery" schemes that have emerged in recent years: "Well, if you're just going to stagger around in circles until you trip over something, then sure. . ." But this, to me, just means that you should be careful to set up your discovery programs in the right places. One of my favorite quotes comes from Francis Crick, talking about the discovery of the double helix structure: "It's true that by blundering about we stumbled on gold, but the fact remains that we were looking for gold."
Eggert goes on to lay out the basic principles for success in this field. First, you'd better have clear, well-defined phenotypes as your readout, or you're sunk right from the start. Cell death is a pretty poor choice, for example, given the number of ways that you can kill a cell, and unfortunately, the same goes for inhibiting proliferation of cancer cells in vitro. There really are an awful lot of compounds that will do that, in one cell line or another, and most of them are of no use at all. It's important to remember, though, that "well-defined" doesn't mean setting the boundaries so tight that you'll miss something interesting and unusual if it shows up - what it means is understanding your system well enough so that you'll recognize something unusual if it happens.
Assay design is, of course, critical. What's your signal-to-noise? How high is the throughput? How good are the positive and negative controls? What are the secondary assays that could be used to characterize your hits? And the point is also emphasized that the usual problem in these systems is not that you don't get any hits, but that you get so many that following them up is a problem all by itself. You're probably not going to find some compound that just lights up the assay perfectly all by itself - the more typical situation is a whole pile of different-looking things that might have worked, sort of. Sorting those out is a painful but essential part of the screen.
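One common way to put a number on that signal-to-noise question is the Z'-factor, calculated from a plate's positive and negative controls. A minimal sketch (the control readouts below are invented purely for illustration):

```python
from statistics import mean, stdev

def z_prime(pos, neg):
    """Z'-factor: 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    Values above roughly 0.5 are usually taken to mean a screenable assay."""
    return 1 - 3 * (stdev(pos) + stdev(neg)) / abs(mean(pos) - mean(neg))

# Invented control readouts, for illustration only
positives = [98, 102, 100, 97, 103]  # e.g. full-signal control wells
negatives = [5, 7, 6, 4, 8]          # e.g. background / fully blocked wells

print(round(z_prime(positives, negatives), 2))
```

A wide, reproducible separation between the controls gives a Z' near 1; noisy or overlapping controls drive it toward zero or below, which is your cue to fix the assay before screening anything with it.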
I'm a fan of phenotypic screening, personally, mainly because I don't think that we're smart enough to always realize what it is we're looking for, or exactly how to find it. But done suboptimally, this sort of screen is capable of wasting more time and effort than almost any other method. Eggert's article (and the references in it) are essential reading for anyone trying to get into the field. Medicinal chemists who find themselves working in this area for the first time should make sure to get caught up on these issues, because good med-chem followup is essential to any successful phenotypic campaign, and you want to make sure (as usual) that you're marching under the right flag.
Sunday isn't usually a big day for announcements from big pharma companies. But yesterday is when Bristol-Myers Squibb let everyone know that their CSO, Elliott Sigal, is retiring. I wonder when he found out - Saturday night? More from FierceBiotech here.
Over the years, I've probably had more hits on my "Sand Won't Save You This Time" post than on any other single one on the site. That details the fun you can have with chlorine trifluoride, and believe me, it continues (along with its neighbor, bromine trifluoride) to be on the "Things I Won't Work With" list. The only time I see either of them in the synthetic chemistry literature is when a paper by Shlomo Rozen pops up (for example), but despite his efforts on its behalf, I still won't touch the stuff.
And if anyone needs any more proof as to why, I present this video, made at some point by some French lunatics. You may observe the mild reactivity of this gentle substance as it encounters various common laboratory materials, and draw your own conclusions. We have Plexiglas, a rubber glove, clean leather, not-so-clean leather, a gas mask, a piece of wood, and a wet glove. Some of this, under ordinary circumstances, might be considered protective equipment. But not here.
The reaction discovery field continues to increase its throughput, on ever-smaller amounts of material. (That link has several previous discussions here embedded in it). The latest report uses laser-assisted mass spec to analyze aliquots (less than a microliter each) of 696 different reactions and controls, pulled directly from the 96-well plates with no purification. That took the MALDI-TOF machine about two hours, in case you're wondering - setting up the experiments definitely took a lot longer (!)
The key to getting this to work was having a pyrene moiety attached to the back end of the substrate(s) for reaction discovery. This serves as a mass spec label - it ionizes very efficiently under the laser conditions, and allows excellent signal/noise coming out of all the other reaction gunk that might be in there. You can monitor the disappearance of the starting material and/or the appearance of new products, as you wish. In this case, the test bed was an electron-rich alkyne starting material, exposed to a variety of reacting partners and various metal catalysts. The screen picked up two previously unknown annulations, which were then optimized in a second round of experiments.
I continue to think that this sort of work has the potential to remake synthetic chemistry. Whenever there's some potential for new reactions to be found (and metal-catalyzed systems are a prime example) these techniques will let us survey the landscape much more quickly. There's no reason to think that we've managed to find even a good fraction of the useful chemistry out there.
I feel as if there should be some good news around here on the hiring front, so when any becomes available I want to try to mention it. So here's some: Regeneron has announced today that they're expanding their site in Westchester (NY), adding another 300,000 square feet of lab and office space, and adding over 400 new jobs in a number of areas.
The fusion protein Eylea (aflibercept) has been doing very well for them since its approval in 2011. And they're very much in the hunt for PCSK9 therapies, which could provide a completely new LDL-lowering mechanism. (Here's some good background from John LaMattina on that - Sanofi and Regeneron are running one of those humungous cardiovascular Phase III trials as we speak, and the results of it (compared to the statin standard of care) are going to be extremely interesting). If those numbers come out well, Regeneron could be looking for even more room.
According to FierceBiotech, Amylin's La Jolla site is to be shut down. People have been getting let go from there for months now, ever since BMS bought them, but now everything must go. That's not what the San Diego region needs - another big closure - but there it is.
For those of you interested in fragment screening (and especially for those who are thinking of trying it out), Ben Davis of Vernalis and Dan Erlanson of Carmot Therapeutics have written an excellent guide to avoiding the common experimental problems. (Remarkably, at least to me, Elsevier has made this an open-access article). In fact, I'd recommend the article to everyone doing early-stage compound discovery, whether they're into fragments or not, because many of the issues it raises are universal.
. . .Clearly the approach works, but that is not to say it is easy. This Digest focuses on an area we believe is still insufficiently appreciated: the myriad pitfalls and artifacts that can befall a fragment-screening program. For the sake of brevity, we have chosen to focus on the problems that can hinder or derail an experimental fragment screening campaign; a full discussion of issues around fragment library design, virtual fragment screening, and fragment evolution is best dealt with elsewhere. . .
. . .Today many techniques are used to identify fragments, each with its own strengths. Importantly, however, each of these techniques also has unique limitations. While expert users are generally aware of these and readily pick out the signal from the noise, newcomers are often deceived by spurious signals. This can lead to resources wasted following up on artifacts. In the worst cases—unfortunately all too common—researchers may never realize that they have been chasing false positives, and publish their results. At best, this is an embarrassment, with the researchers sometimes none the wiser. At worst it can cause other research groups to waste their own resources. . .
They go into detail on difficulties with compound identity and stability (on storage and under the assay conditions). You've got your aggregators, your photoactive compounds, your redox cyclers, your hydrolytically unstable ones, etc., all of which can lead to your useless assay results and your wasted time. Then there's a discussion of the limits of each of the popular biophysical screening techniques (and they all have some), emphasizing that if you're going to do fragment screening, that you'd better be prepared to do more than one of these in every campaign. (They are, of course, quite right about this - I've seen the same sorts of situations that they report, where different assays yield different hit sets, and it's up to you to sort out which of those are real).
Highly recommended, as I say. These guys really know what they're talking about, and the drug discovery literature would be greatly improved if everyone were as well-informed.
If you're looking for a sunny, optimistic take on AstraZeneca's move to Cambridge in the UK, the Telegraph has it for you right here. It's a rousing, bullish take on the whole Cambridge scene, but as John Carroll points out at FierceBiotech, it does leave out a few things about AZ. First, though, the froth:
George Freeman MP. . . the Coalition's adviser on life sciences, and Dr Andy Richards, boss of the Cambridge Angels, who has funded at least 20 of the city's start–ups, are among its champions.
"The big pharmaceutical model is dead, we have to help the big companies reinvent themselves," said Freeman. "Cambridge is leading the way on how [to] do this, on research and innovation."
The pair are convinced that the burgeoning "Silicon Fen" is rapidly becoming the global centre of pharma, biotech, and now IT too. Richards says the worlds of bioscience and IT are "crashing together" and revolutionising companies and consumers. Tapping his mobile phone, he says: "This isn't just a phone, it could hold all sorts of medical information, too, on your agility and reactions. This rapid development is what it's all about."
. . .St John's College set up another park where Autonomy started and more than 50 companies are now based. As we pass, on cue a red Ferrari zooms out. "We didn't see Ferraris when I was a boy," says Freeman. "Just old academics on their bikes."
He adds: "That's the great thing about tech, you can suddenly get it, make it commercial and you've got £200m. You don't have to spend four generations of a German family building Mittelstand."
I don't doubt that Cambridge is doing well. There are a lot of very good people in the area, and some very good ideas and companies. But I do doubt that Cambridge is becoming the global hub of pharma, biotech, and IT all at the same time. And that "crashing together" stuff is the kind of vague rah-rah that politicians and developers can spew out on cue. It sounds very exciting until you start asking for details. And it's not like they haven't heard that sort of thing before in Britain. Doesn't anyone remember the "white heat" of the new technological revolution of the 1960s?
But the future of Cambridge and the future of AstraZeneca may be two different things. Specifically, Pascal Soriot of AZ is quoted in the Telegraph piece as saying that "We've lost some of our scientific confidence," and that the company is hoping to get it back by moving to the area. Let's take a little time to think about that statement, because the closer you look at it, the stranger it is. It assumes that (A) there is such a thing as "scientific confidence", and (B) that it can be said to apply to an entire company, and (C) that a loss of it is what ails AstraZeneca, and (D) that one can retrieve it by moving the whole R&D operation to a hot location.
Now, assumption (A) seems to me to be the most tenable of the bunch. I've written about that very topic here. It seems clear to me that people who make big discoveries have to be willing to take risks, to look like fools if they're wrong, and to plunge ahead through their own doubts and those of others. That takes confidence, sometimes so much that it rubs other people the wrong way.
But do these traits apply to entire organizations? That's assumption (B), and there things get fuzzy. There do seem to be differences in how much risk various drug discovery shops are willing to take on, but changing a company's culture has been the subject of so many, many management books that it's clearly not something that anyone knows how to do well. The situation is complicated by the disconnects between the public statements of higher executives about the spirits and cultures of their companies, versus the evidence on the ground. In fact, the more time the higher-ups spend talking about how incredibly entrepreneurial and focused everyone at the place is, the more you should worry. If everyone's really busy discovering things, you don't have time to wave t