About this Author
College chemistry, 1983
The 2002 Model
After 10 years of blogging. . .
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
April 14, 2014
This will be a long one. I'm going to take another look at the Science paper that stirred up so much comment here on Friday. In that post, my first objection (but certainly not my only one) was the chemical structures shown in the paper's Figure 2. A number of them are basically impossible, and I just could not imagine how this got through any sort of refereeing process. There is, for example, a cyclohexadien-one structure, shown at left, and that one just doesn't exist as such - it's phenol, and those equilibrium arrows, though very imbalanced, are still not drawn to scale.
Well, that problem is solved by those structures being intended as fragments, substructures of other molecules. But I'm still positive that no organic chemist was involved in putting that figure together, or in reviewing it, because the reason that I was confused (and many other chemists were as well) is that no one who knows organic chemistry draws substructures like this. What you want to do is put dashed bonds in there, or R groups, as shown. That does two things: it shows that you're talking about a whole class of compounds, not just the structure shown, and it also shows where things are substituted. Now, on that cyclohexadienone, there's not much doubt where it's substituted, once you realize that someone actually intended it to be a fragment. It can't exist unless that carbon is tied up, either with two R groups (as shown), or with an exo-alkene, in which case you have a class of compounds called quinone methides. We'll return to those in a bit, but first, another word about substructures and R groups.
Figure 2 also has many structures in it where the fragment structure, as drawn, is a perfectly reasonable molecule (unlike the example above). Tetrahydrofuran and imidazole appear, and there's certainly nothing wrong with either of those. But if you're going to refer to those as common fragments, leading to common effects, you have to specify where they're substituted, because that can make a world of difference. If you still want to say that they can be substituted at different points, then you can draw a THF, for example, with a "floating" R group as shown at left. That's OK, and anyone who knows organic chemistry will understand what you mean by it. If you just draw THF, though, then an organic chemist will understand that to mean just plain old THF, and thus the misunderstanding.
If the problems with this paper ended at the level of structure drawing, which many people will no doubt see as just a minor aesthetic point (although it is irritating - update: on Twitter, I just saw that someone spotted "dihydrophyranone" on this figure, which someone figured was close enough to "dihydropyranone", I guess, and anyway, it's just chemistry), then I'd be apologizing right now. But they don't end there. It struck me when I first saw this work that sloppiness in organic chemistry might be symptomatic of deeper trouble, and I think that's the case. The problems just keep on coming. Let's start with those THF and imidazole rings. They're in Figure 2 because they're supposed to be substructures that lead to some consistent pathway activity in the paper's huge (and impressive) yeast screening effort. But what we're talking about is a pharmacophore, to use a term from medicinal chemistry, and just "imidazole" by itself is too small a structure, from a library of 3200 compounds, to be a likely pharmacophore. Particularly when you're not even specifying where it's substituted and how. There are all kinds of imidazoles out there, and they do all kinds of things.
So just how many imidazoles are in the library, and how many caused this particular signature? I think I've found them all. Shown at left are the four imidazoles (and there are only four) that exhibit the activity shown in Figure 2 (ergosterol depletion / effects on membrane). Note that all four of them are known antifungals - which makes sense, given that the compounds were chosen for their ability to inhibit the growth of yeast, and topical antifungals will indeed do that for you. And that phenotype is exactly what you'd expect from miconazole, et al., because that's their known mechanism of action: they mess up the synthesis of ergosterol, which is an essential part of the fungal cell membrane. It would be quite worrisome if these compounds didn't show up under that heading. (Note that miconazole is on the list twice).
But note that there are nine other imidazoles that don't have that same response signature at all - and I didn't even count the benzimidazoles, and there are many, although from that structure in Figure 2, who's to say that they shouldn't be included? What I'm saying here is that imidazole by itself is not enough. A majority of the imidazoles in this screen actually don't get binned this way. You shouldn't look at a compound's structure, see that it has an imidazole, and then decide by looking at Figure 2 that it's therefore probably going to deplete ergosterol and lead to membrane effects. (Keep in mind that those membrane effects probably aren't going to show up in mammalian cells, anyway, since we don't use ergosterol that way).
There are other imidazole-containing antifungals on the list that are not marked down for "ergosterol depletion / effects on membrane". Ketoconazole is SGTC_217 and 1066, and one of those runs gets this designation, while the other one gets signature 118. Both bifonazole and sertaconazole also inhibit the production of ergosterol - although, to be fair, bifonazole does it by a different mechanism. It gets annotated as Response Signature 19, one of the minor ones, while sertaconazole gets marked down for "plasma membrane distress". That's OK, though, because it's known to have a direct effect on fungal membranes separate from its ergosterol-depleting one, so it's believable that it ends up in a different category. But there are plenty of other antifungals on this list, some containing imidazoles and some containing triazoles, whose mechanism of action is also known to be ergosterol depletion. Fluconazole, for example, is SGTC_227, 1787 and 1788, and that's how it works. But its signature is listed as "Iron homeostasis" once and "azole and statin" twice. Itraconazole is SGTC_1076, and it's also annotated as Response Signature 19. Voriconazole is SGTC_1084, and it's down as "azole and statin". Climbazole is SGTC_2777, and it's marked as "iron homeostasis" as well. This scattering of known drugs between different categories is possibly an indicator of this screen's ability to differentiate them, or possibly an indicator of its inherent limitations.
Now we get to another big problem, the imidazolium at the bottom of Figure 2. It is, as I said on Friday, completely nuts to assign a protonated imidazole to a different category than a nonprotonated one. Note that several of the imidazole-containing compounds mentioned above are already protonated salts - they, in fact, fit the imidazolium structure drawn, rather than the imidazole one that they're assigned to. This mistake alone makes Figure 2 very problematic indeed. If the paper was, in fact, talking about protonated imidazoles (which, again, is what the authors have drawn) it would be enough to immediately call into question the whole thing, because a protonated imidazole is the same as a regular imidazole when you put it into a buffered system. In fact, if you go through the list, you find that what they're actually talking about are N-alkylimidazoliums, so the structure at the bottom of Figure 2 is wrong, and misleading. There are two compounds on the list with this signature, in case you were wondering, but the annotation may well be accurate, because some long-chain alkylimidazolium compounds (such as ionic liquid components) are already known to cause mitochondrial depolarization.
But there are several other alkylimidazolium compounds in the set (which is a bit odd, since they're not exactly drug-like). And they're not assigned to the mitochondrial distress phenotype, as Figure 2 would have you think. SGTC_1247, 179, 193, 1991, 327, and 547 all have this moiety, and they scatter between several other categories. Once again, a majority of compounds with the Figure 2 substructure don't actually map to the phenotype shown (while plenty of other structural types do). What use, exactly, is Figure 2 supposed to be?
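The counting exercise I've been doing by hand here is the kind of sanity check that's easy to automate. A minimal sketch with a hypothetical annotation table (made-up compound IDs and flags, not the paper's actual data): for compounds carrying a given substructure, ask what fraction actually map to the claimed signature.

```python
# Hypothetical screen annotations: (compound_id, has_substructure, signature)
compounds = [
    ("SGTC_0001", True,  "mitochondrial distress"),
    ("SGTC_0002", True,  "iron homeostasis"),
    ("SGTC_0003", True,  "azole and statin"),
    ("SGTC_0004", False, "mitochondrial distress"),
    ("SGTC_0005", True,  "plasma membrane distress"),
]

def signature_enrichment(rows, signature):
    """Fraction of substructure-containing compounds that carry the
    claimed signature - the number a figure like Figure 2 implicitly
    asserts is high."""
    flagged = [r for r in rows if r[1]]
    hits = [r for r in flagged if r[2] == signature]
    return len(hits) / len(flagged) if flagged else 0.0

# Only 1 of the 4 flagged compounds has the signature
print(signature_enrichment(compounds, "mitochondrial distress"))  # 0.25
```

With numbers like these (and the real ones for the alkylimidazoliums look similar), the substructure-to-phenotype mapping is carrying far less information than the figure suggests.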
Let's turn to some other structures in it. The impossible/implausible ones, as mentioned above, turn out to be that way because they're supposed to have substituents on them. But look around - adamantane is on there. To put it as kindly as possible, adamantane itself is not much of a pharmacophore, having nothing going for it but an odd size and shape for grease. Tetrahydrofuran (THF) is on there, too, and similar objections apply. When attempts have been made to rank the sorts of functional groups that are likely to interact with protein binding sites, ethers always come out poorly. THF by itself is not some sort of key structural unit; highlighting it as one here is, for a medicinal chemist, distinctly weird.
What's also weird is that when I search for THF-containing compounds that show this activity signature, I can't find much. The only things with a THF ring in them seem to be SGTC_2563 (the complex natural product tomatine) and SGTC_3239, and neither one of them is marked with the signature shown. There are some embedded THF rings in the other structural fragments shown (the succinimide-derived Diels-Alder ones), but no other THFs - and as mentioned, it's truly unlikely that the ether is the key thing about these compounds, anyway. If anyone finds another THF compound annotated for tubulin folding, I'll correct this post immediately, but for now, I can't seem to track one down, even though Table S4 says that there are 65 of them. Again, what exactly is Figure 2 supposed to be telling anyone?
Now we come to some even larger concerns. The supplementary material for the paper says that 95% of the compounds on the list are "drug-like" and were filtered by the commercial suppliers to eliminate reactive compounds. They do caution that different people have different cutoffs for this sort of thing, and boy, do they ever. There are many, many compounds in this collection that I would not have bothered putting into a cell assay, for fear of hitting too many things and generating uninterpretable data. Quinone methides are a good example - as mentioned before, they're in this set. Rhodanines and similar scaffolds are well represented, and are well known to hit all over the place. Some of these things are tested at hundreds of micromolar.
I recognize that one aim of a study like this is to stress the cells by any means necessary and see what happens, but even with that in mind, I think fewer nasty compounds could have been used, and might have given cleaner data. The curves seen in the supplementary data are often, well, ugly. See the comments section from the Friday post on that, but I would be wary of interpreting many of them myself.
There's another problem with these compounds, which might very well have also led to the nastiness of the assay curves. As mentioned on Friday, how can anyone expect many of these compounds to actually be soluble at the levels shown? I've shown a selection of them here; I could go on. I just don't see any way that these compounds can be realistically assayed at these levels. Visual inspection of the wells would surely show cloudy gunk all over the place. Again, how are such assays to be interpreted?
And one final point, although it's a big one. Compound purity. Anyone who's ever ordered three thousand compounds from commercial and public collections will know, will be absolutely certain that they will not all be what they say on the label. There will be many colors and consistencies, and LC/MS checks will show many peaks for some of these. There's no way around it; that's how it is when you buy compounds. I can find no evidence in the paper or its supplementary files that any compound purity assays were undertaken at any point. This is not just bad procedure; this is something that would have caused me to reject the paper all by itself had I refereed it. This is yet another sign that no one who's used to dealing with medicinal chemistry worked on this project. No one with any experience would just bung in three thousand compounds like this and report the results as if they're all real. The hits in an assay like this, by the way, are likely to be enriched in crap, making this more of an issue than ever.
Damn it, I hate to be so hard on so many people who did so much work. But wasn't there a chemist anywhere in the room at any point?
+ TrackBacks (0) | Category: Biological News | Chemical Biology | Chemical News | The Scientific Literature
March 28, 2014
Huntington's is a terrible disease. It's the perfect example of how genomics can only take you so far. We've known since 1993 what the gene is that's mutated in the disease, and we know the protein that it codes for (Huntingtin). We even know what seems to be wrong with the protein - it has a repeating chain of glutamines on one end. If your tail of glutamines is less than about 35 repeats, then you're not going to get the disease. If you have 36 to 39 repeats, you are in trouble, and may very well come down with the less severe end of Huntington's. If there are 40 or more, doubt is tragically removed.
So we can tell, with great precision, if someone is going to come down with Huntington's, but we can't do a damn thing about it. That's because despite a great deal of work, we don't really understand the molecular mechanism at work. This mutated gene codes for this defective protein, but we don't know what it is about that protein that causes particular regions of the brain to deteriorate. No one knows what all of Huntingtin's functions are, and not for lack of trying, and multiple attempts to map out its interactions (and determine how they're altered by a too-long N-terminal glutamine tail) have not given a definite answer.
But maybe, as of this week, that's changed. Solomon Snyder's group at Johns Hopkins has a paper out in Nature that suggests an actual mechanism. They believe that mutant Huntingtin binds (inappropriately) a transcription factor called "specificity protein 1", which is known to be a major player in neurons. Among other things, it's responsible for initiating transcription of the gene for an enzyme called cystathionine γ-lyase. That, in turn, is responsible for the last step in cysteine biosynthesis, and put together, all this suggests a brain-specific depletion of cysteine. (Update: this could have numerous downstream consequences - this is the pathway that produces hydrogen sulfide, which the Snyder group has shown is an important neurotransmitter (one of several they've discovered), and it's also involved in synthesizing glutathione. Cysteine itself is, of course, often a crucial amino acid in many protein structures as well.)
Snyder is proposing this as the actual mechanism of Huntington's, and they have shown, in human tissue culture and in mouse models of the disease, that supplementation with extra cysteine can stop or reverse the cellular signs of the disease. This is a very plausible theory (it seems to me), and the paper makes a very strong case for it. It should lead to immediate consequences in the clinic, and in the labs researching possible therapies for the disease. And one hopes that it will lead to immediate consequences for Huntington's patients themselves. If I knew someone with the Huntingtin mutation, I believe that I would tell them to waste no time taking cysteine supplements, in the hopes that some of it will reach the brain.
+ TrackBacks (0) | Category: Biological News | The Central Nervous System
March 27, 2014
Here's another target validation initiative, with GSK, the EMBL, and the Sanger Institute joining forces. It's the Centre for Therapeutic Target Validation (CTTV):
CTTV scientists will combine their expertise to explore and interpret large volumes of data from genomics, proteomics, chemistry and disease biology. The new approach will complement existing methods of target validation, including analysis of published research on known biological processes, preclinical animal modelling and studying disease epidemiology. . .
This new collaboration draws on the diverse, specialised skills from scientific institutes and the pharmaceutical industry. Scientists from the Wellcome Trust Sanger Institute will contribute their unique understanding of the role of genetics in health and disease and EMBL-EBI, a global leader in the analysis and dissemination of biological data, will provide bioinformatics-led insights on the data and use its capabilities to integrate huge streams of different varieties of experimental data. GSK will contribute expertise in disease biology, translational medicine and drug discovery.
That's about as much detail as one could expect for now. It's hard to tell what sorts of targets they'll be working on, and by "what sorts" I mean what disease areas, what stage of knowledge, what provenance, and everything else. But the press release goes on to say that the information gathered by this effort will be open to the rest of the scientific community, which I applaud, and that should give us a chance to look under the hood a bit.
It's hard for me to say anything bad about such an effort, other than wishing it done on a larger scale. I was about to say "other than wishing it ten times larger", but I think I'd rather have nine other independent efforts set up than making this one huge, for several reasons. Quis validet ipsos validares, if that's a Latin verb and I haven't mangled it: Who will validate the validators? There's enough trickiness and uncertainty in this stuff for plenty more people to join in.
+ TrackBacks (0) | Category: Biological News | Drug Assays
March 24, 2014
Some of you may remember the "Google Flu" effort, where the company was going to try to track outbreaks of influenza in the US by mining Google queries. There was never much clarification about what terms, exactly, they were going to flag as being indicative of someone coming down with the flu, but the hype (or hope) at the time was pretty strong:
Because the relative frequency of certain queries is highly correlated with the percentage of physician visits in which a patient presents with influenza-like symptoms, we can accurately estimate the current level of weekly influenza activity in each region of the United States, with a reporting lag of about one day. . .
So how'd that work out? Not so well. Despite a 2011 paper that seemed to suggest things were going well, the 2013 epidemic wrong-footed the Google Flu Trends (GFT) algorithms pretty thoroughly.
This article in Science finds that the real-world predictive power has been pretty unimpressive. And the reasons behind this failure are not hard to understand, nor were they hard to predict. Anyone who's ever worked with clinical trial data will see this one coming:
The initial version of GFT was a particularly problematic marriage of big and small data. Essentially, the methodology was to find the best matches among 50 million search terms to fit 1152 data points. The odds of finding search terms that match the propensity of the flu but are structurally unrelated, and so do not predict the future, were quite high. GFT developers, in fact, report weeding out seasonal search terms unrelated to the flu but strongly correlated to the CDC data, such as those regarding high school basketball. This should have been a warning that the big data were overfitting the small number of cases—a standard concern in data analysis. This ad hoc method of throwing out peculiar search terms failed when GFT completely missed the nonseasonal 2009 influenza A–H1N1 pandemic.
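The overfitting failure described in that quote is easy to reproduce in miniature. Here's a hedged sketch (purely synthetic numbers, nothing to do with the actual GFT data): generate a short "flu" series and a large pool of random candidate "search terms", all pure noise, and the best-correlated candidate still looks impressive.

```python
import random

random.seed(0)
n_weeks = 50        # short target series (GFT fit ~1152 data points)
n_terms = 10_000    # many candidate predictors (GFT searched ~50 million terms)

# The "flu" signal - here just noise, since any series will do for the demo
flu = [random.gauss(0, 1) for _ in range(n_weeks)]

def corr(x, y):
    """Pearson correlation coefficient, computed by hand."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

# Every candidate "term" is independent noise, yet the best one
# correlates strongly with the target purely by chance
best = max(
    corr(flu, [random.gauss(0, 1) for _ in range(n_weeks)])
    for _ in range(n_terms)
)
print(f"best spurious correlation: {best:.2f}")
```

With 10,000 random candidates fit to 50 points, the winner typically correlates well above 0.5 despite being structurally unrelated to the target - exactly the "high school basketball" problem, at six orders of magnitude smaller scale than GFT's search.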
The Science authors have a larger point to make as well:
“Big data hubris” is the often implicit assumption that big data are a substitute for, rather than a supplement to, traditional data collection and analysis. Elsewhere, we have asserted that there are enormous scientific possibilities in big data. However, quantity of data does not mean that one can ignore foundational issues of measurement and construct validity and reliability and dependencies among data. The core challenge is that most big data that have received popular attention are not the output of instruments designed to produce valid and reliable data amenable for scientific analysis.
The quality of the data matters very, very, much, and quantity is no substitute. You can make a very large and complex structure out of toothpicks and scraps of wood, because those units are well-defined and solid. You cannot do the same with a pile of cotton balls and dryer lint, not even if you have an entire warehouse full of the stuff. If the individual data points are squishy, adding more of them will not fix your analysis problem; it will make it worse.
Since 2011, GFT has missed (almost invariably on the high side) for 108 out of 111 weeks. As the authors show, even low-tech extrapolation from three-week-lagging CDC data would have done a better job. But then, the CDC data are a lot closer to being real numbers. Something to think about next time someone's trying to sell you on a Big Data project. Only trust the big data when the little data are trustworthy in turn.
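That "low-tech extrapolation" baseline is about as simple as forecasting gets: for each week, just report the most recent CDC figure available, which arrives three weeks late. A sketch with made-up weekly numbers (not actual CDC ILI data):

```python
def lagged_persistence_forecast(cdc_series, lag=3):
    """Predict each week's flu activity as the last CDC value
    available at forecast time, given a `lag`-week reporting delay."""
    return [cdc_series[t - lag] for t in range(lag, len(cdc_series))]

# Hypothetical weekly influenza-like-illness percentages, for illustration
cdc = [1.2, 1.5, 2.1, 2.8, 3.0, 2.6, 2.0, 1.4]
print(lagged_persistence_forecast(cdc))
# each forecast simply echoes the value from three weeks earlier
```

A baseline this dumb having beaten the algorithm is the real indictment here.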
Update: a glass-half-full response in the comments.
+ TrackBacks (0) | Category: Biological News | Clinical Trials | Infectious Diseases
March 20, 2014
This time last year I mentioned a particularly disturbing-looking compound, sold commercially as a so-called "selective inhibitor" of two deubiquitinase enzymes. Now, I have a fairly open mind about chemical structures, but that thing is horrible, and if it's really selective for just those two proteins, then I'm off to truck-driving school just like Mom always wanted.
Here's an enlightening look through the literature at this whole class of compound, which has appeared again and again. The trail seems to go back to this 2001 paper in Biochemistry. By 2003, you see similar motifs showing up as putative anticancer agents in cell assays, and in 2006 the scaffold above makes its appearance in all its terrible glory.
The problem is, as Jonathan Baell points out in that HTSpains.com post, that this series has apparently never really had a proper look at its SAR, or at its selectivity. It wanders through a series of publications full of on-again off-again cellular readouts, with a few tenuous conclusions drawn about its structure - and those are discarded or forgotten by the time the next paper comes around. As Baell puts it:
The dispiriting thing is that with or without critical analysis, this compound is almost certainly likely to end up with vendors as a “useful tool”, as they all do. Further, there will be dozens if not hundreds of papers out there where entirely analogous critical analyses of paper trails are possible.
The bottom line: people still don’t realize how easy it is to get a biological readout. The more subversive a compound, the more likely this is. True tools and most interesting compounds usually require a lot more medicinal chemistry and are often left behind or remain undiscovered.
Amen to that. There is way too much of this sort of thing in the med-chem literature already. I'm a big proponent of phenotypic screening, but setting up a good one is harder than setting up a good HTS, and working up the data from one is much harder than working up the data from an in vitro assay. The crazier or more reactive your "hit" seems to be, the more suspicious you should be.
The usual reply to that objection is "Tool compound!" But the standards for a tool compound, one used to investigate new biology and cellular pathways, are higher than usual. How are you going to unravel a biochemical puzzle if you're hitting nine different things, eight of which you're totally unaware of? Or skewing your assay readouts by some other effect entirely? This sort of thing happens all the time.
I can't help but think about such things when I read about a project like this one, where IBM's Watson software is going to be used to look at sequences from glioblastoma patients. That's going to be tough, but I think it's worth a look, and the Watson program seems to be just the correlation-searcher for the job. But the first thing they did was feed in piles of biochemical pathway data from the literature, and the problem is, a not insignificant proportion of that data is wrong. Statements like these are worrisome:
Over time, Watson will develop its own sense of what sources it looks at are consistently reliable. . .if the team decides to, it can start adding the full text of articles and branch out to other information sources. Between the known pathways and the scientific literature, however, IBM seems to think that Watson has a good grip on what typically goes on inside cells.
Maybe Watson can tell the rest of us, then. Because I don't know of anyone actually doing cell biology who feels that way, not if they're being honest with themselves. I wish the New York Genome Center and IBM luck in this, and I still think it's a worthwhile thing to at least try. But my guess is that it's going to be a humbling experience. Even if all the literature were correct in every detail, I think it would be one. And the literature is not correct in every detail. It has compounds like that one at the top of the entry in it, and people seem to think that they can draw conclusions from them.
+ TrackBacks (0) | Category: Biological News | Cancer | Chemical Biology | Drug Assays | The Scientific Literature
March 12, 2014
OK, now that recent stem cell report is really in trouble. One of the main authors, Teruhiko Wakayama, is saying that the papers should be withdrawn. Here's NHK:
Wakayama told NHK he is no longer sure the STAP cells were actually created. He was in charge of important experiments to check the pluripotency of the cells.
He said a change in a specific gene is key proof that the cells are created. He said team members were told before they released the papers that the gene had changed.
Last week, RIKEN disclosed detailed procedures for making STAP cells after outside experts failed to replicate the results outlined in the Nature article.
Wakayama pointed out that in the newly released procedures, RIKEN says this change didn't take place.
He said he reviewed test data submitted to the team's internal meetings and found multiple serious problems, such as questionable images.
These are the sorts of things that really should be ironed out before you make a gigantic scientific splash, you'd think. But I can understand how these things happen, too - a big important result, a groundbreaking discovery, and you think that someone else is probably bound to find the same thing within a month. Within a week. So you'd better publish as fast as you can, unless you feel like being a footnote when the history gets written and the prizes get handed out. There are a few details that need to be filled in? That's OK - just i-dotting and t-crossing, that stuff will work itself out. The important thing is to get the discovery out to the world.
But that stuff comes back to bite you, big-time. Andrew Wiles was able to fix his proof of Fermat's Last Theorem post-announcement, but (a) that problem was non-obvious (he didn't know it was there at first), and (b) biology ain't math. Cellular systems are flaky, fluky, and dependent on a lot of variables, some of which you might not even be aware of. An amazing result in an area as tricky as stem cell generation needs a lot of shaking down, and it seems that this one has gotten it. Well, it's getting it now.
+ TrackBacks (0) | Category: Biological News
February 27, 2014
Ah, the good old central nervous system, and its good old receptors. Especially the good old ion channels - there's an area with enough tricky details built into it to keep us all busy for another few decades. Here's a good illustration, in a new paper from Nature Chemical Biology. The authors, from Berkeley, are looking at the ionotropic glutamate receptors, an important (and brainbendingly complex) group. These are the NMDA, AMPA, and kainate receptors, if you name them by their prototype ligands, and they're assembled as tetramers from mix-and-match subunit proteins, providing a variety of species even before you start talking about splice variants and the like. This paper used a couple of the simpler kainate systems as a proving ground.
They're working with azobenzene-linked compounds that can be photoisomerized, and using that property as a switch. Engineering a Cys residue close to the binding pocket lets them swivel the compound in and out (as shown), and this gives them a chance to see how many of the four individual subunits need to be occupied, and what the states of the receptor are along the way. (The ligand does nothing when it's not tethered to the protein). The diagram shows the possible occupancy states, and the colored-in version shows what they found for receptor activation.
You apparently need two ligands just to get anything to happen (and this is consistent with previous work on these systems). Three ligands buys you more signaling, and the fourth peaks things out. Patch-clamp studies had already shown that these things are apparently capable of stepwise signaling, and this work nails that down ingeniously. Presumably this whole tetramer setup has been under selection to take advantage of that property, and you'd have to assume that the NMDA and AMPA receptors (extremely common ones, by the way) are behaving similarly. The diagram shows the whole matrix of what seems to be going on.
+ TrackBacks (0) | Category: Biological News
February 21, 2014
Update: the nomenclature of these enzymes is messy - see the comments.
Here's another activity-based proteomics result that I've been meaning to link to - in this one, the Cravatt group strengthens the case for carboxylesterase 3 as a potential target for metabolic disease. From what I can see, that enzyme was first identified back in about 2004, one of who-knows-how-many others that have similar mechanisms and can hydrolyze who-knows-how-many esters and ester-like substrates. Picking your way through all those things from first principles would be a nightmare - thus the activity-based approach, where you look for interesting phenotypes and work backwards.
In this case, they were measuring adipocyte behavior, specifically differentiation and lipid accumulation. A preliminary screen suggested that there were a lot of serine hydrolase enzymes active in these cells, and a screen with around 150 structurally diverse carbamates gave several showing phenotypic changes. The next step in the process is to figure out what particular enzymes are responsible, which can be done by fluorescence labeling (since the carbamates are making covalent bonds in the enzyme active sites). They found my old friend hormone-sensitive lipase, as well they should, but there was another enzyme that wasn't so easy to identify.
One particular carbamate, the unlovely but useful WWL113, was reasonably selective for the enzyme of interest, which turned out to be the abovementioned carboxylesterase 3 (Ces3). The urea analog (which should be inactive) did indeed show no cellular readouts, and the carbamate itself was checked for other activities (such as whether it was a PPAR ligand). These established a strong connection between the inhibitor, the enzyme, and the phenotypic effects.
With that in hand, they went on to find a nicer-looking compound with even better selectivity, WWL229. (I have to say, going back to my radio-geek days in the 1970s and early 1980s, that I can't see the letters "WWL" without hearing Dixieland jazz, but that's probably not the effect the authors are looking for). Using an alkyne derivative of this compound as a probe, it appeared to label only the esterase of interest across the entire adipocyte proteome. Interestingly, though, it appears that WWL113 was more active in vivo (for pharmacokinetic reasons, perhaps?).
And those in vivo studies in mice showed that Ces3 inhibition had a number of beneficial effects on tissue and blood markers of metabolic syndrome - glucose tolerance, lipid profiles, etc. Histologically, the most striking effect was the clearance of adipose deposits from the liver (a beneficial effect indeed, and one that a number of drug companies are interested in). This recapitulates genetic modification studies in rodents targeting this enzyme, and shows that pharmacological inhibition could do the job. And while I'm willing to bet that the authors would rather have discovered a completely new enzyme target, this is solid work all by itself.
+ TrackBacks (0) | Category: Biological News | Chemical Biology | Diabetes and Obesity
February 18, 2014
Oh, @#$!. That was my first comment when I saw this story. That extraordinary recent work on creating stem cells by subjecting normal cells to acid stress is being investigated:
The RIKEN centre in Kobe announced on Friday that it is looking into alleged irregularities in the work of biologist Haruko Obokata, who works at the institution. She shot to fame last month as the lead author on two papers published in Nature that demonstrated a simple way to reprogram mature mice cells into an embryonic state by simply applying stress, such as exposure to acid or physical pressure on cell membranes. The RIKEN investigation follows allegations on blog sites about the use of duplicated images in Obokata’s papers, and numerous failed attempts to replicate her results.
PubPeer gets the credit for bringing some of the problems into the light. There are some real problems with figures in the two papers, as well as earlier ones from the same authors. These might be explicable as simple mistakes, which is what the authors seem to be claiming, if it weren't for the fact that no one seems to be able to get the stem-cell results to reproduce. There are mitigating factors there, too - different cell lines, perhaps the lack of a truly detailed protocol from the original paper. But a paper should have enough details in it to be reproduced, shouldn't it?
Someone on Twitter was trying to tell me the other day that the whole reproducibility issue was being blown out of proportion. I don't think so. The one thing we seem to be able to reproduce is trouble.
Update: a list of the weirdest things (so far) about this whole business.
+ TrackBacks (0) | Category: Biological News | The Scientific Literature
February 14, 2014
Here's a nasty fight going on in molecular biology/bioinformatics. Lior Pachter of Berkeley describes some severe objections he has to published work from the lab of Manolis Kellis at MIT. (His two previous posts on these issues are here and here). I'm going to use a phrase that Pachter hears too often and say that I don't have the math to address those two earlier posts. But the latest one wraps things up in a form that everyone can understand. After describing what does look like a severe error in one of the Kellis group's conference presentations, an error that Pachter pointed out in a review of the work, he says that:
. . .(they) spun the bad news they had received as “resulting from combinatorial connectivity patterns prevalent in larger network structures.” They then added that “…this combinatorial clustering effect brings into question the current definition of network motif” and proposed that “additional statistics…might well be suited to identify larger meaningful networks.” This is a lot like someone claiming to discover a bacteria whose DNA is arsenic-based and upon being told by others that the “discovery” is incorrect – in fact, that very bacteria seeks out phosphorous – responding that this is “really helpful” and that it “raises lots of new interesting open questions” about how arsenate gets into cells. Chutzpah. When you discover your work is flawed, the correct response is to retract it.
I don’t think people read papers very carefully. . .
He goes on to say:
I have to admit that after the Grochow-Kellis paper I was a bit skeptical of Kellis’ work. Not because of the paper itself (everyone makes mistakes), but because of the way he responded to my review. So a year and a half ago, when Manolis Kellis published a paper in an area I care about and am involved in, I may have had a negative prior. The paper was Luke Ward and Manolis Kellis “Evidence for Abundant and Purifying Selection in Humans for Recently Acquired Regulatory Functions”, Science 337 (2012) . Having been involved with the ENCODE pilot, where I contributed to the multiple alignment sub-project, I was curious what comparative genomics insights the full-scale $130 million dollar project revealed. The press releases accompanying the Ward-Kellis paper (e.g. The Nature of Man, The Economist) were suggesting that Ward and Kellis had figured out what makes a human a human; my curiosity was understandably piqued.
But a closer look at the paper, Pachter says, especially a dig into the supplementary material (always a recommended move) shows that the conclusions of the paper were based on what he terms "blatant statistically invalid cherry picking". See, I told you this was a fight. He also accuses Kellis of several other totally unacceptable actions in his published work, the sorts of things that cannot be brushed off as differences in interpretations or methods. He's talking fraud. And he has a larger point about how something like this might persist in the computational biology field (emphasis added):
Manolis Kellis’ behavior is part of a systemic problem in computational biology. The cross-fertilization of ideas between mathematics, statistics, computer science and biology is both an opportunity and a danger. It is not hard to peddle incoherent math to biologists, many of whom are literally math phobic. For example, a number of responses I’ve received to the Feizi et al. blog post have started with comments such as
“I don’t have the expertise to judge the math, …”
Similarly, it isn’t hard to fool mathematicians into believing biological fables. Many mathematicians throughout the country were recently convinced by Jonathan Rothberg to donate samples of their DNA so that they might find out “what makes them a genius”. Such mathematicians, and their colleagues in computer science and statistics, take at face value statements such as “we have figured out what makes a human human”. In the midst of such confusion, it is easy for an enterprising “computational person” to take advantage of the situation, and Kellis has.
You can peddle incoherent math to medicinal chemists, too, if you feel the urge. We don't use much of it day-to-day, although we've internalized more than we tend to realize. But if someone really wants to sell me on some bogus graph theory or topology, they'll almost certainly be able to manage it. I'd at least give them the benefit of the doubt, because I don't have the expertise to call them on it. Were I so minded, I could probably sell them some pretty shaky organic chemistry and pharmacokinetics.
But I am not so minded. Science is large, and we have to be able to trust each other. I could sit down and get myself up to speed on topology (say), if I had to, but the effort required would probably be better spent doing something else. (I'm not ruling out doing math recreationally, just for work). None of us can simultaneously be experts across all our specialities. So if this really is a case of publishing junk because, hey, who'll catch on, right, then it really needs to be dealt with.
If Pachter is off base, though, then he's in for a rough ride of his own. Looking over his posts, my money's on him and not Kellis, but we'll all have a chance to find out. After this very public calling out, there's no other outcome.
+ TrackBacks (0) | Category: Biological News | In Silico | The Dark Side | The Scientific Literature
February 10, 2014
Here's a very interesting feature from Cell - an interactive timeline on the journal's 40th anniversary, highlighting some of the key papers it's published over the years. This installment takes us up into the early 1980s. When you see the 1979 paper that brings the news that tyrosine groups on proteins actually get phosphorylated post-translationally, the 1982 discovery of Ras as involved in human cancer cells, or another 1982 paper showing that telomeres have these weird repeating units on them, you realize how young the sciences of molecular and cell biology really are.
+ TrackBacks (0) | Category: Biological News | The Scientific Literature
February 7, 2014
Here's something for metabolic disease people to think about: there's a report adding to what we know about irisin, a hormone secreted from muscle tissue that causes some depots of white adipose tissue to become more like energy-burning brown fat. In the late 1990s, there were efforts all across the drug industry to find beta-3 adrenoceptor agonists to stimulate brown fat for weight loss and dyslipidemia. None of them ever made it through, and thus the arguments about whether they would actually perform as thought were never really settled. One of the points of contention was how much responsive brown adipose tissue adults had available, but I don't recall anyone suspecting that it could be induced. In recent years, though, it's become clear that a number of factors can bring on what's been called "beige fat".
Irisin seems to be released in response to exercise, and is just upstream of the important transcriptional regulator PGC-1a. In fact, release of irisin might be the key to a lot of the beneficial effects of exercise, which would be very much worth knowing. In this study, a stabilized version of it, given iv to rodents, had very strong effects on body weight and glucose tolerance, just the sort of thing a lot of people could use.
One of the very interesting features of this area, from a drug discovery standpoint, is that no one has identified the irisin receptor just yet. Look for headlines on that one pretty soon, though - you can bet that a lot of people are chasing it as we speak.
Update: are humans missing out on this, compared to mice and other species?
+ TrackBacks (0) | Category: Biological News | Diabetes and Obesity
February 5, 2014
You may remember a study that suggested that antioxidant supplements actually negated the effects of exercise in muscle tissue. (The reactive oxygen species generated are apparently being used by the cells as a signaling mechanism, one that you don't necessarily want to turn off). That was followed by another paper that showed that cells that should be undergoing apoptosis (programmed cell death) could be kept alive by antioxidant treatment. Some might read that and not realize what a bad idea that is - having cells that ignore apoptosis signals is believed to be a common feature in carcinogenesis, and it's not something that you want to promote lightly.
Here are two recent publications that back up these conclusions. The BBC reports on this paper from the Journal of Physiology. It looks like a well-run trial demonstrating that antioxidant therapy (Vitamin C and Vitamin E) does indeed keep muscles from showing adaptation to endurance training. The vitamin-supplemented group reached the same performance levels as the placebo group over the 11-week program, but on a cellular level, they did not show the (beneficial) changes in mitochondria, etc. The authors conclude:
Consequently, vitamin C and E supplementation hampered cellular adaptions in the exercised muscles, and although this was not translated to the performance tests applied in this study, we advocate caution when considering antioxidant supplementation combined with endurance exercise.
Then there's this report in The Scientist, covering this paper in Science Translational Medicine. The title says it all: "Antioxidants Accelerate Lung Cancer Progression in Mice". In this case, it looks like reactive oxygen species should normally be activating p53, but taking antioxidants disrupts this signaling and allows early-stage tumor cells (before their p53 mutates) to grow much more quickly.
So in short, James Watson appears to be right when he says that reactive oxygen species are your friends. This is all rather frustrating when you consider the nonstop advertising for antioxidant supplements and foods, especially for any role in preventing cancer. It looks more and more as if high levels of extra antioxidants can actually give people cancer, or at the very least, help along any cancerous cells that might arise on their own. Evidence for this has been piling up for years now from multiple sources, but if you wander through a grocery or drug store, you'd never have the faintest idea that there could be anything wrong with scarfing up all the antioxidants you possibly can.
The supplement industry pounces on far less compelling data to sell its products. But here are clear indications that a large part of their business is actually harmful, and nothing is heard except the distant sound of crickets. Or maybe those are cash registers. Even the wildly credulous Dr. Oz reversed course and did a program last year on the possibility that antioxidant supplements might be doing more harm than good, although he still seems to be pitching "good" ones versus "bad". Every other pronouncement from that show is immediately bannered all over the health food aisles - what happened to this one?
This shouldn't be taken as a recommendation to go out of the way to avoid taking in antioxidants from food. But going out of your way to add lots of extra Vitamin C, Vitamin E, N-acetylcysteine, etc., to your diet? More and more, that really looks like a bad idea.
Update: from the comments, here's a look at human mortality data, strongly suggesting no benefit whatsoever from antioxidant supplementation (and quite possibly harm from beta-carotene, Vitamin A, and Vitamin E).
+ TrackBacks (0) | Category: Biological News | Cancer
February 3, 2014
The advent of such techniques as CRISPR has people thinking again about gene therapy, and no wonder. This has always been a dream of molecular medicine - you could wipe all sorts of rare diseases off the board by going in and fixing their known genetic defects. Actually doing that, though, has been extremely difficult (and dangerous, since patients have died in the attempt).
But here's a report of embryonic gene modification in cynomolgus monkeys, and if it works in cynos, it's very likely indeed to work in humans. In vitro fertilization plus CRISPR/Cas9 - neither of these, for better or worse, is all that hard to do, and my guess is that we're very close to seeing someone try this - probably not in the US at first, but there are plenty of other jurisdictions. There's a somewhat disturbing angle, though: I don't see much cause (or humanly acceptable cause) for generating gene-knockout human beings, which is what this technique would most easily provide. And for fixing genetic defects, well, you'd have to know that the single-cell embryo actually has the defect, and unless both parents are homozygous, you're not going to be sure (can't sequence the only cell you have, can you?) So the next easiest thing is to add copies of some gene you find desirable, and that will take us quickly into uneasy territory.
A less disturbing route might be to see if the technique can be used to gene-edit the egg and sperm cells before fertilization. Then you've got the possibility of editing germ cell lines in vivo, which really would wipe these diseases out of humanity (except for random mutations), but that will be another one of those hold-your-breath steps, I'd think. It's only a short step from fixing what's wrong to enhancing what's already there - it all depends on where you slide the scale to define "wrong". More fast-twitch muscle fibers, maybe? Restore the ability to make your own vitamin C? Switch the kid's lipoproteins to ApoA1 Milano?
For a real look into the future, combine this with last week's startling report of the generation of stem cells by applying stress to normal tissue samples. This work seems quite solid, and there are apparently anecdotal reports (see the end of this transcript) of some of it being reproduced already. If so, we would appear to be vaulting into a new world of tissue engineering, or at least a new world of being able to find out what's really hard about tissue engineering. ("Just think - horrible, head-scratching experimental tangles that were previously beyond our reach can finally be. . .")
Now have a look at this news about a startup called Editas. They're not saying what techniques they're going to use (my guess is some proprietary variant of CRISPR). But whatever they have, they're going for the brass ring:
(Editas has) ambitious plans to create an entirely new class of drugs based on what it calls “gene editing.” The idea is similar, yet different, from gene therapy: Editas’ goal is to essentially target disorders caused by a singular genetic defect, and using a proprietary in-house technology, create a drug that can “edit” out the abnormality so that it becomes a normal, functional gene—potentially, in a single treatment. . .
. . .Editas, in theory, could use this system to create a drug that could cure any number of genetic diseases via a one-time fix, and be more flexible than gene therapy or other techniques used to cure a disease on the genetic level. But even so, the challenges, just like gene therapy, are significant. Editas has to figure out a way to safely and effectively deliver a gene-editing drug into the body, something Bitterman acknowledges is one of the big hills the company has to climb.
This is all very exciting stuff. But personally, I don't do gene editing, being an organic chemist and a small-molecule therapeutics guy. So what does all this progress mean for someone like me (or for the companies that employ people like me)? Well, for one thing, it is foretelling the eventual doom of what we can call the Genzyme model, treating rare metabolic disorders with few patients but high cost-per-patient. A lot of companies are targeting (or trying to target) that space these days, and no wonder. Their business model is still going to be safe for some years, but honestly, I'd have to think that eventually someone is going to get this gene-editing thing to work. You'd have to assume that it will be harder than it looks; most everything is harder than it looks. And regulatory agencies are not going to be at their speediest when it comes to setting up trials for this kind of thing. But a lot of people with a lot of intelligence, a lot of persistence, and an awful lot of money are going after this, and I have to think that someone is going to succeed. Gene editing, Moderna's mRNA work - we're going to rewrite the genome to suit ourselves, and sooner than later. The reward will be treatments that previous eras would have had to ascribe to divine intervention, a huge step forward in Francis Bacon's program of "the effecting of all things possible".
The result will also be a lot of Schumpeterian "creative destruction" as some existing business models dissolve. And that's fine - I think that business models should always be subject to that selection pressure. As a minor side benefit, these therapies might finally (but probably won't) shut up the legion of people who go on about how drug companies aren't interested in cures, just endlessly profitable treatments. It never seems to occur to them that cures are hard, nor that someone might actually come along with one.
+ TrackBacks (0) | Category: Biological News
January 28, 2014
Here's a look at some very interesting research on HIV (and a repurposed compound) that I was unable to comment on here. As for the first line of that post, well, I doubt it, but I like to think of myself as rich in spirit. Or something.
+ TrackBacks (0) | Category: Biological News | Infectious Diseases
January 14, 2014
Here's a good paper on the design of stapled peptides, with an emphasis on what's been learned about making them cell-penetrant. It's also a specific rebuttal to a paper from Genentech (the Okamoto one referenced below) detailing problems with earlier reported stapled peptides:
In order to maximize the potential for success in designing stapled peptides for basic research and therapeutic development, a series of important considerations must be kept in mind to avoid potential pitfalls. For example, Okamoto et al. recently reported in ACS Chemical Biology that a hydrocarbon-stapled BIM BH3 peptide (BIM SAHB) manifests neither improved binding activity nor cellular penetrance compared to an unmodified BIM BH3 peptide and thereby caution that peptide stapling does not necessarily enhance affinity or biological activity. These negative results underscore an important point about peptide stapling: insertion of any one staple at any one position into any one peptide to address any one target provides no guarantee of stapling success. In this particular case, it is also noteworthy that the Walter and Eliza Hall Institute (WEHI) and Genentech co-authors based their conclusions on a construct that we previously reported was weakened by design to accomplish a specialized NMR study of a transient ligand−protein interaction and was not used in cellular studies because of its relatively low α-helicity, weak binding activity, overall negative charge, and diminished cellular penetrance. Thus, the Okamoto et al. report provides an opportunity to reinforce key learnings regarding the design and application of stapled peptides, and the biochemical and biological activities of discrete BIM SAHB peptides.
You may be able to detect the sound of teeth gritting together in that paragraph. The authors (Loren Walensky of Dana-Farber, and colleagues from Dana-Farber, Albert Einstein, Chicago, and Yale), point out that the Genentech paper took a peptide that's about 21% helical, and used a staple modification that took it up to about 39% helical, which they say is not enough to guarantee anything. They also note that when you apply this technique, you're necessarily altering two amino acids at a minimum (to make them "stapleable"), as well as adding a new piece across the surface of the peptide helix, so these changes have to be taken into account when you compare binding profiles. Some binding partners may be unaffected, some may be enhanced, and some may be wiped out.
It's the Genentech team's report of poor cellular uptake that you can tell is the most irritating feature of their paper to these authors, and from the way they make their points, you can see why:
The authors then applied this BIM SAHBA (aa 145−164) construct in cellular studies and observed no biological activity, leading to the conclusion that “BimSAHB is not inherently cell-permeable”. However, before applying stapled peptides in cellular studies, it is very important to directly measure cellular uptake of fluorophore-labeled SAHBs by a series of approaches, including FACS analysis, confocal microscopy, and fluorescence scan of electrophoresed lysates from treated cells, as we previously reported. Indeed, we did not use the BIM SAHBA (aa 145−164) peptide in cellular studies, specifically because it has relatively low α-helicity, weakened binding activity, and overall negative charge (−2), all of which combine to make this particular BIM SAHB construct a poor candidate for probing cellular activity. As indicated in our 2008 Methods in Enzymology review, “anionic species may require sequence modification (e.g., point mutagenesis, sequence shift) to dispense with negative charge”, a strategy that emerged from our earliest studies in 2004 and 2007 to optimize the cellular penetrance of stapled BID BH3 and p53 peptides for cellular and in vivo analyses and also was applied in our 2010 study involving stapled peptides modeled after the MCL-1 BH3 domain. In our 2011 Current Protocols in Chemical Biology article, we emphasized that “based on our evaluation of many series of stapled peptides, we have observed that their propensity to be taken up by cells derives from a combination of factors, including charge, hydrophobicity, and α-helical structure, with negatively charged and less structured constructs typically requiring modification to achieve cell penetrance. . .
They go on to agree with the Genentech group that the peptide they studied has poor uptake into cells, but the tell-us-something-we-don't-know tone comes through pretty clearly, I'd say. The paper goes on to detail several other publications where these authors worked out the behavior of BIM BH3 stapled peptides, saying that "By assembling our published documentation of the explicit sequence compositions of BIM SAHBs and their distinct properties and scientific applications, as also summarized in Figure 1, we hope to resolve any confusion generated by the Okamoto et al. study".
They do note that the Genentech (Okamoto) paper did use one of their optimized peptides in a supplementary experiment, which shows that they were aware of the different possibilities. That one apparently showed no effects on the viability of mouse fibroblasts, but this new paper says that a closer look (at either their own studies or at the published literature) would have shown them that the cells were actually taking up the peptide, but were relatively resistant to its effects, which actually helps establish something of a therapeutic window.
This is a pretty sharp response, and it'll be interesting to see if the Genentech group has anything to add in their defense. Overall, the impression is that stapled peptides can indeed work, and do have potential as therapeutic agents (and are in the clinic being tested as such), but that they need careful study along the way to make sure of their properties, their pharmacokinetics, and their selectivity. Just as small molecules do, when you get down to it.
+ TrackBacks (0) | Category: Biological News | Cancer | Chemical Biology
January 13, 2014
Here's a paper from a few weeks back that I missed during the holidays: work from the Sinclair labs at Harvard showing a new connection between SIRT1 and aging, this time through a mechanism that no one had appreciated. I'll appreciate, in turn, that that opening sentence is likely to divide its readers into those who will read on and those who will see the words "SIRT1" or "Sinclair" and immediately seek their entertainment elsewhere. I feel for you, but this does look like an interesting paper, and it'll be worthwhile to see what comes of it.
Here's the Harvard press release, which is fairly detailed, in case you don't have access to Cell. The mechanism they're proposing is that as NAD+ levels decline with age, this affects SIRT1 function to the point that it no longer constrains HIF-1. Higher levels of HIF-1, in turn, disrupt pathways between the nucleus and the mitochondria, leading to lower levels of mitochondria-derived proteins, impaired energy generation, and cellular signs of aging.
Very interestingly, these effects were reversed (on a cellular/biomarker level) by one-week treatment of aging mice with NMN (nicotinamide mononucleotide), a precursor to NAD. That's kind of a brute-force approach to the problem, but a team from Washington U. recently showed extremely similar effects in aging diabetic rodents supplemented with NMN, done for exactly the same NAD-deficiency reasons. I would guess that the NMN is flying off the shelves down at the supplement stores, although personally I'll wait for some more in vivo work before I start taking it with my orange juice in the mornings.
Now, whatever you think of sirtuins (and of Sinclair's work with them), this work is definitely not crazy talk. Mitochondrial function has long been a good place to look for cellular-level aging, and HIF-1 is an interesting connection as well. As many readers will know, that acronym stands for "hypoxia inducible factor" - the protein was originally seen to be upregulated when cells were put under low-oxygen stress. It's a key regulatory switch for a number of metabolic pathways under those conditions, but there's no obvious reason for it to be getting more active just because you're getting older. Some readers may have encountered it as an oncology target - there are a number of tumors that show abnormal HIF activity. That makes sense, on two levels - the interiors of solid tumors are notoriously oxygen-poor, so that would at least be understandable, but switching on HIF under normal conditions is also bad news. It promotes glycolysis as a metabolic pathway, and stimulates growth factors for angiogenesis. Both of those are fine responses for a normal cell that needs more oxygen, but they're also the behavior of a cancer cell showing unrestrained growth. (And those cells have their tradeoffs, too, such as a possible switch between metastasis and angiogenesis, which might also have a role for HIF).
There's long been speculation about a tradeoff between aging and cellular prevention of carcinogenicity. In this case, though, we might have a mechanism where our interests are on the same side: overactive HIF (under non-hypoxic conditions) might be a feature of both cancer cells and "normally" aging ones. I put that word in quotes because (as an arrogant upstart human) I'm not yet prepared to grant that the processes of aging that we undergo are the ones that we have to undergo. My guess is that there's been very little selection pressure on lifespan, and that what we've been dealt is the usual evolutionary hand of cards: it's a system that works well enough to perpetuate the species, and beyond that, who cares?
Well, we care. Biochemistry is a wonderful, heartbreakingly intricate system whose details we've nowhere near unraveled, and we often mess it up when we try to do anything to it, anyway. But part of what makes us human is the desire (and now the ability) to mess around with things like this when we think we can benefit. Not looking at the mechanisms of aging seems to me like not looking at the mechanisms of, say, diabetes, or like letting yourself die of a bacterial infection when you could take an antibiotic. Just how arrogant that attitude is, I'm not sure yet. I think we'll eventually get the chance to find out. All this recent NAD work suggests that we might get that chance sooner than later. Me, I'm 51. Speed the plow.
+ TrackBacks (0) | Category: Aging and Lifespan | Biological News | Diabetes and Obesity
December 4, 2013
Here's some work that gets right to the heart of modern drug discovery: how are we supposed to deal with the variety of patients we're trying to treat? And the variety in the diseases themselves? And how does that correlate with our models of disease?
This new paper, a collaboration between eight institutions in the US and Europe, is itself a look at two other recent large efforts. One of these, the Cancer Genome Project, tested 138 anticancer drugs against 727 cell lines. Its authors said at the time (last year) that "By linking drug activity to the functional complexity of cancer genomes, systematic pharmacogenomic profiling in cancer cell lines provides a powerful biomarker discovery platform to guide rational cancer therapeutic strategies". The other study, the Cancer Cell Line Encyclopedia, tested 24 drugs against 1,036 cell lines. That one appeared at about the same time, and its authors said ". . .our results indicate that large, annotated cell-line collections may help to enable preclinical stratification schemata for anticancer agents. The generation of genetic predictions of drug response in the preclinical setting and their incorporation into cancer clinical trial design could speed the emergence of ‘personalized’ therapeutic regimens."
Well, will they? As the latest paper shows, the two earlier efforts overlap to the extent of 15 drugs, 471 cell lines, 64 genes and the expression of 12,153 genes. How well do they match up? Unfortunately, the answer is "Not too well at all". The discrepancies really come out in the drug sensitivity data. The authors tried controlling for all the variables they could think of - cell line origins, dosing protocols, assay readout technologies, methods of estimating IC50s (and/or AUCs), specific mechanistic pathways, and so on. Nothing really helped. The two studies were internally consistent, but their cross-correlation was relentlessly poor.
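The cross-correlation check at the heart of this comes down to rank-correlating the IC50s each study reports for the same drug across shared cell lines. A minimal pure-Python sketch, with invented numbers (the real comparison spanned 15 drugs and 471 cell lines):

```python
# Toy illustration of the cross-study comparison: rank-correlate IC50s
# measured for the same cell lines by two studies. All values below are
# invented for illustration only.

def ranks(values):
    """Assign 1-based ranks (ties broken by order, fine for a sketch)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation via the classic d^2 formula (no ties)."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# IC50s (uM) for one drug across five shared cell lines, per study:
study_a = [0.02, 0.5, 1.1, 8.0, 30.0]
study_b = [5.0, 0.04, 12.0, 0.9, 2.0]   # same lines, different lab

rho = spearman(study_a, study_b)
print(f"Spearman rho = {rho:.2f}")  # poor agreement despite identical cell lines
```

A rho near 1 would mean the two labs at least agree on which lines are sensitive and which are resistant; the published comparisons often didn't get anywhere close.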
It gets worse. The authors tried the same sort of analysis on several drugs and cell lines themselves, and couldn't match their own data to either of the published studies. Their take on the situation:
Our analysis of these three large-scale pharmacogenomic studies points to a fundamental problem in assessment of pharmacological drug response. Although gene expression analysis has long been seen as a source of ‘noisy’ data, extensive work has led to standardized approaches to data collection and analysis and the development of robust platforms for measuring expression levels. This standardization has led to substantially higher quality, more reproducible expression data sets, and this is evident in the CCLE and CGP data where we found excellent correlation between expression profiles in cell lines profiled in both studies.
The poor correlation between drug response phenotypes is troubling and may represent a lack of standardization in experimental assays and data analysis methods. However, there may be other factors driving the discrepancy. As reported by the CGP, there was only a fair correlation (rs < 0.6) between camptothecin IC50 measurements generated at two sites using matched cell line collections and identical experimental protocols. Although this might lead to speculation that the cell lines could be the source of the observed phenotypic differences, this is highly unlikely as the gene expression profiles are well correlated between studies.
Although our analysis has been limited to common cell lines and drugs between studies, it is not unreasonable to assume that the measured pharmacogenomic response for other drugs and cell lines assayed are also questionable. Ultimately, the poor correlation in these published studies presents an obstacle to using the associated resources to build or validate predictive models of drug response. Because there is no clear concordance, predictive models of response developed using data from one study are almost guaranteed to fail when validated on data from another study, and there is no way with available data to determine which study is more accurate. This suggests that users of both data sets should be cautious in their interpretation of results derived from their analyses.
"Cautious" is one way to put it. These are the sorts of testing platforms that drug companies are using to sort out their early-stage compounds and projects, and very large amounts of time and money are riding on those decisions. What if they're gibberish? A number of warning sirens have gone off in the whole biomarker field over the last few years, and this one should be so loud that it can't be ignored. We have a lot of issues to sort out in our cell assays, and I'd advise anyone who thinks that their own data are totally solid to devote some serious thought to the possibility that they're wrong.
Here's a Nature News summary of the paper, if you don't have access. It notes that the authors of the two original studies don't necessarily agree that they conflict! I wonder if that's as much a psychological response as a statistical one. . .
Category: Biological News | Cancer | Chemical Biology | Drug Assays
November 20, 2013
Double Nobelist Frederick Sanger has died at 95. He is, of course, the pioneer in both protein and DNA sequencing, and he lived to see these techniques, revised and optimized beyond anyone's imagining, become foundations of modern biology.
When he and his team determined the amino acid sequence of insulin in the 1950s, no one was even sure if proteins had definite sequences or not. That work, though, established the concept for sure, and started off the era of modern protein structural studies, whose importance to biology, medicine, and biochemistry is completely impossible to overstate. The amount of work needed to sequence a protein like insulin was ferocious - this feat was just barely possible given the technology of the day, and that's even with Sanger's own inventions and insights (such as Sanger's reagent) along the way. He received a well-deserved Nobel in 1958 for having accomplished it.
In the 1970s, he made fundamental advances in sequencing DNA, such as the dideoxy chain-termination method, again with effects which really can't be overstated. This led to a share of a second chemistry Nobel in 1980 - he's still the only double laureate in chemistry, and every bit of that recognition was deserved.
Category: Biological News | Chemical News
November 12, 2013
Nature Biotechnology is making it known that they're open to publishing studies with negative results. The occasion is their publication of this paper, which is an attempt to replicate the results of this work, published last year in Cell Research. The original paper, from Chen-Yu Zhang of Nanjing University, reported that micro-RNAs (miRNAs) from ingested plants could be taken up into the circulation of rodents, and (more specifically) that miRNA168a from rice could actually go on to modulate gene expression in the animals themselves. This was a very interesting (and controversial) result, with a lot of implications for human nutrition and for the use of transgenic crops, and it got a lot of press at the time.
But other researchers in the field were not buying these results, and this new paper (from miRagen Therapeutics and Monsanto) reports that they could not replicate the Nanjing work at all. Here's their rationale for doing the repeat:
The naturally occurring RNA interference (RNAi) response has been extensively reported after feeding double-stranded RNA (dsRNA) in some invertebrates, such as the model organism Caenorhabditis elegans and some agricultural pests (e.g., corn rootworm and cotton bollworm). Yet, despite responsiveness to ingested dsRNA, a recent survey revealed substantial variation in sensitivity to dsRNA in other Caenorhabditis nematodes and other invertebrate species. In addition, despite major efforts in academic and pharmaceutical laboratories to activate the RNA silencing pathway in response to ingested RNA, the phenomenon had not been reported in mammals until a recent publication by Zhang et al. in Cell Research. This report described the uptake of plant-derived microRNAs (miRNA) into the serum, liver and a few other tissues in mice following consumption of rice, as well as apparent gene regulatory activity in the liver. The observation provided a potentially groundbreaking new possibility that RNA-based therapies could be delivered to mammals through oral administration and at the same time opened a discussion on the evolutionary impact of environmental dietary nucleic acid effects across broad phylogenies. A recently reported survey of a large number of animal small RNA datasets from public sources has not revealed evidence for any major plant-derived miRNA accumulation in animal samples. Given the number of questions evoked by these analyses, the limited success with oral RNA delivery for pharmaceutical development, the history of safe consumption for dietary small RNAs and lack of evidence for uptake of plant-derived dietary small RNAs, we felt further evaluation of miRNA uptake and the potential for cross-kingdom gene regulation in animals was warranted to assess the prevalence, impact and robustness of the phenomenon.
They believe that the expression changes that the original team noted in their rodents were due to the dietary changes, not to the presence of rice miRNAs, which they say that they cannot detect. Now, at this point, I'm going to exit the particulars of this debate. I can imagine that there will be a lot of hand-waving and finger-pointing, not least because these latest results come partly from Monsanto. You have only to mention that company's name to an anti-GMO activist, in my experience, to induce a shouting fit, and it's a real puzzle why saying "DeKalb" or "Pioneer Hi-Bred" doesn't do the same. But it's Monsanto who take the heat. Still, here we have a scientific challenge, which can presumably be answered by scientific means: does rice miRNA get into the circulation and have an effect, or not?
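The dataset surveys mentioned in the quoted rationale come down to something like this: scanning small-RNA sequencing reads for exact matches to a plant miRNA. A sketch in which both the reads and the miRNA sequence are placeholders, not the actual miR168a data:

```python
# Sketch of the kind of survey described above: counting small-RNA
# sequencing reads that exactly match a plant miRNA. The "reads" and the
# miRNA sequence here are placeholders for illustration.

QUERY = "UCGCUUGGUGCAGGUCGGGAA"  # placeholder plant miRNA (RNA alphabet)

def count_matching_reads(reads, query):
    """Count reads identical to the query after U->T conversion
    (sequencers report reads in the DNA alphabet)."""
    target = query.replace("U", "T")
    return sum(1 for r in reads if r == target)

reads = [
    "TCGCTTGGTGCAGGTCGGGAA",   # a match
    "ACGTACGTACGTACGTACGTA",   # unrelated read
    "TCGCTTGGTGCAGGTCGGGAA",   # another match
]
print(count_matching_reads(reads, QUERY))  # -> 2
```

A real survey would also have to worry about sequencing errors, isomiRs, and cross-contamination between samples, which is exactly where the arguments start.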
What I wanted to highlight, though, is another question that might have occurred to anyone reading the above. Why isn't this new paper in Cell Research, if they published the original one? Well, the authors apparently tried them, only to find their work rejected because (as they were told) "it is a bit hard to publish a paper of which the results are largely negative". That is a silly response, verging on the stupid. The essence of science is reproducibility, and if some potentially important result can't be replicated, then people need to know about it. The original paper had very big implications, and so does this one.
Note that although Cell Research is published out of Shanghai, it's part of the Nature group of journals. If two titles under the same publisher can't work something like this out, what hope is there for the rest of the literature? Congratulations to Nature Biotechnology, though, for being willing to publish, and for explicitly stating that they are open to replication studies of important work. Someone should be.
Category: Biological News | The Scientific Literature
October 31, 2013
Laura Helmuth has a provocative piece up at Slate with the title "Watch Francis Collins Lunge For a Nobel Prize". She points out that the NIH and the Smithsonian are making a big deal out of celebrating the "10th anniversary of the sequencing of the human genome", even though many people seem to recall the big deal being in 2001 - not 2003. Yep, that was when the huge papers came out in Science and Nature with all the charts and foldouts, and the big press conferences and headlines. February of 2001.
So why the "tenth anniversary" stuff this year? Well, 2003 is the year that the NIH team published its more complete version of the genome. That's the anniversary they've chosen to remember. If you start making a big deal out of 2001, you have to start making a big deal out of the race between that group and the Celera group - and you start having to, you know, share credit. Now, I make no claims for Craig Venter's personality or style. But I don't see how it can be denied that he and his group vastly sped up the sequencing of the genome, and arrived at a similar result in far less time than the NIH consortium. The two drafts of the sequence were published simultaneously, even though there seems to have been a lot of elbow-throwing by the NIH folks to keep that from happening.
The NIH has been hosting anniversary events all year, but the most galling anniversary claim is made in an exhibit that opened this year at the Smithsonian’s National Museum of Natural History, the second-most-visited museum in the world. (Dang that Louvre.) It’s called “Genome: Unlocking Life’s Code,” and the promotional materials claim, “It took nearly a decade, three billion dollars, and thousands of scientists to sequence the human genome in 2003.” (Disclosure: I worked for Smithsonian magazine while the exhibition, produced in partnership with the NIH, was being planned, and I consulted very informally with the curators. That is, we had lunch and I warned them they were being played.) To be clear, I’m delighted that the Smithsonian has an exhibit on the human genome. And I’m a huge fan of the NIH. (To its credit, the NIH did host an anniversary symposium in 2011.) But the Smithsonian exhibit enshrines the 2003 date in the country’s museum of record and minimizes the great drama and triumph of 2001.
Celebrating 2003 rather than 2001 as the most important date in the sequencing of the human genome is like celebrating the anniversary of the final Apollo mission rather than the first one to land on the moon. . .
No one is well served by pretending that things happened otherwise, or that 2003 is somehow the date of the "real" human genome. The race was on to publish in 2001, and the headlines were in 2001, and all the proclamations that the genome had at least been sequenced were in February of 2001. If, from some perspectives, that makes for a messier story, oh well. If we stripped all the messy stories out of the history books, what would be left?
Update: Matthew Herper has more on this. He's not as down on the NIH as Helmuth is, but he has some history lessons of his own.
Category: Biological News
October 23, 2013
G-protein coupled receptors are one of those areas that I used to think I understood, until I understood them better. These things are very far from being on/off light switches mounted in drywall - they have a lot of different signaling mechanisms, and none of them are simple, either.
One of those that's been known for a long time, but remains quite murky, is allosteric modulation. There are many compounds known that clearly are not binding at the actual ligand site in some types of GPCR, but (equally clearly) can affect their signaling by binding to them somewhere else. So receptors have allosteric sites - but what do they do? And what ligands naturally bind to them (if any)? And by what mechanism does that binding modulate the downstream signaling, and are there effects that we can take advantage of as medicinal chemists? Open questions, all of them.
There's a new paper in Nature that tries to make sense of this, by what might be the most difficult way possible: through computational modeling. Not all that long ago, this might well have been a fool's errand. But we're learning a lot about the details of GPCR structure from the recent X-ray work, and we're also able to handle a lot more computational load than we used to. That's particularly true if we are David Shaw and the D. E. Shaw company, part of the not-all-that-roomy Venn diagram intersection of quantitative Wall Street traders and computational chemists. Shaw has the resources to put together some serious hardware and software, and a team of people to make sure that the processing units get frequent exercise.
They're looking at the muscarinic M2 receptor, an old friend of mine for which I produced I-know-not-how-many antagonist candidates about twenty years ago. The allosteric region is up near the surface of the receptor, about 15 Å from the acetylcholine binding site, and it looks like all the compounds that bind up there do so via cation/pi interactions with aromatic residues in the protein. (That holds true for compounds as diverse as gallamine, alcuronium, and strychnine, as well as the one shown in the figure.) This is very much in line with SAR and mutagenesis results over the years, but there are some key differences. Many people had thought that the aromatic groups of the ligands and those of the receptors must have been interacting, but this doesn't seem to be the case. There also don't seem to be any interactions between the positively charged parts of the ligands and anionic residues on nearby loops of the protein (which is a rationale I remember from my days in the muscarinic field).
The simulations suggest that the two sites are very much in communication with each other. The width and conformation of the extracellular vestibule space can change according to what allosteric ligand occupies it, and this affects whether the effect on regular ligand binding is positive or negative, and to what degree. There can also, in some cases, be direct electrostatic interactions between the two ligands, for the larger allosteric compounds. I was very glad to see that the Shaw group's simulations suggested some experiments: one set with modified ligands, which would be predicted to affect the receptor in defined ways, and another set with point mutations in the receptor, which would be predicted to change the activities of the known ligands. These experiments were carried out by co-authors at Monash University in Australia, and (gratifyingly) seem to confirm the model. Too many computational papers (and to be fair, too many non-computational papers) don't get quite to the "We made some predictions and put our ideas to the test" stage, and I'm glad this one does.
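One common way to frame this kind of positive-or-negative modulation is the simple allosteric ternary complex model, in which a cooperativity factor alpha scales the orthosteric ligand's affinity when the allosteric site is occupied. A sketch with invented constants (not parameters from the paper):

```python
def orthosteric_occupancy(L, A, KL, KA, alpha):
    """Fractional occupancy of the orthosteric site by ligand L in the
    simple allosteric ternary complex model. alpha > 1: positive
    modulation (PAM); alpha < 1: negative modulation (NAM)."""
    num = (L / KL) * (1 + alpha * A / KA)
    den = 1 + L / KL + A / KA + alpha * (L / KL) * (A / KA)
    return num / den

L, KL = 1.0, 1.0          # orthosteric ligand dosed at its Kd
A, KA = 10.0, 1.0         # allosteric modulator in excess

print(f"no modulator:    {orthosteric_occupancy(L, 0.0, KL, KA, 1.0):.2f}")
print(f"PAM (alpha=10):  {orthosteric_occupancy(L, A, KL, KA, 10.0):.2f}")
print(f"NAM (alpha=0.1): {orthosteric_occupancy(L, A, KL, KA, 0.1):.2f}")
```

What the Shaw simulations add to this algebra is a structural story for where alpha comes from: the shape and electrostatics of the vestibule with a given modulator sitting in it.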
Category: Biological News | In Silico | The Central Nervous System
October 7, 2013
This year's Medicine Nobel is one that's been anticipated for some time. James Rothman of Yale, Randy W. Schekman of Berkeley, and Thomas C. Südhof of Stanford are cited for their fundamental discoveries in vesicular trafficking, and I can't imagine anyone complaining that it wasn't deserved. (The only controversy would be thanks, once again, to the "Rule of Three" in Alfred Nobel's will. Richard Scheller of Genentech has shared prizes with Südhof for his work in the same field).
Here's the Nobel Foundation's scientific summary, and as usual, it's a good one. Vesicles are membrane-enclosed bubbles that bud off from cellular compartments and transport cargo to other parts of the cell (or outside it entirely), where they merge with another membrane and release their contents. There's a lot of cellular machinery involved on both the sending and receiving end, and that's what this year's winners worked out.
As it turns out, there are specific proteins (such as the SNAREs) embedded in intracellular membranes that work as an addressing system: "tie up the membrane around this point and send the resulting globule on its way", or "stick here and start the membrane fusion process". This sort of thing is going on constantly inside the cell, and the up-to-the-surface-and-out variation is particularly noticeable in neurons, since they're constantly secreting neurotransmitters into the synapse. That latter process turned out to be very closely tied to signals like local calcium levels, which gives it the ability to be turned on and off quickly.
As the Nobel summary shows, a lot of solid cell biology had to be done to unravel all this. Schekman looked for yeast cells that showed obvious mutations in their vesicle transport and tracked down what proteins had been altered. Rothman started off with a viral infection system that produced a lot of an easily-trackable protein, and once he'd identified others that helped to move it around, he used these as affinity reagents to find what bound to them in turn. This work dovetailed very neatly with the proteins that Schekman's lab had identified, and suggested (as you'd figure) that this machinery was conserved across many living systems. Südhof then extended this work into the neurotransmitter area, discovering the proteins involved in the timing signals that are so critical in those cells, and demonstrating their function by generating mouse knockout models along the way.
The importance of all these processes to living systems can't be overstated. Eukaryotic cells have to be compartmentalized to function; there's too much going on for everything to be in the same stew pot all at the same time. So a system for "mailing" materials between those regions is vital. And in the same way, cells have to communicate with others, releasing packets of signaling molecules under very tight supervision, and that's done through many of the same mechanisms. You can trace the history of our understanding of these things through years of Nobel awards, and there will surely be more.
Category: Biological News | General Scientific News
September 18, 2013
Does it matter how a drug works, if it works? PTC Therapeutics seems bent on giving everyone an answer to that question, because there sure seem to be a lot of questions about how ataluren (PTC124), their Duchenne Muscular Dystrophy (DMD) therapy, acts. This article at Nature Biotechnology does an excellent job explaining the details.
Premature "stop" codons in the DNA of DMD patients, particularly in the dystrophin gene, are widely thought to be one of the underlying problems in the disease. (The same mechanism is believed to operate in many other genetic-mutation-driven conditions as well.) Ataluren is supposed to promote "read-through" of these to allow the needed protein to be produced anyway. That's not a crazy idea at all - there's been a lot of thought about ways to do that, and several aminoglycoside antibiotics have been shown to work through that mechanism. Of that class, gentamicin has been given several tries in the clinic, to ambiguous effect so far.
So screening for a better enhancer of stop codon read-through seems like it's worth a shot for a disease with so few therapeutic options. PTC did this using a firefly luciferase (Fluc) reporter assay. As with any assay, there are plenty of opportunities to get false positives and false negatives. Firefly luciferase, as a readout, suffers from instability under some conditions. And if its signal is going to wink out on its own, then a compound that stabilizes it will look like a hit in your assay system. Unfortunately, there's no particular market in humans for a compound that just stabilizes firefly luciferase.
That's where the argument is with ataluren. Papers have appeared from a team at the NIH detailing trouble with the FLuc readout. That second paper (open access) goes into great detail about the mechanism, and it's an interesting one. FLuc apparently catalyzes a reaction between PTC124 and ATP, to give a new mixed anhydride adduct that is a powerful inhibitor of the enzyme. The enzyme's normal mechanism involves a reaction between luciferin and ATP, and since luciferin actually looks like something you'd get in a discount small-molecule screening collection, you have to be alert to something like this happening. The inhibitor-FLuc complex keeps the enzyme from degrading, but the new PTC124-derived inhibitor itself is degraded by Coenzyme A - which is present in the assay mixture, too. The end result is more luciferase signal than you expect versus the controls, which looks like a hit from your reporter gene system - but isn't. PTC's scientists have replied to some of these criticisms here.
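A toy calculation shows why merely stabilizing the reporter enzyme is enough to masquerade as a hit. Assuming simple first-order loss of active luciferase over the incubation (rate constants invented for illustration):

```python
import math

# In a cell-based FLuc reporter assay, active luciferase decays over the
# incubation. A compound that merely slows that decay leaves more active
# enzyme at read time, so its well glows brighter than the controls,
# indistinguishable in the raw signal from genuine reporter induction.
# All rate constants below are invented for illustration.

def signal(k_decay, hours, e0=100.0):
    """Active enzyme remaining after first-order decay (luminescence proxy)."""
    return e0 * math.exp(-k_decay * hours)

control    = signal(k_decay=0.30, hours=6)   # normal FLuc turnover
stabilized = signal(k_decay=0.10, hours=6)   # compound slows degradation only

print(f"control well:    {control:.1f}")
print(f"compound well:   {stabilized:.1f}")
print(f"fold 'activity': {stabilized / control:.1f}x")
```

A three-fold signal bump with no read-through at all, which is exactly why counter-screens against the bare reporter enzyme are standard practice now.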
Just to add more logs to the fire, other groups have reported that PTC124 seems to be effective in restoring read-through for similar nonsense mutations in other genes entirely. But now there's another new paper, this one from a different group at Dundee, claiming that ataluren fails to work through its putative mechanism under a variety of conditions, which would seem to call these results into question as well. Gentamicin works for them, but not PTC124. Here's the new paper's take-away:
In 2007 a drug was developed called PTC124 (latterly known as Ataluren), which was reported to help the ribosome skip over the premature stop, restore production of functional protein, and thereby potentially treat these genetic diseases. In 2009, however, questions were raised about the initial discovery of this drug; PTC124 was shown to interfere with the assay used in its discovery in a way that might be mistaken for genuine activity. As doubts regarding PTC124's efficacy remain unresolved, here we conducted a thorough and systematic investigation of the proposed mechanism of action of PTC124 in a wide array of cell-based assays. We found no evidence of such translational read-through activity for PTC124, suggesting that its development may indeed have been a consequence of the choice of assay used in the drug discovery process.
Now this is a mess, and it's complicated still more by the not-so-impressive performance of PTC124 in the clinic. Here's the Nature Biotechnology article's summary:
In 2008, PTC secured an upfront payment of $100 million from Genzyme (now part of Paris-based Sanofi) in return for rights to the product outside the US and Canada. But the deal was terminated following lackluster data from a phase 2b trial in DMD. Subsequently, a phase 3 trial in cystic fibrosis also failed to reach statistical significance. Because the drug showed signs of efficacy in each indication, however, PTC pressed ahead. A phase 3 trial in DMD is now underway, and a second phase 3 trial in cystic fibrosis will commence shortly.
It should be noted that the read-through drug space has other players in it as well. Prosensa/GSK and Sarepta are in the clinic with competing antisense oligonucleotides targeting a particular exon/mutation combination, although this would probably take them into other subpopulations of DMD patients than PTC is looking to treat.
If they were to see real efficacy, PTC could have the last laugh here. To get back to the first paragraph of this post, if a compound works, well, the big argument has just been won. The company has in vivo data to show that some gene function is being restored, as well they should (you don't advance a compound to the clinic just on the basis of in vitro assay numbers, no matter how they look). It could be that the compound is a false positive in the original assay but manages to work through some other mechanism, although no one knows what that might be.
But as you can see, opinion is very much divided about whether PTC124 works at all in the real clinical world. If it doesn't, then the various groups detailing trouble with the early assays will have a good case that this compound never should have gotten as far as it did.
Category: Biological News | Business and Markets | Drug Assays | Drug Development
September 6, 2013
At C&E News, Lisa Jarvis has an excellent writeup on Warp Drive Bio and the whole idea of "cryptic natural products" (last blogged on here). As the piece makes clear, not everyone is buying into the idea that there's a lot of useful-but-little-expressed natural product chemical matter out there, but since there could be, I'm glad that someone's looking.
Yet not everyone looked at the abundant gene clusters and saw a sea of drug candidates. The biosynthetic pathways defined by these genes are turned off most of the time. That inactivity caused skeptics to wonder how genome miners could be so sure they carried the recipes for medicinally important molecules.
Researchers pursuing genomics-based natural products say the answer lies in evolution and the environment. “These pathways are huge,” says Gregory L. Challis, a professor of chemical biology at the University of Warwick, in Coventry, England. With secondary metabolites encoded by as many as 150 kilobases of DNA, a bacterium would have to expend enormous amounts of energy to make each one.
Because they use so much energy, these pathways are turned on only when absolutely necessary. Traditional “grind and find” natural products discovery means taking bacteria out of their natural habitat—the complex communities where they communicate and compete for resources—and growing each strain in isolation. In this artificial setting, bacteria have no reason to expend energy to make anything other than what they need to survive.
“I absolutely, firmly believe that these compounds have a strong role to play in the environment in which these organisms live,” says Challis, who also continues to pursue traditional approaches to natural products. “Of course, not all bioactivities will be relevant to human medicine and agriculture, but many of them will be.”
The article also mentions that Novartis is working in this area, which I hadn't realized, as well as a couple of nonprofit groups. If there's something there, at any kind of reasonable hit rate, presumably one of these teams will find it?
Category: Biological News | Natural Products
September 5, 2013
If you haven't heard of CRISPR, you must not have to mess around with gene expression. And not everyone does, true, but we sure do count on that sort of thing in biomedical research. And this is a very useful new technique to do it:
In 2007, scientists from Danisco, a Copenhagen-based food ingredient company now owned by DuPont, found a way to boost the phage defenses of this workhorse microbe. They exposed the bacterium to a phage and showed that this essentially vaccinated it against that virus (Science, 23 March 2007, p. 1650). The trick has enabled DuPont to create heartier bacterial strains for food production. It also revealed something fundamental: Bacteria have a kind of adaptive immune system, which enables them to fight off repeated attacks by specific phages.
That immune system has suddenly become important for more than food scientists and microbiologists, because of a valuable feature: It takes aim at specific DNA sequences. In January, four research teams reported harnessing the system, called CRISPR for peculiar features in the DNA of bacteria that deploy it, to target the destruction of specific genes in human cells. And in the following 8 months, various groups have used it to delete, add, activate, or suppress targeted genes in human cells, mice, rats, zebrafish, bacteria, fruit flies, yeast, nematodes, and crops, demonstrating broad utility for the technique. Biologists had recently developed several new ways to precisely manipulate genes, but CRISPR's "efficiency and ease of use trumps just about anything," says George Church of Harvard University, whose lab was among the first to show that the technique worked in human cells.
CRISPR stands for Clustered Regularly Interspaced Short Palindromic Repeats, a DNA motif that turns up a lot in bacteria (and, interestingly, is almost universal in the Archaea). There are a number of genes associated with these short repeated spacers, which vary some across different types of bacteria, but all of them seem to be involved in the same sorts of processes. Some of the expressed proteins seem to work by chopping up infecting DNA sequences into chunks of about 30 base pairs, and these get inserted into the bacterial DNA near the start of the CRISPR region. RNAs get read off from them, and some of the other associated proteins are apparently there to process these RNAs into a form where they (and other associated proteins) can help to silence the corresponding DNA and RNA from an infectious agent. There are, as you can tell, still quite a few details to be worked out. Other bacteria may have some further elaborations that we haven't even come across yet. But the system appears to be widely used in nature, and quite robust.
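The targeting logic at the heart of the Cas9 version of the system is simple enough to sketch: a roughly 20-nt guide sequence must match a protospacer sitting immediately 5' of an NGG PAM on the target strand. The DNA below is invented for illustration:

```python
import re

def find_cas9_targets(dna, guide_len=20):
    """Return (start, protospacer, PAM) for every guide-length site
    immediately followed by an NGG PAM on the given strand.
    (A real design tool would also scan the reverse complement and
    score off-target matches across the genome.)"""
    hits = []
    # Lookahead so overlapping NGG motifs are all found.
    for m in re.finditer(r"(?=([ACGT]GG))", dna):
        pam_start = m.start(1)
        if pam_start >= guide_len:
            proto = dna[pam_start - guide_len:pam_start]
            hits.append((pam_start - guide_len, proto,
                         dna[pam_start:pam_start + 3]))
    return hits

# A made-up stretch of DNA with one NGG PAM near the end:
seq = "ATGCGTACCGTTAGCATCGATCGTACGTTAGCAGG"
for start, proto, pam in find_cas9_targets(seq):
    print(f"protospacer {proto} at {start}, PAM {pam}")
```

That's the whole trick from the designer's side: pick a protospacer next to a PAM, make the matching guide RNA, and Cas9 does the cutting.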
The short palindromic repeats were first noticed back in 1987, but it wasn't until 2005 that it was appreciated that many of the sequences matched those found in bacteriophages. That was clearly no coincidence, and the natural speculation was that these bits were actually intended to be the front end for some sort of bacterial variant of RNA interference. So it has proven, and pretty rapidly, too. The Danisco team reported further results in 2007, although as that Science article points out, they now say that they didn't come close to appreciating the technique's full potential. By 2011 the details of the Cas9-based CRISPR system were becoming clear. Just last year, the key proof-of-principle work was published, showing that an engineered "guide RNA" was enough to target specific DNA sequences with excellent specificity. And in February, the Church group at Harvard published their work on a wide range of genetic targets across several human cell lines, simultaneously with another multicenter team (Harvard, the Broad and McGovern Institutes, Columbia, Tsinghua, MIT, Rockefeller) that reported similar results across a range of mammalian cells.
Work in this field since those far-off days of last February has done nothing but accelerate. Here's an Oxford group (and one from Wisconsin) applying CRISPR all over the Drosophila genome. Here's Church's group doing it to yeast. There are several zebrafish papers that have appeared so far this year, and here's the Whitehead/MIT folks applying it to mouse zygotes, in a technique that they've already refined. Methods for enhancing expression as well as shutting it down are already being reported as well.
So we could be looking at a lot of things here. Modifying cell lines has just gotten easier, which is good news. It looks like genetically altered rodent models could be produced much more quickly and selectively, which would be welcome, and there seems no reason not to apply this to all sorts of other model organisms as well. That takes us from the small stuff (like the fruit flies and yeast) all the way up past mice, and then, well, you have to wonder about gene therapy in humans. Unless I'm very much mistaken, people are already forming companies aiming at just this sort of thing. Outside of direct medical applications, CRISPR also looks like it's working in important plant species, leading to a much faster and cleaner way to genetically modify crops of all kinds. If this continues to work out at the pace it has already, the Nobel people will have the problem of figuring out how to award the eventual prize. Or prizes.
Category: Biological News
August 20, 2013
Here's a paper that asks whether GPCRs are still a source of new targets. As you might guess, the answer is "Yes, indeed". (Here's a background post on this area from a few years ago, and here's my most recent look at the area).
It's been a famously productive field, but the distribution is pretty skewed:
From a total of 1479 underlying targets for the action of 1663 drugs, 109 (7%) were GPCRs or GPCR related (e.g., receptor-activity modifying proteins or RAMPs). This immediately reveals an issue: 26% of drugs target GPCRs, but they account for only 7% of the underlying targets. The results are heavily skewed by certain receptors that have far more than their “fair share” of drugs. The most commonly targeted receptors are as follows: histamine H1 (77 occurrences), α1A adrenergic (73), muscarinic M1 (72), dopamine D2 (62), muscarinic M2 (60), 5HT2a (59), α2A adrenergic (56), and muscarinic M3 (55)—notably, these are all aminergic GPCRs. Even the calculation that the available drugs exert their effects via 109 GPCR or GPCR-related targets is almost certainly an overestimate since it includes a fair proportion where there are only a very small number of active agents, and they all have a pharmacological action that is “unknown”; in truth, we have probably yet to discover an agent with a compelling activity at the target in question, let alone one with exactly the right pharmacology and appropriately tuned pharmacokinetics (PK), pharmacodynamics (PD), and selectivity to give clinical efficacy for our disease of choice. A prime example of this would be the eight metabotropic (mGluR) receptors, many of which have only been “drugged” according to this analysis due to the availability of the endogenous ligand (L-glutamic acid) as an approved nutraceutical. There are also a considerable number of targets for which the only known agents are peptides, rather than small molecules. . .
Of course, since we're dealing with cell-surface receptors, peptides (and full-sized proteins) have a better shot at becoming drugs in this space.
Of the 437 drugs found to target GPCRs, 21 are classified as “biotech” (i.e., biopharmaceuticals) with the rest as “small molecules.” However, that definition seems rather generous given that the molecular weight (MW) of the “small molecules” extends as high as 1623. Using a fairly modest threshold of MW <600 suggests that ~387 are more truly small molecules and ~50 are non–small molecules, being roughly an 80:20 split. Pursuing the 20%, while not being novel targets/mechanisms, could still provide important new oral/small-molecule medications with the comfort of excellent existing clinical validation. . .
The paper goes on to mention many other possible modes for drug action - allosteric modulators, GPCR homo- and heterodimerization, other GPCR-protein interactions, inverse agonists and the like, alternative signaling pathways other than the canonical G-proteins, and more. It's safe to say that all this will keep us busy for a long time to come, although working up reliable assays for some of these things is no small matter.
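Stepping back to the skew statistics in that first excerpt: once you have a drug-to-target mapping, quantifying that kind of concentration is simple bookkeeping. A minimal sketch with made-up data (not the paper's actual table):

```python
from collections import Counter

# Made-up drug -> primary GPCR target table (illustrative only)
drug_targets = {
    "drug_a": "H1", "drug_b": "H1", "drug_c": "H1",
    "drug_d": "D2", "drug_e": "D2",
    "drug_f": "M1", "drug_g": "GPR55",
}

drugs_per_target = Counter(drug_targets.values())
top_two = drugs_per_target.most_common(2)
top_share = sum(n for _, n in top_two) / len(drug_targets)
# Two of the four targets here soak up five of the seven drugs (~71%) -
# the same shape as 7% of GPCR targets accounting for 26% of the drugs.
```

Run this tally on the real data set and the aminergic receptors in the quote pop right out at the top of `most_common`.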
Category: Biological News | Drug Assays
August 16, 2013
Structural biology needs no introduction for people doing drug discovery. This wasn't always so. Drugs were discovered back in the days when people used to argue about whether those "receptor" thingies were real objects (as opposed to useful conceptual shorthand), and before anyone had any idea of what an enzyme's active site might look like. And even today, there are targets, and whole classes of targets, for which we can't get enough structural information to help us out much.
But when you can get it, structure can be a wonderful thing. X-ray crystallography of proteins and protein-ligand complexes has revealed so much useful information that it's hard to know where to start. It's not a magic wand - you can't look at an empty binding site and just design something right at your desk that'll be a potent ligand right off the bat. And you can't look at a series of ligand-bound structures and say which one is the most potent, not in most situations, anyway. But you still learn things from X-ray structures that you could never have known otherwise.
It's not the only game in town, either. NMR structures are very useful, although the X-ray ones can be easier to get, especially in these days of automated synchrotron beamlines and powerful number-crunching. But what if your protein doesn't crystallize? And what if there are things happening in solution that you'd never pick up on from the crystallized form? You're not going to watch your protein rearrange into a new ligand-bound conformation with X-ray crystallography, that's for sure. No, even though NMR structures can be a pain to get, and have to be carefully interpreted, they'll also show you things you'd never have seen.
And there are more exotic methods. Earlier this summer, there was a startling report of a structure of the HIV surface proteins gp120 and gp41 obtained through cryogenic electron microscopy. This is a very important and very challenging field to work in. What you've got there is a membrane-bound protein-protein interaction, which is just the sort of thing that the other major structure-determination techniques can't handle well. At the same time, though, the number of important proteins involved in this sort of thing is almost beyond listing. Cryo-EM, since it observes the native proteins in their natural environment, without tags or stains, has a lot of potential, but it's been extremely hard to get the sort of resolution with it that's needed on such targets.
Joseph Sodroski's group at Harvard, longtime workers in this area, published their 6-angstrom-resolution structure of the protein complex in PNAS. But according to this new article in Science, the work has been an absolute lightning rod ever since it appeared. Many other structural biologists think that the paper is so flawed that it never should have seen print. No, I'm not exaggerating:
Several respected HIV/AIDS researchers are wowed by the work. But others—structural biologists in particular—assert that the paper is too good to be true and is more likely fantasy than fantastic. "That paper is complete rubbish," charges Richard Henderson, an electron microscopy pioneer at the MRC Laboratory of Molecular Biology in Cambridge, U.K. "It has no redeeming features whatsoever."
. . .Most of the structural biologists and HIV/AIDS researchers Science spoke with, including several reviewers, did not want to speak on the record because of their close relations with Sodroski or fear that they'd be seen as competitors griping—and some indeed are competitors. Two main criticisms emerged. Structural biologists are convinced that Sodroski's group, for technical reasons, could not have obtained a 6-Å resolution structure with the type of microscope they used. The second concern is even more disturbing: They solved the structure of a phantom molecule, not the trimer.
Cryo-EM is an art form. You have to freeze your samples in an aqueous system, but without making ice. The crystals of normal ice formation will do unsightly things to biological samples, on both the macro and micro levels, so you have to form "vitreous ice", a glassy amorphous form of frozen water, which is odd enough that until the 1980s many people considered it impossible. Once you've got your protein particles in this matrix, though, you can't just blast away at full power with your electron beam, because that will also tear things up. You have to take a huge number of runs at lower power, and analyze them through statistical techniques. The Sodroski HIV structure, for example, is the product of 670,000 single-particle images.
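Those statistical techniques are, at their core, signal averaging: each low-dose image is mostly noise, but averaging N of them beats the noise down by roughly the square root of N. A quick numerical illustration with a 1-D stand-in signal (made up, obviously - not micrograph data):

```python
import numpy as np

rng = np.random.default_rng(2)
signal = np.sin(np.linspace(0, 2 * np.pi, 100))  # stand-in "particle" signal

def snr_after_averaging(n_images, noise_std=5.0):
    # Each simulated frame is the signal buried in heavy noise;
    # average the stack and compare the residual to the true signal
    stack = signal + rng.normal(0, noise_std, (n_images, signal.size))
    residual = stack.mean(axis=0) - signal
    return signal.std() / residual.std()
```

Going from 100 frames to 10,000 buys roughly a 10x improvement in signal-to-noise, which is why particle counts in the hundreds of thousands are routine in this field.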
But its critics say that it's also the product of wishful thinking:
The essential problem, they contend, is that Sodroski and Mao "aligned" their trimers to lower-resolution images published before, aiming to refine what was known. This is a popular cryo-EM technique but requires convincing evidence that the particles are there in the first place and rigorous tests to ensure that any improvements are real and not the result of simply finding a spurious agreement with random noise. "They should have done lots of controls that they didn't do," (Sriram) Subramaniam asserts. In an oft-cited experiment that aligns 1000 computer-generated images of white noise to a picture of Albert Einstein sticking out his tongue, the resulting image still clearly shows the famous physicist. "You get a beautiful picture of Albert Einstein out of nothing," Henderson says. "That's exactly what Sodroski and Mao have done. They've taken a previously published structure and put atoms in and gone down into a hole." Sodroski and Mao declined to address specific criticisms about their studies.
Well, they decline to answer them in response to a news item in Science. They've indicated a willingness to take on all comers in the peer-reviewed literature, but otherwise, in print, they're doing the we-stand-by-our-results-no-comment thing. Sodroski himself, with his level of experience in the field, seems ready to defend this paper vigorously, but there seem to be plenty of others willing to attack. We'll have to see how this plays out in the coming months - I'll update as things develop.
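That Einstein-from-noise demonstration, by the way, is easy to reproduce. The sketch below uses nothing but pure-noise images and a simple cross pattern standing in for the photograph: align each noise frame to the reference before averaging, and the "reconstruction" ends up correlating strongly with a reference that was never in the data.

```python
import numpy as np

rng = np.random.default_rng(0)
size, n_images = 16, 500

# Reference image (a plain cross), standing in for the Einstein photo
template = np.zeros((size, size))
template[size // 2, :] = 1.0
template[:, size // 2] = 1.0
template -= template.mean()

def align_to(ref, img):
    # Find the best circular shift of img onto ref via FFT cross-correlation
    cc = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
    return np.roll(img, (-dy, -dx), axis=(0, 1))

biased = np.zeros_like(template)   # aligned-then-averaged pure noise
honest = np.zeros_like(template)   # averaged without any alignment
for _ in range(n_images):
    frame = rng.standard_normal((size, size))
    biased += align_to(template, frame)
    honest += frame
biased /= n_images
honest /= n_images

def corr(a, b):
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
# corr(biased, template) comes out high; corr(honest, template) stays near zero
```

The aligned average "sees" the cross because picking the best-matching shift for each frame systematically harvests whatever noise happens to resemble the reference - exactly the bias the critics are worried about.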
Category: Analytical Chemistry | Biological News | In Silico | Infectious Diseases
August 14, 2013
The technique of using engineered T cells against cancerous cells may be about to explode even more than it has already. One of the hardest parts of getting this process scaled up has been the need to extract each patient's own T cells and reprogram them. But in a new report in Nature Biotechnology, a team at Sloan-Kettering shows that they can raise cells of this type from stem cells, which were themselves derived from T lymphocytes from another healthy donor. As The Scientist puts it:
Sadelain’s team isolated T cells from the peripheral blood of a healthy female donor and reprogrammed them into stem cells. The researchers then used disabled retroviruses to transfer to the stem cells the gene that codes for a chimeric antigen receptor (CAR) for the antigen CD19, a protein expressed by a different type of immune cell—B cells—that can turn malignant in some types of cancer, such as leukemia. The receptor for CD19 allows the T cells to track down and kill the rogue B cells. Finally, the researchers induced the CAR-modified stem cells to re-acquire many of their original T cell properties, and then replicated the cells 1,000-fold.
“By combining the CAR technology with the iPS technology, we can make T cells that recognize X, Y, or Z,” said Sadelain. “There’s flexibility here for redirecting their specificity towards anything that you want.”
You'll note the qualifications in that extract. The cells that are produced in this manner aren't quite the same as the ones you'd get by re-engineering a person's own T-cells. We may have to call them "T-like" cells or something, but in a mouse lymphoma model, they most certainly seem to do the job that you want them to. It's going to be harder to get these to the point of trying them out in humans, since they're a new variety of cell entirely, but (on the other hand) the patients you'd try this in are not long for this world and are, in many cases, understandably willing to try whatever might work.
Time to pull the camera back a bit. It's early yet, but these engineered T-cell approaches are very impressive. This work, if it holds up, will make them a great deal easier to implement. No doubt, at this moment, there are Great Specific Antigen Searches underway to see what other varieties of cancer might respond to this technique. And this, remember, is not the only immunological approach that's showing promise, although it must be the most dramatic.
So. . .we have to consider a real possibility that the whole cancer-therapy landscape could be reshaped over the next decade or two. Immunology has the potential to disrupt the whole field, which is fine by me, since it could certainly use some disruption, given the state of the art. Will we look back, though, and see an era where small-molecule therapies gave people an extra month here, an extra month there, followed by one where harnessing the immune system meant sweeping many forms of cancer off the board entirely? Speed the day, I'd say - but if you're working on those small-molecule therapies, you should keep up with these developments. It's not time to consider another line of research, not yet. But the chances of having to do this, at some point, are not zero. Not any more.
Category: Biological News | Cancer
August 1, 2013
Everyone in biomedical research is familiar with "knockout" mice, animals that have had a particular gene silenced during their development. This can be a powerful way of figuring out what that gene's product actually does, although there are always other factors at work. The biggest one is how other proteins and pathways can sometimes compensate for the loss, a process that often doesn't have a chance to kick in when you come right into an adult animal and block a pathway through other means. In some other cases, a gene knockout turns out to be embryonic-lethal, but can be tolerated in an adult animal, once some key development pathway has run its course.
There have been a lot of knockout mice over the years. Targeted genetic studies have described functions for thousands of mouse genes. But when you think about it, there have surely been many of these whose phenotypes have not really been noticed or studied in the right amount of detail. Effects can be subtle, and there's an awful lot to look for. That's the motivation behind the Sanger Institute Mouse Genetics Project, who have a new paper out here. They're part of the even larger International Mouse Phenotyping Consortium, which is co-ordinating efforts like this across several sites.
Update: here's an overview of the work being done. For generating knockout animals, you have the International Knockout Mouse Consortium at an international level - the IMPC, mentioned above, is the phenotyping arm of the effort. In the US, the NIH-funded Knockout Mouse Project (KOMP) is a major effort, and in Europe you have the European Conditional Mouse Mutagenesis Program (EUCOMM), which has evolved into EUCOMMTOOLS. Then in Canada you have NorCOMM, and TIGM at Texas A&M.
I like the way that last link's abstract starts: "Nearly 10 years after the completion of the human genome project, and the report of a complete sequence of the mouse genome, it is salutary to reflect that we remain remarkably ignorant of the function of most genes in the mammalian genome." That's absolutely right, and these mouse efforts are an attempt to address that directly. The latest paper describes the viability of 489 mutants, and a more complete analysis of 250 of them - still only a tiny fraction of what's out there, but enough to give you a look behind the curtain.
29% of the mutants were lethal and 13% were subviable, producing only a fraction of the expected number of embryos. That's pretty much in line with earlier estimates, so that figure will probably hold up. As for fertility, a bit over 5% of the homozygous crosses were infertile - and in almost all cases, the trouble was in the males. (All the heterozygotes could produce offspring).
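For what it's worth, with 489 lines that 29% lethality figure is already fairly tight. A quick Wilson score interval (standard binomial machinery, nothing specific to this paper - 142 lethal lines out of 489 is my back-calculation from the stated percentage) puts the plausible range at roughly 25 to 33%:

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    # 95% Wilson score interval for a binomial proportion
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# ~29% of 489 knockout lines lethal:
lo, hi = wilson_interval(142, 489)
```

So even before more lines are phenotyped, the true lethal fraction is unlikely to wander far from that 29% - consistent with the earlier estimates holding up.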
The full phenotypic analysis on the first 250 mutants is quite interesting (and can be found at the Sanger Mouse Portal site). Most of these are genes with some known function, but 34 of them have not had anything assigned to them until now. These animals were assessed through blood chemistry, gene expression profiling, dietary and infectious disease challenges, behavioral tests, necropsy and histopathology, etc. Among the most common changes were body weight and fat/lean ratios (mostly on the underweight side), but there were many others. (That body weight observation is, in most cases, almost certainly not a primary effect. Reproductive and musculoskeletal defects were the most common categories that were likely to be front-line problems).
What stands out is that the unassigned genes seemed to produce noticeable phenotypic changes at the same rate as the known ones, and that even the studied genes turned up effects that hadn't been realized. As the paper says, these results "reveal our collective inability to predict phenotypes based on sequence or expression pattern alone." About 35% of the mutants (of all kinds) showed no detectable phenotypic changes, so these are either nonessential genes or had phenotypes that escaped the screens. The team looked at heterozygotes in cases where the homozygotes were lethal or nearly so (90 lines so far), and haploinsufficiency (problems due to only one working copy of a gene) was a common effect, seen in over 40% of those mutants.
Genes with some closely related paralog were found to be less likely to be essential, but those producing a protein known to be part of a protein complex were more likely to be so. Both of those results make sense. But a big question is how well these results will translate to understanding of human disease, and that's still an open issue. Clearly, many things will be directly applicable, but some care will be needed:
The data set reported here includes 59 orthologs of known human disease genes. We compared our data with human disease features described in OMIM. Approximately half (27) of these mutants exhibited phenotypes that were broadly consistent with the human phenotype. However, many additional phenotypes were detected in the mouse mutants suggesting additional features that might also occur in patients that have hitherto not been reported. Interestingly, a large proportion of genes underlying recessive disorders in humans are homozygous lethal in mice (17 of 37 genes), possibly because the human mutations are not as disruptive as the mouse alleles.
As this work goes on, we're going to learn a lot about mammalian genetics that has been hidden. The search for similar effects in humans will be going on simultaneously, informed by the mouse results. Doing all this is going to keep a lot of people busy for a long time - but understanding what comes out is going to be an even longer-term occupation. Something to look forward to!
Category: Biological News
July 18, 2013
Thanks to an alert reader, I was put on to this paper in PNAS. It's from a team at Washington U. in St. Louis, and my fellow Cardinals fans are definitely stirring things up in the debate over "junk DNA" function and the ENCODE results. (The most recent post here on the debate covered the "It's functional" point of view - for links to previous posts on some vigorous ENCODE-bashing publications, see here).
This new paper, blogged about here at Homologus and here by one of its authors, Mike White, is an attempt to run a null-hypothesis experiment on transcription factor function. There are a lot of transcription factor recognition sequences in the genome. They're short DNA sequences that serve as flags for the whole transcription machinery to land and start assembling at a particular spot. Transcription factors themselves are the proteins that do the primary recognition of these sequences, and that gives them plenty to do. With so many DNA motifs out there (and so many near-misses), some of their apparent targets are important and real and some of them may well be noise. TFs have their work cut out.
What this new paper did was look at a particular transcription factor, Crx. They took a set of 1,300 sequences that are (functionally) known to bind it - 865 of them with the canonical recognition motifs and 433 of them that are known to bind, but don't have the traditional motif. They compared that set to 3,000 control sequences, including 865 of them "specifically chosen to match the Crx motif content and chromosomal distribution" as compared to that first set. They also included a set of single-point mutations of the known binding sequences, along with sets of scrambled versions of both the known binding regions and the matched controls above, with dinucleotide ratios held constant - random but similar.
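That "dinucleotide ratios held constant" detail matters, because dinucleotide content (CpG frequency and the like) can drive activity on its own. Whatever scrambling scheme you use, the check is a simple tally of overlapping pairs - here's a minimal helper (mine, not the authors' pipeline), along with a reminder that shuffling single bases does not preserve dinucleotides:

```python
from collections import Counter

def dinuc_counts(seq):
    # Count overlapping adjacent pairs: "ACGT" -> AC, CG, GT
    return Counter(a + b for a, b in zip(seq, seq[1:]))

# Same base composition does NOT imply same dinucleotide content:
original = "AACGTACGTT"
reordered = "AAACCGGTTT"   # a permutation of the same letters
same_bases = Counter(original) == Counter(reordered)             # True
same_dinucs = dinuc_counts(original) == dinuc_counts(reordered)  # False
```

A proper dinucleotide-preserving scramble (the Altschul-Erickson style of shuffle) has to rearrange the sequence so that this pair tally comes out identical, which is a much stronger constraint than just matching base composition.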
What they found, first, was that the known binding elements do indeed drive transcription, as advertised, while the controls don't. But the ENCODE camp has a broader definition of function than just this, and here's where the dinucleotides hit the fan. When they looked at gene repression activity, they found that the 865 binders and the 865 matched controls (with Crx recognition elements, but in unbound regions of the genome) both showed similar amounts of activity. As the paper says, "Overall, our results show that both bound and unbound Crx motifs, removed from their genomic context, can produce repression, whereas only bound regions can strongly activate".
So far, so good, and nothing that the ENCODE people might disagree with - I mean, there you are, unbound regions of the genome showing functional behavior and all. But the problem is, most of the 1,300 random sequences also showed regulatory effects:
Our results demonstrate the importance of comparing the activity of candidate CREs (cis-regulatory elements - DBL) against distributions of control sequences, as well as the value of using multiple approaches to assess the function of CREs. Although scrambled DNA elements are unlikely to drive very strong levels of activation or repression, such sequences can produce distinct levels of enhancer activity within an intermediate range that overlaps with the activity of many functional sequences. Thus, function cannot be assessed solely by applying a threshold level of activity; additional approaches to characterize function are necessary, such as mutagenesis of TF binding sites.
In other words, to put it more bluntly than the paper does, one could generate ENCODE-like levels of functionality with nothing but random DNA. These results will not calm anyone down, but it's not time to calm down just yet. There are some important issues to be decided here - from theoretical biology all the way down to how many drug targets we can expect to have. I look forward to the responses to this work. Responses will most definitely be forthcoming.
Category: Biological News
July 11, 2013
There hasn't been much news about Warp Drive Bio since their founding. And that founding was a bit of an unusual event all by itself, since the company was born with a Sanofi deal already in place (and an agreement for them to buy the company if targets were met). But now things seem to be happening. Greg Verdine, a founder, has announced that he's taking a three-year leave of absence from Harvard to become the company's CEO. They've also brought in some other big names, such as Julian Adams (Millennium/Infinity) to be on the board of directors.
The company has a very interesting research program: they're hoping to coax out cryptic natural products from bacteria and the like, molecules that aren't being found in regular screening efforts because the genes used in their biosynthetic pathways are rarely activated. Warp Drive's plan is to sequence heaps of prokaryotes, identify the biosynthesis genes, and activate them to produce rare and unusual natural products as drug candidates. (I'm reminded of this recent work on forcing fungi to produce odd products by messing with their epigenetic enzymes, although I'm not sure if that's what Warp Drive has in mind specifically). And the first part of that plan is what the company has been occupying itself with over the last few months:
“These are probably really just better molecules, and always were better,” he says. “The problems were that they took too long to discover and that one was often rediscovering the same things over and over again.”
Verdine explains the reason this happened is because many of the novel genes in the bacteria aren’t expressed, and remain “dark,” or turned off, and thus can’t be seen. By sequencing the microbes’ genetic material, however, Warp Drive can illuminate them, and find the roadmap needed to make a number of drugs.
“They’re there, hiding in plain sight,” Verdine says.
Over the past year and a half, Warp Drive has sequenced the entire genomes of more than 50,000 bacteria, most of which come from dirt. That library represents the largest collection of such data in existence, according to Verdine.
The entire genomes of 50,000 bacteria? I can well believe that this is the record. That is a lot of data, even considering that bacterial genomes don't run that large. My guess is that the rate-limiting step in all this is going to be a haystack problem. There are just so many things that one could potentially work on - how do you sort them out? Masses of funky natural product pathways (whose workings may not be transparent), producing masses of funky natural products, of unknown function: there's a lot to keep people busy here. But if there really is a dark-matter universe of natural products, it really could be worth exploring - the usual one certainly has been a good thing over the years, although (as noted above) it's been suffering from diminishing returns for a while.
But there's something else I wondered about when Warp Drive was founded: Verdine himself has been involved in founding several other companies, and there's another one going right here in Cambridge: Aileron Therapeutics, the flagship of the stapled-peptide business (an interesting and sometimes controversial field). How are they doing? They recently got their first compound through Phase I, after raising more money for that effort last year.
The thing is, I've heard from more than one person recently that all isn't well over there, that they're cutting back research. I don't know if that's the circle-the-wagons phase that many small companies go through when they're trying to take their first compound through the clinic, or a sign of something deeper. Anyone with knowledge, feel free to add it in the comments section. . .
Update: Prof. Verdine emails me to note that he officially parted ways with Aileron back in 2010, to avoid conflicts of interest with his other venture capital work. His lab has continued to investigate stapled peptides on their own, though.
Category: Biological News | Business and Markets | Natural Products
July 1, 2013
Another cannon has gone off in the noncoding-genome wars. Here's a paper in PLOS Genetics detailing what the authors are calling Long Intergenic Noncoding RNAs (lincRNAs):
Known protein coding gene exons compose less than 3% of the human genome. The remaining 97% is largely uncharted territory, with only a small fraction characterized. The recent observation of transcription in this intergenic territory has stimulated debate about the extent of intergenic transcription and whether these intergenic RNAs are functional. Here we directly observed with a large set of RNA-seq data covering a wide array of human tissue types that the majority of the genome is indeed transcribed, corroborating recent observations by the ENCODE project. Furthermore, using de novo transcriptome assembly of this RNA-seq data, we found that intergenic regions encode far more long intergenic noncoding RNAs (lincRNAs) than previously described, helping to resolve the discrepancy between the vast amount of observed intergenic transcription and the limited number of previously known lincRNAs. In total, we identified tens of thousands of putative lincRNAs expressed at a minimum of one copy per cell, significantly expanding upon prior lincRNA annotation sets. These lincRNAs are specifically regulated and conserved rather than being the product of transcriptional noise. In addition, lincRNAs are strongly enriched for trait-associated SNPs suggesting a new mechanism by which intergenic trait-associated regions may function.
Emphasis added, because that's been one of the key points in this debate. The authors regard the ENCODE data as "firmly establishing the reality of pervasive transcription", so you know where their sympathies lie. And their results are offered up as a strong corroboration of the ENCODE work, with lincRNAs serving as the, well, missing link.
One thing I notice is that these new data strongly suggest that many of these RNAs are expressed at very low levels. The authors set cutoffs for "fragments per kilobase of transcript per million mapped reads" (FPKM), discarding everything that came out as less than 1 (roughly one copy per cell). The set of RNAs with FPKM>1 is over 50,000. If you ratchet up a bit, things drop off steeply, though. FPKM>10 knocks that down to between three and four thousand, and FPKM>30 gives you 925 lincRNAs. My guess is that those are where the next phase of this debate will take place, since those expression levels get you away from the noise. But the problem is that the authors are explicitly making the case for thousands upon thousands of lincRNAs being important, and this interpretation won't be satisfied with everyone agreeing on a few hundred new transcripts. These things also seem to be very tissue-specific, so it looks like the arguing is going to get very granular indeed.
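For those keeping score at home, FPKM is just a length- and depth-normalized fragment count, and "FPKM = 1" is the rough one-copy-per-cell line the authors draw. The arithmetic (the standard definition, nothing specific to this paper - the example numbers are made up):

```python
def fpkm(fragments, transcript_length_bp, total_mapped_fragments):
    # Fragments per kilobase of transcript, per million mapped fragments
    return fragments / ((transcript_length_bp / 1e3) * (total_mapped_fragments / 1e6))

# A 2 kb transcript with 100 fragments in a 50-million-fragment library
# sits right at FPKM = 1.0, i.e. the roughly-one-copy-per-cell cutoff:
# fpkm(100, 2000, 50_000_000) -> 1.0
```

So the FPKM>30 set the paper ends up with corresponds to transcripts pulling in thirty times that many reads per kilobase - well clear of the noise floor, which is why I'd expect the argument to concentrate there.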
Here's a quote from the paper that sums up the two worldviews that are now fighting it out:
Almost half of all trait-associated SNPs (TASs) identified in genome-wide association studies are located in intergenic sequence while only a small portion are in protein coding gene exons. This curious observation points to an abundance of functional elements in intergenic sequence.
Or that curious observation could be telling you that there's something wrong with your genome-wide association studies. I lean towards that view, but the battles aren't over yet.
Category: Biological News
June 17, 2013
That's my take-away from this paper, which takes a deep look at a reconstituted beta-adrenergic receptor via fluorine NMR. There are at least four distinct states (two inactive ones, the active one, and an intermediate), and the relationships between them are different with every type of ligand that comes in. Even the ones that look similar turn out to have very different thermodynamics on their way to the active state. If you're into receptor signaling, you'll want to read this one closely - and if you're not, or not up for it, just take away the idea that the landscape is not a simple one. As you'd probably already guessed.
Note: this is a multi-institution list of authors, but it did catch my eye that David Shaw of Wall Street's D. E. Shaw does make an appearance. Good to see him keeping his hand in!
Category: Analytical Chemistry | Biological News | In Silico
June 13, 2013
Single-molecule techniques are really the way to go if you're trying to understand many types of biomolecules. But they're really difficult to realize in practice (a complaint that should be kept in context, given that many of these experiments would have sounded like science fiction not all that long ago). Here's an example of just that sort of thing: watching DNA polymerase actually, well, polymerizing DNA, one base at a time.
The authors, a mixed chemistry/physics team at UC Irvine, managed to attach the business end (the Klenow fragment) of DNA Polymerase I to a carbon nanotube (a mutated Cys residue and a maleimide on the nanotube did the trick). This gives you the chance to use the carbon nanotube as a field effect transistor, with changes in the conformation of the attached protein changing the observed current. It's stuff like this, I should add, that brings home to me the fact that it really is 2013, the relative scarcity of flying cars notwithstanding.
The authors had previously used this method to study attached lysozyme molecules (PDF, free author reprint access). That second link is a good example of the sort of careful brush-clearing work that has to be done with a new system like this: how much does altering that single amino acid change the structure and function of the enzyme you're studying? How do you pick which one to mutate? Does being up against the side of a carbon nanotube change things, and how much? It's potentially a real advantage that this technique doesn't require a big fluorescent label stuck to anything, but you have to make sure that attaching your test molecule to a carbon nanotube isn't even worse.
It turns out, reasonably enough, that picking the site of attachment is very important. You want something that'll respond conformationally to the actions of the enzyme, moving charged residues around close to the nanotube, but (at the same time) it can't be so crucial and wide-ranging that the activity of the system gets killed off by having these things so close, either. In the DNA polymerase study, the enzyme was about 33% less active than wild type.
And the authors do see current variations that correlate with what should be opening and closing of the enzyme as it adds nucleotides to the growing chain. Comparing the length of the generated DNA with the FET current, it appears that the enzyme incorporates a new base at least 99.8% of the time it tries to, and the mean time for this to happen is about 0.3 milliseconds. Interestingly, A-T pair formation takes a consistently longer time than C-G does, with the rate-limiting step occurring during the open conformation of the enzyme in each case.
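For readers curious how numbers like a 0.3 millisecond mean come out of a current trace, here's a generic dwell-time extraction sketch. This is a toy thresholding example with an invented trace, not the authors' actual analysis pipeline (real FET data would be far noisier and need proper filtering):

```python
# Generic dwell-time extraction from a two-level signal.
# The trace, threshold, and sampling interval are invented for illustration.
def dwell_times(trace, threshold, dt):
    """Return durations (seconds) of consecutive runs below threshold
    (the 'closed' conformation in this toy picture)."""
    durations = []
    run = 0
    for sample in trace:
        if sample < threshold:
            run += 1
        elif run:
            durations.append(run * dt)
            run = 0
    if run:
        durations.append(run * dt)
    return durations

# Toy current trace sampled every 0.1 ms: high = open, low = closed.
trace = [1.0, 1.0, 0.2, 0.2, 0.2, 1.0, 0.2, 0.2, 1.0]
closed = dwell_times(trace, threshold=0.5, dt=1e-4)
mean_ms = 1e3 * sum(closed) / len(closed)
print(closed)   # two closed events
print(mean_ms)  # mean dwell time in milliseconds
```

The real experiment does something conceptually similar at scale: segment the current record into conformational states, then build up dwell-time statistics per incorporation event.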
I look forward to more applications of this idea. There's a lot about enzymes that we don't know, and these sorts of experiments are the only way we're going to find out. At present, this technique looks to be a lot of work, but you can see it firming up before your eyes. It would be quite interesting to pick an enzyme that has several classes of inhibitor and watch what happens on this scale.
It's too bad that Arthur Kornberg, the discoverer of DNA Pol I, didn't quite live to see such an interrogation of the enzyme; he would have enjoyed it very much, I think. As an aside, that last link, with its quotes from the reviewers of the original manuscript, will cheer up anyone who's recently had what they thought was a good paper rejected by some journal. Kornberg's two papers only barely made it into JBC, but one year after a referee said "It is very doubtful that the authors are entitled to speak of the enzymatic synthesis of DNA", Kornberg was awarded the Nobel for just that.
Category: Analytical Chemistry | Biological News | The Scientific Literature
May 22, 2013
Just how many different small-molecule binding sites are there? That's the subject of this new paper in PNAS, from Jeffrey Skolnick and Mu Gao at Georgia Tech, which several people have sent along to me in the last couple of days.
This question has a lot of bearing on questions of protein evolution. The paper's intro brings up two competing hypotheses of how protein function evolved. One, the "inherent functionality model", assumes that primitive binding pockets are a necessary consequence of protein folding, and that the effects of small molecules on these (probably quite nonspecific) motifs have been honed by evolutionary pressures since then. (The wellspring of this idea is this paper from 1976, by Jensen, and this paper will give you an overview of the field). The other way it might have worked, the "acquired functionality model", would be the case if proteins tend, in their "unevolved" states, to be more spherical, in which case binding events must have been much more rare, but also much more significant. In that system, the very existence of binding pockets themselves is what's under the most evolutionary pressure.
The Skolnick paper references this work from the Hecht group at Princeton, which already provides evidence for the first model. In that paper, a set of near-random 4-helical-bundle proteins was produced in E. coli - the only patterning was a rough polar/nonpolar alternation in amino acid residues. Nonetheless, many members of this unplanned family showed real levels of binding to things like heme, and many even showed above-background levels of several types of enzymatic activity.
In this new work, Skolnick and Gao produce a computational set of artificial proteins (called the ART library in the text), made up of nothing but poly-leucine. These were modeled to the secondary structure of known proteins in the PDB, to produce natural-ish proteins (from a broad structural point of view) that have no functional side chain residues themselves. Nonetheless, they found that the small-molecule-sized pockets of the ART set actually match up quite well with those found in real proteins. But here's where my technical competence begins to run out, because I'm not sure that I understand what "match up quite well" really means here. (If you can read through this earlier paper of theirs at speed, you're doing better than I can). The current work says that "Given two input pockets, a template and a target, (our algorithm) evaluates their PS-score, which measures the similarity in their backbone geometries, side-chain orientations, and the chemical similarities between the aligned pocket-lining residues." And that's fine, but what I don't know is how well it does that. I can see poly-Leu giving you pretty standard backbone geometries and side-chain orientations (although isn't leucine a little more likely than average to form alpha-helices?), but when we start talking chemical similarities between the pocket-lining residues, well, how can that be?
But I'm even willing to go along with the main point of the paper, which is that there are not-so-many types of small-molecule binding pockets, even if I'm not so sure about their estimate of how many there are. For the record, they're guessing not many more than about 500. And while that seems low to me, it all depends on what we mean by "similar". I'm a medicinal chemist, someone who's used to seeing "magic methyl effects" where very small changes in ligand structure can make big differences in binding to a protein. And that makes me think that I could probably take a set of binding pockets that Skolnick's people would call so similar as to be basically identical, and still find small molecules that would differentiate them. In fact, that's a big part of my job.
In general, I see the point they're making, but it's one that I've already internalized. There are a finite number of proteins in the human body. Fifty thousand? A couple of hundred thousand? Probably not a million. Not all of these have small-molecule binding sites, for sure, so there's a smaller set to deal with right there. Even if those binding sites were completely different from one another, we'd be looking at a set of binding pockets in the thousands/tens of thousands range, most likely. But they're not completely different, as any medicinal chemist knows: try to make a selective muscarinic agonist, or a really targeted serine hydrolase inhibitor, and you'll learn that lesson quickly. And anyone who's run their drug lead through a big selectivity panel has seen the sorts of off-target activities that come up: you hit some of the other members of your target's family to greater or lesser degree. You hit the flippin' sigma receptor, not that anyone knows what that means. You hit the hERG channel, and good luck to you then. Your compound is a substrate for one of the CYP enzymes, or it binds tightly to serum albumin. Who has ever seen a compound that binds only to its putative target? And this is only with the counterscreens we have, which is a small subset of the things that are really out there in cells.
And that takes me to my main objection to this paper. As I say, I'm willing to stipulate, gladly, that there are only so many types of binding pockets in this world (although I think that it's more than 500). But this sort of thing is what I have a problem with:
". . .we conclude that ligand-binding promiscuity is likely an inherent feature resulting from the geometric and physical–chemical properties of proteins. This promiscuity implies that the notion of one molecule–one protein target that underlies many aspects of drug discovery is likely incorrect, a conclusion consistent with recent studies. Moreover, within a cell, a given endogenous ligand likely interacts at low levels with multiple proteins that may have different global structures.
"Many aspects of drug discovery" assume that we're only hitting one target? Come on down and try that line out in a drug company, and be prepared for rude comments. Believe me, we all know that our compounds hit other things, and we all know that we don't even know the tenth of it. This is a straw man; I don't know of anyone doing drug discovery that has ever believed anything else. Besides, there are whole fields (CNS) where polypharmacy is assumed, and even encouraged. But even when we're targeting single proteins, believe me, no one is naive enough to think that we're hitting those alone.
Other aspects of this paper, though, are fine by me. As the authors point out, this sort of thing has implications for drawing evolutionary family trees of proteins - we should not assume too much when we see similar binding pockets, since these may well have a better chance of being coincidence than we think. And there are also implications for origin-of-life studies: this work (and the other work in the field, cited above) imply that a random collection of proteins could still display a variety of functions. Whether these are good enough to start assembling a primitive living system is another question, but it may be that proteinaceous life has an easier time bootstrapping itself than we might imagine.
Category: Biological News | In Silico | Life As We (Don't) Know It
May 15, 2013
Speaking about open-source drug discovery (such as it is) and sharing of data sets (such as they are), I really should mention a significant example in this area: the GSK Published Kinase Inhibitor Set. (It was mentioned in the comments to this post). The company has made 367 compounds available to any academic investigator working in the kinase field, as long as they make their results publicly available (at ChEMBL, for example). The people at GSK doing this are David Drewry and William Zuercher, for the record - here's a recent paper from them and their co-workers on the compound set and its behavior in reporter-gene assays.
Why are they doing this? To seed discovery in the field. There's an awful lot of chemical biology to be done in the kinase field, far more than any one organization could take on, and the more sets of eyes (and cerebral cortices) that are on these problems, the better. So far, there have been about 80 collaborations, mostly in Europe and North America, all the way from broad high-content phenotypic screening to targeted efforts against rare tumor types.
The plan is to continue to firm up the collection, making more data available for each compound as work is done on them, and to add more compounds with different selectivity profiles and chemotypes. Now, the compounds so far are all things that have been published on by GSK in the past, obviating concerns about IP. There are, though, a multitude of other compounds in the literature from other companies, and you have to think that some of these would be useful additions to the set. How, though, does one get this to happen? That's the stage that things are in now. Beyond that, there's the possibility of some sort of open network to optimize entirely new probes and tools, but there's plenty that could be done even before getting to that stage.
So if you're in academia, and interested in kinase pathways, you absolutely need to take a look at this compound set. And for those of us in industry, we need to think about the benefits that we could get by helping to expand it, or by starting similar efforts of our own in other fields. The science is big enough for it. Any takers?
Category: Academia (vs. Industry) | Biological News | Chemical News | Drug Assays
May 13, 2013
I notice that the recent sequencing of the bladderwort plant is being played in the press in an interesting way: as the definitive refutation of the idea that "junk DNA" is functional. That's quite an about-face from the coverage of the ENCODE consortium's take on human DNA, the famous "80% Functional, Death of Junk DNA Idea" headlines. A casual observer, if there are casual observers of this sort of thing, might come away just a bit confused.
Both types of headlines are overblown, but I think that one set is more overblown than the other. The minimalist bladderwort genome (8.2 x 10^7 base pairs) is only about half the size of that of Arabidopsis thaliana, which rose to fame as a model organism in plant molecular biology partly because of its tiny genome. By contrast, humans (who make up so much of my readership) have about 3 x 10^9 base pairs, almost 40 times as many as the bladderwort. (I stole that line from G. K. Chesterton, by the way; it's from the introduction to The Napoleon of Notting Hill)
But pine trees have eight times as many base pairs as we do, so it's not a plant-versus-animal thing. And as Ed Yong points out in this excellent post on the new work, the Japanese canopy plant comes in at 1.5 x 10^11 base pairs, fifty times the size of the human genome and two thousand times the size of the bladderwort. This is the same problem as the marbled lungfish versus pufferfish one that I wrote about here, and it's not a new problem at all. People have been wondering about genome sizes ever since they were able to estimate the size of genomes, because it became clear very quickly that they varied hugely and according to patterns that often make little sense to us.
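The ratios quoted above are easy to check from the genome sizes mentioned in these paragraphs:

```python
# Genome sizes (base pairs) as quoted in the post.
genomes = {
    "bladderwort": 8.2e7,
    "human": 3e9,
    "Japanese canopy plant": 1.5e11,
}

print(genomes["human"] / genomes["bladderwort"])                  # ~37, "almost 40 times"
print(genomes["Japanese canopy plant"] / genomes["human"])        # 50
print(genomes["Japanese canopy plant"] / genomes["bladderwort"])  # ~1830, "two thousand times"
```

Three plants and one animal spanning more than three orders of magnitude in genome size: that's the pattern any "it's all functional" account has to explain.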
That's why the ENCODE hype met (and continues to meet) with such a savage reception. It did nothing to address this issue, and seemed, in fact, to pretend that it wasn't an issue at all. Function, function, everywhere you look, and if that means that you just have to accept that the Japanese canopy plant needs the most wildly complex functional DNA architecture in the living world, well, isn't Nature just weird that way?
Category: Biological News
April 25, 2013
A lot of people (and I'm one of them) have been throwing the word "epigenetic" around a lot. But what does it actually mean - or what is it supposed to mean? That's the subject of a despairing piece from Mark Ptashne of Sloan-Kettering in a recent PNAS. He noted this article in the journal, one of their "core concepts" series, and probably sat down that evening to write his rebuttal.
When we talk about the readout of genes - transcription - we are, he emphasizes, talking about processes that we have learned many details about. The RNA Polymerase II complex is very well conserved among living organisms, as well it should be, and its motions along strands of DNA have been shown to be very strongly affected by the presence and absence of protein transcription factors that bind to particular DNA regions. "All this is basic molecular biology, people", he does not quite say, although you can pick up the thought waves pretty clearly.
So far, so good. But here's where, conceptually, things start going into the ditch:
Patterns of gene expression underlying development can be very complex indeed. But the underlying mechanism by which, for example, a transcription activator activates transcription of a gene is well understood: only simple binding interactions are required. These binding interactions position the regulator near the gene to be regulated, and in a second binding reaction, the relevant enzymes, etc., are brought to the gene. The process is called recruitment. Two aspects are especially important in the current context: specificity and memory.
Specificity, naturally, is determined by the location of regulatory sequences within the genome. If you shuffle those around deliberately, you can make a variety of regulators work on a variety of genes in a mix-and-match fashion (and indeed, doing this is the daily bread of molecular biologists around the globe). As for memory, the point is that you have to keep recruiting the relevant enzymes if you want to keep transcribing; these aren't switches that flip on or off forever. And now we get to the bacon-burning part:
Curiously, the picture I have just sketched is absent from the Core Concepts article. Rather, it is said, chemical modifications to DNA (e.g., methylation) and to histones— the components of nucleosomes around which DNA is wrapped in higher organisms—drive gene regulation. This obviously cannot be true because the enzymes that impose such modifications lack the essential specificity: All nucleosomes, for example, “look alike,” and so these enzymes would have no way, on their own, of specifying which genes to regulate under any given set of conditions. . .
. . .Histone modifications are called “epigenetic” in the Core Concepts article, a word that for years has implied memory . . . This is odd: It is true that some of these modifications are involved in the process of transcription per se—facilitating removal and replacement of nucleosomes as the gene is transcribed, for example. And some are needed for certain forms of repression. But all attempts to show that such modifications are “copied along with the DNA,” as the article states, have, to my knowledge, failed. Just as transcription per se is not “remembered” without continual recruitment, so nucleosome modifications decay as enzymes remove them (the way phosphatases remove phosphates put in place on proteins by kinases), or as nucleosomes, which turn over rapidly compared with the duration of a cell cycle, are replaced. For example, it is simply not true that once put in place such modifications can, as stated in the Core Concepts article, “lock down forever” expression of a gene.
Now it does happen, Ptashne points out, that some developmental genes, once activated by a transcription factor, do seem to stay on for longer periods of time. But this takes place via feedback loops - the original gene, once activated, produces the transcription factor that causes another gene to be read off, and one of its products is actually the original transcription factor for the first gene, which then causes the second to be read off again, and so on, pinging back and forth. But "epigenetic" has been used in the past to imply memory, and modifying histones is not a process with enough memory in it, he says, to warrant the term. They are ". . .parts of a response, not a cause, and there is no convincing evidence they are self-perpetuating".
What we have here, as Strother Martin told us many years ago, is a failure to communicate. The biologists who have been using the word "epigenetic" in its original sense (which Ptashne and others would tell you is not only the original sense, but the accurate and true one), have seen its meaning abruptly hijacked. (The Wikipedia entry on epigenetics is actually quite good on this point, or at least it was this morning). A large crowd that previously paid little attention to these matters now uses "epigenetic" to mean "something that affects transcription by messing with histone proteins". And as if that weren't bad enough, articles like the one that set off this response have completed the circle of confusion by claiming that these changes are somehow equivalent to genetics itself, a parallel universe of permanent changes separate from the DNA sequence.
I sympathize with him. But I think that this battle is better fought on the second point than the first, because the first one may already be lost. There may already be too many people who think of "epigenetic" as meaning something to do with changes in expression via histones, nucleosomes, and general DNA unwinding/presentation factors. There really does need to be a word to describe that suite of effects, and this (for better or worse) now seems as if it might be it. But the second part, the assumption that these are necessarily permanent, instead of mostly being another layer of temporary transcriptional control, that does need to be straightened out, and I think that it might still be possible.
Category: Biological News
April 23, 2013
Here's a fine piece from Matthew Herper over at Forbes on an IBM/Roche collaboration in gene sequencing. IBM had an interesting technology platform in the area, which they modestly called the "DNA transistor". For a while, it was going to be the Next Big Thing in the field (and the material at that last link was apparently written during that period). But sequencing is a very competitive area, with a lot of action in it these days, and, well. . .things haven't worked out.
Today Roche announced that they're pulling out of the collaboration, and Herper has some thoughts about what that tells us. His thoughts on the sequencing business are well worth a look, but I was particularly struck by this one:
Biotech is not tech. You’d think that when a company like IBM moves into a new field in biology, its vast technical expertise and innovativeness would give it an advantage. Sometimes, maybe, it does: with its supercomputer Watson, IBM actually does seem to be developing a technology that could change the way medicine is practiced, someday. But more often than not the opposite is true. Tech companies like IBM, Microsoft, and Google actually have dismal records of moving into medicine. Biology is simply not like semiconductors or software engineering, even when it involves semiconductors or software engineering.
And I'm not sure how much of the Watson business is hype, either, when it comes to biomedicine (a nonzero amount, at any rate). But Herper's point is an important one, and it's one that's been discussed many times on this site as well. This post is a good catch-all for them - it links back to the locus classicus of such thinking, the famous "Can A Biologist Fix a Radio?" article, as well as to more recent forays like Andy Grove (ex-Intel) and his call for drug discovery to be more like chip design. (Here's another post on these points).
One of the big mistakes that people make is in thinking that "technology" is a single category of transferrable expertise. That's closely tied to another big (and common) mistake, that of thinking that the progress in computing power and electronics in general is the way that all technological progress works. (That, to me, sums up my problems with Ray Kurzweil). The evolution of microprocessing has indeed been amazing. Every field that can be improved by having more and faster computational power has been touched by it, and will continue to be. But if computation is not your rate-limiting step, then there's a limit to how much work Moore's Law can do for you.
And computational power is not the rate-limiting step in drug discovery or in biomedical research in general. We do not have polynomial-time algorithms for predictive toxicology, or for models of human drug efficacy. We hardly have any algorithms at all. Anyone who feels like remedying this lack (and making a few billion dollars doing so) is welcome to step right up.
Note: it's been pointed out in the comments that cost-per-base of DNA sequencing has been dropping at an even faster than Moore's Law rate. So there is technological innovation going on in the biomedical field, outside of sheer computational power, but I'd still say that understanding is the real rate limiter. . .
Category: Analytical Chemistry | Biological News | Drug Industry History
There's a possible new area for drug discovery that's coming from a very unexpected source: enzymes that don't do anything. About ten years ago, when the human genome was getting its first good combing-through, one of the first enzyme categories to get the full treatment was the kinases. But about ten per cent of them, on closer inspection, seemed to lack one or more key catalytic residues, leaving them with no known way to be active. They were dubbed (with much puzzlement) "pseudokinases", with their functions, if any, unknown.
As time went on and sequences piled up, the same situation was found for a number of other enzyme categories. One family in particular, the sulfotransferases, seems to have at least half of its putative members inactivated, which doesn't make a lot of sense, because these things also seem to be under selection pressure. So they're doing something, but what?
Answers are starting to be filled in. Here's a paper from last year, on some of the possibilities, and this article from Science is an excellent survey of the field. It turns out that many of these seem to have a regulatory function, often on their enzymatically active relations. Some of these pseudoenzymes retain the ability to bind their original substrates, and those events may also have a regulatory function in their downstream protein interactions. So these things may be a whole class of drug targets that we haven't screened for - and in fact may be a set of proteins that we're already hitting with some of our ligands, but with no idea that we're doing so. I doubt if anyone in drug discovery has ever bothered counterscreening against any of them, but it looks like that should change. Update: I stand corrected. See the comment thread for more.
This illustrates a few principles worth keeping in mind: first, that if something is under selection pressure, it surely has a function, even if you can't figure out how or why. (A corollary is that if some sequence doesn't seem to be under such constraints, it probably doesn't have much of a function at all, but as those links show, this is a contentious topic). Next, we should always keep in mind that we don't really know as much about cell biology as we think we do; there are lots of surprises and overlooked things waiting for us. And finally, any of those that appear to have (or retain) small-molecule binding sites are very much worth the attention of medicinal chemists, because so many other possible targets have nothing of the kind, and are a lot harder to deal with.
Category: Biological News
April 18, 2013
I've linked to some very skeptical takes on the ENCODE project, the effort that supposedly identified 80% of our DNA sequence as functional to some degree. I should present some evidence for the other side, though, as it comes up, and some may have come up.
Two recent papers in Cell tell the story. The first proposes "super-enhancers" as regulators of gene transcription. (Here's a brief summary of both). These are clusters of known enhancer sequences, which seem to recruit piles of transcription factors, and act differently from the single-enhancer model. The authors show evidence that these are involved in cell differentiation, and could well provide one of the key systems for determining eventual cellular identity from pluripotent stem cells.
Interest in further understanding the importance of Mediator in ESCs led us to further investigate enhancers bound by the master transcription factors and Mediator in these cells. We found that much of enhancer-associated Mediator occupies exceptionally large enhancer domains and that these domains are associated with genes that play prominent roles in ESC biology. These large domains, or super-enhancers, were found to contain high levels of the key ESC transcription factors Oct4, Sox2, Nanog, Klf4, and Esrrb to stimulate higher transcriptional activity than typical enhancers and to be exceptionally sensitive to reduced levels of Mediator. Super-enhancers were found in a wide variety of differentiated cell types, again associated with key cell-type-specific genes known to play prominent roles in control of their gene expression program.
On one level, this is quite interesting, because cellular differentiation is a process that we really need to know a lot more about (the medical applications are enormous). But as a medicinal chemist, this sort of news sort of makes me purse my lips, because we have enough trouble dealing with the good old fashioned transcription factors (whose complexes of proteins were already large enough, thank you). What role there might be for therapeutic intervention in these super-complexes, I couldn't say.
The second paper has more on this concept. They find that these "super-enhancers" are also important in tumor cells (which would make perfect sense), and that they tie into two other big stories in the field, the epigenetic regulator BRD4 and the multifunctional protein cMyc:
Here, we investigate how inhibition of the widely expressed transcriptional coactivator BRD4 leads to selective inhibition of the MYC oncogene in multiple myeloma (MM). BRD4 and Mediator were found to co-occupy thousands of enhancers associated with active genes. They also co-occupied a small set of exceptionally large super-enhancers associated with genes that feature prominently in MM biology, including the MYC oncogene. Treatment of MM tumor cells with the BET-bromodomain inhibitor JQ1 led to preferential loss of BRD4 at super-enhancers and consequent transcription elongation defects that preferentially impacted genes with super-enhancers, including MYC. Super-enhancers were found at key oncogenic drivers in many other tumor cells.
About 3% of the enhancers found in the multiple myeloma cell line turned out to be tenfold-larger super-enhancer complexes, which bring in about ten times as much BRD4. It's been recently discovered that small-molecule ligands for BRD4 have a large effect on the cMyc pathway, and now we may know one of the ways that happens. So that might be part of the answer to the question I posed above: how do you target these things with drugs? Find one of the proteins that it has to recruit in large numbers, and mess up its activity at a small-molecule binding site. And if these giant complexes are even more sensitive to disruptions in these key proteins than usual (as the paper hypothesizes), then so much the better.
It's fortunate that chromatin-remodeling proteins such as BRD4 are (at least in some cases) filling that role, because they have pretty well-defined binding pockets that we can target. Direct targeting of cMyc, by contrast, has been quite difficult indeed (here's a new paper with some background on what's been accomplished so far).
Now, to the level of my cell biology expertise, the evidence in these papers looks reasonably good. I'm certainly willing to believe that there are levels of transcriptional control beyond those that we've realized so far, weary sighs of a chemist aside. But I'll be interested to see the arguments over this concept play out. For example, if these very long stretches of DNA turn out indeed to be so important, how sensitive are they to mutation? One of the key objections to the ENCODE consortium's interpretation of their data is that much of what they're calling "functional" DNA seems to have little trouble drifting along and picking up random mutations. It will be worth applying this analysis to these super-regulators, but I haven't seen that done yet.
Category: Biological News | Cancer
March 22, 2013
I've written a couple of times about the work at the University of Pennsylvania on modified T-cell therapy for leukemia (CLL). Now comes word that a different version of this approach seems to be working at Sloan-Kettering. Recurrent B-cell acute lymphoblastic leukemia (B-ALL) has been targeted there, and it's generally a more aggressive disease than CLL.
As with the Penn CLL studies, when this technique works, it can be dramatic:
One of the sickest patients in the study was David Aponte, 58, who works on a sound crew for ABC News. In November 2011, what he thought was a bad case of tennis elbow turned out to be leukemia. He braced himself for a long, grueling regimen of chemotherapy.
Brentjens suggested that before starting the drugs, Aponte might want to have some of his T-cells stored (chemotherapy would deplete them). That way, if he relapsed, he might be able to enter a study using the cells. Aponte agreed.
At first, the chemo worked, but by summer 2012, while he was still being treated, tests showed the disease was back.
“After everything I had gone through, the chemo, losing hair, the sickness, it was absolutely devastating,’’ Aponte recalled.
He joined the T-cell study. For a few days, nothing seemed to be happening. But then his temperature began to rise. He has no memory of what happened for the next week or so, but the journal article — where he is patient 5 — reports that his fever spiked to 105 degrees.
He was in the throes of a ‘‘cytokine storm,’’ meaning that the T-cells, in a furious battle with the cancer, were churning out enormous amounts of hormones called cytokines. Besides fever, the hormonal rush can make a patient’s blood pressure plummet and his heart rate shoot up. Aponte was taken to intensive care and treated with steroids to quell the reaction.
Eight days later, his leukemia was gone.
He and the other patients in the study all received bone marrow transplantations after the treatment, and are considered cured - which is remarkable, since they were all relapsed/refractory, and thus basically at death's door. These stories sound like the ones from the early days of antibiotics, with the important difference that resistance to drug therapy doesn't spread through the world's population of cancer cells. The modified T-cell approach has already gotten a lot of attention, and this is surely going to speed things up even more. I look forward to the first use of it for a non-blood-cell tumor (which appears to be in the works) and to further refinements in generating the cells themselves.
Category: Biological News | Cancer | Clinical Trials
March 21, 2013
AstraZeneca has announced another 2300 job cuts, this time in sales and administration. That's not too much of a surprise, as the cuts announced recently in R&D make it clear that the company is determined to get smaller. But their overall R&D strategy is still unclear, other than "We can't go on like this", which is clear enough.
One interesting item has just come out, though. The company has done a deal with Moderna Therapeutics of Cambridge (US), a relatively new outfit that's trying something that (as far as I know) no one else has had the nerve to try. Moderna is trying to use messenger RNAs as therapies, to stimulate the body's own cells to produce more of some desired protein product. This is the flip side of antisense and RNA interference, where you throw a wrench into the transcription/translation machinery to cut down on some protein. Moderna's trying to make the wheels spin in the other direction.
This is the sort of idea that makes me feel as if there are two people inhabiting my head. One side of me is very excited and interested to see if this approach will work, and the other side is very glad that I'm not one of the people being asked to do it. I've always thought that messing up or blocking some process was an easier task than making it do the right thing (only more so), and in this case, we haven't even reliably shown that blocking such RNA pathways is a good way to a therapy.
I also wonder about the disease areas that such a therapy would treat, and how amenable they are to the approach. The first one that occurs to a person is "Allow Type I diabetics to produce their own insulin", but if your islet cells have been disrupted or killed off, how is that going to work? Will other cell types recognize the mRNA-type molecules you're giving, and make some insulin themselves? If they do, what sort of physiological control will they be under? Beta-cells, after all, are involved in a lot of complicated signaling to tell them when to make insulin and when to lay off. I can also imagine this technique being used for a number of genetic disorders, where we know what the defective protein is and what it's supposed to be. But again, how does the mRNA get to the right tissues at the right time? Protein expression is under so many constraints and controls that it seems almost foolhardy to think that you could step in, dump some mRNA on the process, and get things to work the way that you want them to.
But all that said, there's no substitute for trying it out. And the people behind Moderna are not fools, either, so you can be sure that these questions (and many more) have crossed their minds already. (The company's press materials claim that they've addressed the cellular-specificity problem, for example). They've gotten a very favorable deal from AstraZeneca - admittedly a rather desperate company - but good enough that they must have a rather convincing story to tell with their internal data. This is the very picture of a high-risk, high-reward approach, and I wish them success with it. A lot of people will be watching very closely.
Category: Biological News | Business and Markets | Drug Development
March 15, 2013
There's another paper out expressing worries about the interpretation of the ENCODE data. (For the last round, see here). The wave of such publications seems to be largely a function of how quickly the various authors could assemble their manuscripts, and how quickly the review process has worked at the various journals. You get the impression that a lot of people opened up new word processor windows and started typing furiously right after all the press releases last fall.
This one, from W. Ford Doolittle at Dalhousie, explicitly raises a thought experiment that I think has occurred to many critics of the ENCODE effort. (In fact, it's the very one that showed up in a comment here to the last post I did on the subject). Here's how it goes: The expensive, toxic, only-from-licensed-sushi-chefs pufferfish (Takifugu rubripes) has about 365 million base pairs, with famously little of it looking like junk. By contrast, the marbled lungfish (Protopterus aethiopicus) has a humungous genome, 133 billion base pairs, which is apparently enough to code for three hundred different puffer fish with room to spare. Needless to say, the lungfish sequence features vast stretches of apparent junk DNA. Or does it need saying? If an ENCODE-style effort had used the marbled lungfish instead of humans as its template, would it have told us that 80% of its genome was functional? If it had done the pufferfish simultaneously, what would it have said about the difference between the two?
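The arithmetic behind that thought experiment is worth a quick back-of-the-envelope check (genome sizes as quoted above):

```python
# Genome sizes quoted above, in base pairs
fugu_bp = 365_000_000          # Takifugu rubripes, ~365 million bp
lungfish_bp = 133_000_000_000  # Protopterus aethiopicus, ~133 billion bp

ratio = lungfish_bp / fugu_bp
print(f"One lungfish genome holds ~{ratio:.0f} pufferfish genomes")
# ~364 - "three hundred different puffer fish with room to spare" indeed
```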
I'm glad that the new PNAS paper lays this out, because to my mind, that's a damned good question. One ENCODE-friendly answer is that the marbled lungfish has been under evolutionary pressure that the fugu pufferfish hasn't, and that it needs many more regulatory elements, spacers, and so on. But that, while not impossible, seems to be assuming the conclusion a bit too much. We can't look at a genome, decide that whatever we see is good and useful just because it's there, and then work out what its function must then be. That seems a bit too Panglossian: all is for the best in the best of all possible genomes, and if a lungfish needs one three hundred times larger than the fugu fish, well, it must be three hundred times harder to be a lungfish? Such a disparity between the genomes of two organisms, both of them (to a first approximation) running the "fish program", could also be explained by there being little evolutionary pressure against filling your DNA sequence with old phone books.
Here's an editorial at Nature about this new paper:
There is a valuable and genuine debate here. To define what, if anything, the billions of non-protein-coding base pairs in the human genome do, and how they affect cellular and system-level processes, remains an important, open and debatable question. Ironically, it is a question that the language of the current debate may detract from. As Ewan Birney, co-director of the ENCODE project, noted on his blog: “Hindsight is a cruel and wonderful thing, and probably we could have achieved the same thing without generating this unneeded, confusing discussion on what we meant and how we said it”
He's right - the ENCODE team could have presented their results differently, but doing that would not have made a gigantic splash in the world press. There wouldn't have been dozens of headlines proclaiming the "end of junk DNA" and the news that 80% of the genome is functional. "Scientists unload huge pile of genomic data analysis" doesn't have the same zing. And there wouldn't have been the response inside the industry that has, in fact, occurred. This comment from my first blog post on the subject is still very much worth keeping in mind:
With my science hat on I love this stuff, stepping into the unknown, finding stuff out. With my pragmatic, applied science, hard-nosed Drug Discovery hat on, I know that it is not going to deliver over the time frame of any investment we can afford to make, so we should stay away.
However, in my big Pharma, senior leaders are already jumping up and down, fighting over who is going to lead the new initiative in this exciting new area, who is going to set up a new group, get new resources, set up collaborations, get promoted etc. Oh, and deliver candidates within 3 years.
Our response to new basic science is dumb and we are failing our investors and patients. And we don't learn.
Category: Biological News
March 7, 2013
Every so often I've mentioned some of the work being done with atomic force microscopy (AFM), and how it might apply to medicinal chemistry. It's been used to confirm a natural product structural assignment, and then there are images like these. Now comes a report of probing a binding site with the technique. The experimental setup is shown at left. The group (a mixed team from Linz, Vienna, and Berlin) reconstituted functional uncoupling protein 1 (UCP1) in a lipid bilayer on a mica surface. Then they ran two different kinds of AFM tips across them - one with an ATP molecule attached, and another with an anti-UCP1 antibody, and with tethers of different lengths on them as well.
What they found was that ATP seems to be able to bind to either side of the protein (some of the UCPs in the bilayer were upside down). There also appears to be only one nucleotide binding site per UCP (in accordance with the sequence). That site is about 1.27 nm down into the central pore, which could well be a particular residue (R182) that is thought to protrude into the pore space. Interestingly, although ATP can bind while coming in from either direction, it has to go in deeper from one side than the other (which shows up in the measurements with different tether lengths). And this leads to the hypothesis that the deeper-binding mode sets off conformational changes in the protein that the shallow-binding mode doesn't - which could explain how the protein is able to function while its cytosolic side is being exposed to high concentrations of ATP.
For some reason, these sorts of direct physical measurements weird me out more than spectroscopic studies. Shining light or X-rays into something (or putting it into a magnetic field) just seems more removed. But a single molecule on an AFM tip seems, when a person's hand is on the dial, to somehow be the equivalent of a long, thin stick that we're using to poke the atomic-level structure. What can I say; a vivid imagination is no particular handicap in this business!
Category: Analytical Chemistry | Biological News
February 25, 2013
Last fall we had the landslide of data from the ENCODE project, along with a similar landslide of headlines proclaiming that 80% of the human genome was functional. That link shows that many people (myself included) were skeptical of this conclusion at the time, and since then others have weighed in with their own doubts.
A new paper, from Dan Graur at Houston (and co-authors from Houston and Johns Hopkins) is really stirring things up. And whether you agree with its authors or not, it's well worth reading - you just don't see thunderous dissents like this one in the scientific literature very often. Here, try this out:
Thus, according to the ENCODE Consortium, a biological function can be maintained indefinitely without selection, which implies that (at least 70%) of the genome is perfectly invulnerable to deleterious mutations, either because no mutation can ever occur in these “functional” regions, or because no mutation in these regions can ever be deleterious. This absurd conclusion was reached through various means, chiefly (1) by employing the seldom used “causal role” definition of biological function and then applying it inconsistently to different biochemical properties, (2) by committing a logical fallacy known as “affirming the consequent,” (3) by failing to appreciate the crucial difference between “junk DNA” and “garbage DNA,” (4) by using analytical methods that yield biased errors and inflate estimates of functionality, (5) by favoring statistical sensitivity over specificity, and (6) by emphasizing statistical significance rather than the magnitude of the effect.
Other than that, things are fine. The paper goes on to detailed objections in each of those categories, and the tone does not moderate. One of the biggest objections is around the use of the word "function". The authors are at pains to distinguish selected effect functions from causal role functions, and claim that one of the biggest shortcomings of the ENCODE claims is that they blur this boundary. "Selected effects" are what most of us think about as well-proven functions: a TATAAA sequence in the genome binds a transcription factor, with effects on the gene(s) downstream of it. If there is a mutation in this sequence, there will almost certainly be functional consequences (and these will almost certainly be bad). Imagine, however, a random sequence of nucleotides that's close enough to TATAAA to bind a transcription factor. In this case, there are no functional consequences - genes aren't transcribed differently, and nothing really happens other than the transcription factor parking there once in a while. That's a "causal role" function, and the whopping majority of the ENCODE functions appear to be in this class. "It looks sort of like something that has a function, therefore it has one". And while this can lead to discoveries, you have to be careful:
The causal role concept of function can lead to bizarre outcomes in the biological sciences. For example, while the selected effect function of the heart can be stated unambiguously to be the pumping of blood, the heart may be assigned many additional causal role functions, such as adding 300 grams to body weight, producing sounds, and preventing the pericardium from deflating onto itself. As a result, most biologists use the selected effect concept of function. . .
A mutation in that random TATAAA-like sequence would be expected to be silent, compared to what would happen in a real binding motif. So one would want to know what fraction of the genome is under selection pressure - that is, what part of it cannot be mutated without consequences. Those studies are where we get the figures of perhaps 10% of the DNA sequence being functional. Almost all of what ENCODE has declared to be functional, though, can pick up mutations with relative impunity:
From an evolutionary viewpoint, a function can be assigned to a DNA sequence if and only if it is possible to destroy it. All functional entities in the universe can be rendered nonfunctional by the ravages of time, entropy, mutation, and what have you. Unless a genomic functionality is actively protected by selection, it will accumulate deleterious mutations and will cease to be functional. The absurd alternative, which unfortunately was adopted by ENCODE, is to assume that no deleterious mutations can ever occur in the regions they have deemed to be functional. Such an assumption is akin to claiming that a television set left on and unattended will still be in working condition after a million years because no natural events, such as rust, erosion, static electricity, and earthquakes can affect it. The convoluted rationale for the decision to discard evolutionary conservation and constraint as the arbiters of functionality put forward by a lead ENCODE author (Stamatoyannopoulos 2012) is groundless and self-serving.
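That logic - a sequence under purifying selection keeps its identity, while a neutral copy drifts away - can be illustrated with a toy simulation. This is a deliberately crude sketch, not anyone's actual analysis; the mutation rate, generation count, and motif are all made-up parameters:

```python
import random

random.seed(1)
BASES = "ACGT"

def mutate(seq, rate=0.01):
    """Return seq with each base randomly resampled at the given rate."""
    return "".join(random.choice(BASES) if random.random() < rate else b
                   for b in seq)

def evolve(seq, generations, selected):
    """Neutral drift, or purifying selection that rejects any change to the motif."""
    motif = seq
    for _ in range(generations):
        candidate = mutate(seq)
        if selected and candidate != motif:
            continue  # deleterious change purged by selection
        seq = candidate
    return seq

start = "TATAAA"
neutral = evolve(start, 5000, selected=False)    # almost certainly drifts
conserved = evolve(start, 5000, selected=True)   # stays exactly "TATAAA"
print(neutral, conserved)
```

The point of the toy model is the contrast: only the selected copy is still recognizable after thousands of generations, which is exactly why sequence conservation is used as the evidence for function.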
Basically, if you can't destroy a function by mutation, then there is no function to destroy. Even the most liberal definitions take this principle to apply to about 15% of the genome at most, so the 80%-or-more figure really does stand out. But this paper has more than philosophical objections to the ENCODE work. They point out that the consortium used tumor cell lines for its work, and that these are notoriously permissive in their transcription. One of the principles behind the 80% figure is that "if it gets transcribed, it must have a function", but you can't say that about HeLa cells and the like, which read off all sorts of pseudogenes and such (introns, mobile DNA elements, etc.)
One of the other criteria the ENCODE studies used for assigning function was histone modification. Now, this bears on a lot of hot topics in drug discovery these days, because an awful lot of time and effort is going into such epigenetic mechanisms. But (as this paper notes), this recent study illustrated that not all histone modifications are equal - there may, in fact, be a large number of silent ones. Another ENCODE criterion had to do with open (accessible) regions of chromatin, but there's a potential problem here, too:
They also found that more than 80% of the transcription start sites were contained within open chromatin regions. In yet another breathtaking example of affirming the consequent, ENCODE makes the reverse claim, and adds all open chromatin regions to the “functional” pile, turning the mostly true statement “most transcription start sites are found within open chromatin regions” into the entirely false statement “most open chromatin regions are functional transcription start sites.”
Similar arguments apply to the 8.5% of the genome that ENCODE assigns to transcription factor binding sites. When you actually try to experimentally verify function for such things, the huge majority of them fall out. (It's also noted that there are some oddities in ENCODE's definitions here - for example, they seem to be annotating 500-base stretches as transcription factor binding sites, when most of the verified ones are below 15 bases in length).
Now, it's true that the ENCODE studies did try to address the idea of selection on all these functional sequences. But this new paper has a lot of very caustic things to say about the way this was done, and I'll refer you to it for the full picture. To give you some idea, though:
By choosing primate specific regions only, ENCODE effectively removed everything that is of interest functionally (e.g., protein coding and RNA-specifying genes as well as evolutionarily conserved regulatory regions). What was left consisted among others of dead transposable and retrotransposable elements. . .
. . .Because polymorphic sites were defined by using all three human samples, the removal of two samples had the unfortunate effect of turning some polymorphic sites into monomorphic ones. As a consequence, the ENCODE data includes 2,136 alleles each with a frequency of exactly 0. In a miraculous feat of “next generation” science, the ENCODE authors were able to determine the frequencies of nonexistent derived alleles.
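The sampling error being described is easy to reproduce in miniature. The genotypes below are entirely made up, but the structure of the mistake is the same: define "polymorphic" using all three samples, then compute allele frequencies after dropping two of them, and some derived alleles come out with a frequency of exactly 0:

```python
# Hypothetical toy genotypes for three samples at two sites
samples = {
    "s1": {"site_a": "A", "site_b": "G"},
    "s2": {"site_a": "T", "site_b": "G"},
    "s3": {"site_a": "A", "site_b": "C"},
}

def polymorphic_sites(data):
    """Sites where more than one allele appears across ALL samples."""
    sites = {}
    for genotypes in data.values():
        for site, allele in genotypes.items():
            sites.setdefault(site, set()).add(allele)
    return {s: a for s, a in sites.items() if len(a) > 1}

poly = polymorphic_sites(samples)   # polymorphism defined on the full panel
kept = {"s1": samples["s1"]}        # ...but frequencies computed on a subset

freqs = {}
for site, alleles in sorted(poly.items()):
    observed = [g[site] for g in kept.values()]
    for allele in sorted(alleles):
        freqs[(site, allele)] = observed.count(allele) / len(observed)

print(freqs)  # the "T" at site_a is now an allele with frequency exactly 0
```

Any downstream statistic that treats those zero-frequency alleles as real observations is being fed phantoms, which is the paper's complaint.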
That last part brings up one of the objections that many people may have to this paper - it does take on a rather bitter tone. I actually don't mind it - who am I to object, given some of the things I've said on this blog? But it could be counterproductive, leading to arguments over the insults rather than arguments over the things being insulted (and over whether they're worthy of the scorn). People could end up waving their hands and running around shouting in all the smoke, rather than figuring out how much fire there is and where it's burning. The last paragraph of the paper is a good illustration:
The ENCODE results were predicted by one of its authors to necessitate the rewriting of textbooks. We agree, many textbooks dealing with marketing, mass-media hype, and public relations may well have to be rewritten.
Well, maybe that was necessary. The amount of media hype was huge, and the only way to counter it might be to try to generate a similar amount of noise. It might be working, or starting to work - normally, a paper like this would get no popular press coverage at all. But will it make CNN? The Science section of the New York Times? ENCODE's results certainly did.
But what the general public thinks about this controversy is secondary. The real fight is going to be here in the sciences, and some of it is going to spill out of academia and into the drug industry. As mentioned above, a lot of companies are looking at epigenetic targets, and a lot of companies would (in general) very much like to hear that there are a lot more potential drug targets than we know about. That was what drove the genomics frenzy back in 1999-2000, an era that was not without its consequences. The coming of the ENCODE data was (for some people) the long-delayed vindication of the idea that gene sequencing was going to lead to a vast landscape of new disease targets. There was already a comment on my entry at the time suggesting that some industrial researchers were jumping on the ENCODE work as a new area to work in, and it wouldn't surprise me to see many others thinking similarly.
But we're going to have to be careful. Transcription factors and epigenetic mechanisms are hard enough to work on, even when they're carefully validated. Chasing after ephemeral ones would truly be a waste of time. . .
More reactions around the science blogging world: Wavefunction, Pharyngula, SciLogs, Openhelix. And there are (and will be) many more.
Category: Biological News
February 13, 2013
We go through a lot of mice in this business. They're generally the first animal that a potential drug runs up against: in almost every case, you dose mice to check pharmacokinetics (blood levels and duration), and many areas have key disease models that run in mice as well. That's because we know a lot about mouse genetics (compared to other animals), and we have a wide range of natural mutants, engineered gene-knockout animals (difficult or impossible to do with most other species), and chimeric strains with all sorts of human proteins substituted back in. I would not wish to hazard a guess as to how many types of mice have been developed in biomedical labs over the years; it is a large number representing a huge amount of effort.
But are mice always telling us the right thing? I've written about this problem before, and it certainly hasn't gone away. The key things to remember about any animal model are that (1) it's a model, and (2) it's in an animal. Not a human. But it can be surprisingly hard to keep these in mind, because there's no other way for a compound to become a drug other than going through the mice, rats, etc. No regulatory agency on Earth (OK, with the possible exception of North Korea) will let a compound through unless it's been through numerous well-controlled animal studies, for short- and long-term toxicity at the very least.
These thoughts are prompted by an interesting and alarming paper that's come out in PNAS: "Genomic responses in mouse models poorly mimic human inflammatory diseases". And that's the take-away right there, which is demonstrated comprehensively and with attention to detail.
Murine models have been extensively used in recent decades to identify and test drug candidates for subsequent human trials. However, few of these human trials have shown success. The success rate is even worse for those trials in the field of inflammation, a condition present in many human diseases. To date, there have been nearly 150 clinical trials testing candidate agents intended to block the inflammatory response in critically ill patients, and every one of these trials failed. Despite commentaries that question the merit of an overreliance of animal systems to model human immunology, in the absence of systematic evidence, investigators and public regulators assume that results from animal research reflect human disease. To date, there have been no studies to systematically evaluate, on a molecular basis, how well the murine clinical models mimic human inflammatory diseases in patients.
What this large multicenter team has found is that while various inflammation stresses (trauma, burns, endotoxins) in humans tend to go through pretty much the same pathways, the same is not true for mice. Not only do they show very different responses from humans (as measured by gene up- and down-regulation, among other things), they show different responses to each sort of stress. Humans and mice differ in what genes are called on, in their timing and duration of expression, and in what general pathways these gene products are found. For inflammation, at least, the mouse models look like seriously misleading stand-ins for the human disease.
And there are a lot of potential reasons why this turns out to be so:
There are multiple considerations to our finding that transcriptional response in mouse models reflects human diseases so poorly, including the evolutional distance between mice and humans, the complexity of the human disease, the inbred nature of the mouse model, and often, the use of single mechanistic models. In addition, differences in cellular composition between mouse and human tissues can contribute to the differences seen in the molecular response. Additionally, the different temporal spans of recovery from disease between patients and mouse models are an inherent problem in the use of mouse models. Late events related to the clinical care of the patients (such as fluids, drugs, surgery, and life support) likely alter genomic responses that are not captured in murine models.
But even with all the variables inherent in the human data, our inflammation response seems to be remarkably coherent. It's just not what you see in mice. Mice have had different evolutionary pressures over the years than we have; their heterogeneous response to various sorts of stress is what's served them well, for whatever reasons.
There are several very large and ugly questions raised by this work. All of us who do biomedical research know that mice are not humans (nor are rats, nor are dogs, etc.) But, as mentioned above, it's easy to take this as a truism - sure, sure, knew that - because all our paths to human go through mice and the like. The New York Times article on this paper illustrates the sort of habits that you get into (emphasis below added):
The new study, which took 10 years and involved 39 researchers from across the country, began by studying white blood cells from hundreds of patients with severe burns, trauma or sepsis to see what genes are being used by white blood cells when responding to these danger signals.
The researchers found some interesting patterns and accumulated a large, rigorously collected data set that should help move the field forward, said Ronald W. Davis, a genomics expert at Stanford University and a lead author of the new paper. Some patterns seemed to predict who would survive and who would end up in intensive care, clinging to life and, often, dying.
The group had tried to publish its findings in several papers. One objection, Dr. Davis said, was that the researchers had not shown the same gene response had happened in mice.
“They were so used to doing mouse studies that they thought that was how you validate things,” he said. “They are so ingrained in trying to cure mice that they forget we are trying to cure humans.”
“That started us thinking,” he continued. “Is it the same in the mouse or not?”
What's more, the article says that this paper was rejected from Science and Nature, among other venues. And one of the lead authors says that the reviewers mostly seemed to be saying that the paper had to be wrong. They weren't sure where things had gone wrong, but a paper saying that murine models were just totally inappropriate had to be wrong somehow.
We need to stop being afraid of the obvious, if we can. "Mice aren't humans" is about as obvious a statement as you can get, but the limitations of animal models are taken so much for granted that we actually dislike being told that they're even worse than we thought. We aren't trying to cure mice. We aren't trying to make perfect disease models and beautiful screening cascades. We aren't trying to perfectly match molecular targets with diseases, and targets with compounds. Not all the time, we aren't. We're trying to find therapies that work, and that goal doesn't always line up with those others. As painful as it is to admit.
Category: Animal Testing | Biological News | Drug Assays | Infectious Diseases
February 12, 2013
Since I mentioned the NIH in the context of the Molecular Libraries business, I wanted to bring up something else that a reader sent along to me. There's a persistent figure that's floated whenever the agency talks about translational medicine: 4500 diseases. Here's an example:
Therapeutic development is a costly, complex and time-consuming process. In recent years, researchers have succeeded in identifying the causes of more than 4,500 diseases. But it has proven difficult to turn such knowledge into new therapies; effective treatments exist for only about 250 of these conditions.
It shows up again in this paper, just out, and elsewhere. But is it true?
Do we really know the causes of 4,500 diseases? Outside of different cancer cellular types and various infectious agents, are there even 4,500 diseases, total? And if not, how many are there, then? I ask because that figure seems rather high. There are a lot of single-point-mutation genetic disorders to which we can pretty confidently assign a cause, but some of them (cystic fibrosis, for example) are considered one disease even though they can be arrived at through a variety of mutations. Beyond that, do we really know the absolute molecular-level cause of, say, type II diabetes? (We know a lot of very strong candidates, but the interplay between them, now, there's the rub). Alzheimer's? Arthritis? Osteoporosis? Even in the cases where we have a good knowledge of what the proximate cause of the trouble is (thyroid insufficiency, say, or Type I diabetes), do we really know what brought on that state, or how to prevent it? Sometimes, but not very often, is my impression. So where does this figure come from?
The best guess is here, GeneMap. But read the fine print: "Phenotypes include single-gene mendelian disorders, traits, some susceptibilities to complex disease . . . and some somatic cell genetic disease. . ." My guess is that a lot of what's under that banner does not rise to "knowing the cause", but I'd welcome being corrected on that point.
Category: Biological News
January 30, 2013
Here are some angry views that I don't necessarily endorse, but I can't say that they're completely wrong, either. A programmer bids an angry farewell to the bioinformatics world:
Bioinformatics is an attempt to make molecular biology relevant to reality. All the molecular biologists, devoid of skills beyond those of a laboratory technician, cried out for the mathematicians and programmers to magically extract science from their mountain of shitty results.
And so the programmers descended and built giant databases where huge numbers of shitty results could be searched quickly. They wrote algorithms to organize shitty results into trees and make pretty graphs of them, and the molecular biologists carefully avoided telling the programmers the actual quality of the results. When it became obvious to everyone involved that a class of results was worthless, such as microarray data, there was a rush of handwaving about “not really quantitative, but we can draw qualitative conclusions” followed by a hasty switch to a new technique that had not yet been proved worthless.
And the databases grew, and everyone annotated their data by searching the databases, then submitted in turn. No one seems to have pointed out that this makes your database a reflection of your database, not a reflection of reality. Pull out an annotation in GenBank today and it’s not very long odds that it’s completely wrong.
That's unfair to molecular biologists, but is it unfair to the state of bioinformatic databases? Comments welcome. . .
Update: more comments on this at Ycombinator.
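The feedback loop the programmer describes - databases annotated by searching the same databases, then resubmitted - can be sketched as a toy simulation (all numbers here are hypothetical, chosen only to illustrate the mechanism). Seed a database with a fraction of wrong annotations, let every new entry copy its label from an existing "hit", and the initial error rate never washes out:

```python
import random

random.seed(0)

# Toy model: each "gene" has a true function (0..4). The database starts with
# entries annotated directly by experiment, a fraction of them wrong.
TRUE_FUNCTIONS = 5
INITIAL_ENTRIES = 100
ERROR_RATE = 0.15  # hypothetical fraction of wrong initial annotations

database = []  # list of (true_function, annotated_function)
for _ in range(INITIAL_ENTRIES):
    true_fn = random.randrange(TRUE_FUNCTIONS)
    annotated = true_fn if random.random() > ERROR_RATE else random.randrange(TRUE_FUNCTIONS)
    database.append((true_fn, annotated))

def annotate_by_search(true_fn):
    # New entries are annotated "by homology": copy the label of a random
    # existing entry with the same true function. Note that the search itself
    # is perfect here - the only flaw is trusting the existing annotation.
    hits = [ann for (t, ann) in database if t == true_fn]
    return random.choice(hits)

# Grow the database by annotation-by-search, resubmitting each result.
for _ in range(2000):
    true_fn = random.randrange(TRUE_FUNCTIONS)
    database.append((true_fn, annotate_by_search(true_fn)))

wrong = sum(1 for t, ann in database if t != ann)
print(f"wrong annotations: {wrong}/{len(database)} ({wrong / len(database):.1%})")
```

Even with a flawless search step, the copied errors persist at roughly the initial rate indefinitely; a real pipeline adds search and curation errors on top of that, so the wrong annotations compound rather than dilute.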
Category: Biological News | In Silico
January 15, 2013
Like many people, I have a weakness for "We've had it all wrong!" explanations. Here's another one, or part of one: is obesity an infectious disease?
During our clinical studies, we found that Enterobacter, a genus of opportunistic, endotoxin-producing pathogens, made up 35% of the gut bacteria in a morbidly obese volunteer (weight 174.8 kg, body mass index 58.8 kg m−2) suffering from diabetes, hypertension and other serious metabolic deteriorations. . .
. . .After 9 weeks on (a special diet), this Enterobacter population in the volunteer's gut reduced to 1.8%, and became undetectable by the end of the 23-week trial, as shown in the clone library analysis. The serum–endotoxin load, measured as LPS-binding protein, dropped markedly during weight loss, along with substantial improvement of inflammation, decreased level of interleukin-6 and increased adiponectin. Metagenomic sequencing of the volunteer's fecal samples at 0, 9 and 23 weeks on the WTP diet confirmed that during weight loss, the Enterobacteriaceae family was the most significantly reduced population. . .
They went on to do the full Koch workup, by taking an isolated Enterobacter strain from the human patient and introducing it into gnotobiotic (germ-free) mice. These mice are usually somewhat resistant to becoming obese on a high-fat diet, but after being inoculated with the bacterial sample, they put on substantial weight, became insulin resistant, and showed numerous (consistent) alterations in their lipid and glucose handling pathways. Interestingly, the germ-free mice that were inoculated with bacteria and fed normal chow did not show these effects.
The hypothesis is that the endotoxin-producing bacteria are causing a low-grade chronic inflammation in the gut, which is exacerbated to a more systemic form by the handling of excess lipids and fatty acids. The endotoxin itself may be swept up in the chylomicrons and translocated through the gut wall. The summary:
. . .This work suggests that the overgrowth of an endotoxin-producing gut bacterium is a contributing factor to, rather than a consequence of, the metabolic deteriorations in its human host. In fact, this strain B29 is probably not the only contributor to human obesity in vivo, and its relative contribution needs to be assessed. Nevertheless, by following the protocol established in this study, we hope to identify more such obesity-inducing bacteria from various human populations, gain a better understanding of the molecular mechanisms of their interactions with other members of the gut microbiota, diet and host for obesity, and develop new strategies for reducing the devastating epidemic of metabolic diseases.
Considering the bacterial origin of ulcers, I think this is a theory that needs to be taken seriously, and I'm glad to see it getting checked out. We've been hearing a lot the last few years about the interaction between human physiology and our associated bacterial population, but the attention is deserved. The problem is, we're only beginning to understand what these ecosystems are like, how they can be disordered, and what the consequences are. Anyone telling you that they have it figured out at this point is probably trying to sell you something. It's worth the time to figure out, though. . .
Category: Biological News | Diabetes and Obesity | Infectious Diseases
January 14, 2013
Picking up on that reactive oxygen species (ROS) business from the other day (James Watson's paper suggesting that it could be a key anticancer pathway), I wanted to mention this new paper, called to my attention this morning by a reader. It's from a group at Manchester studying regeneration of tissue in Xenopus tadpoles, and they note high levels of intracellular hydrogen peroxide in the regenerating tissue. Moreover, antioxidant treatment impaired the regeneration, as did genetic manipulation of ROS generation.
Now, inflammatory cells are known to produce plenty of ROS, and they're also involved in tissue injury. But that doesn't seem to be quite the connection here, because the tissue ROS levels peaked before the recruitment of such cells did. (This is consistent with previous work in zebrafish, which also showed hydrogen peroxide as an essential signal in wound healing). The Manchester group was able to genetically impair ROS generation by knocking down a protein in the NOX enzyme complex, a major source of ROS production. This also impaired regeneration, an effect that could be reversed by a rescue competition experiment.
Further experiments implicated Wnt/β-catenin signaling in this process, which is certainly plausible, given the position of that cascade in cellular processes. That also ties in with a 2006 report of hydrogen peroxide signaling through this pathway (via a protein called nucleoredoxin).
You can see where this work is going, and so can the authors:
. . .our work suggests that increased production of ROS plays a critical role in facilitating Wnt signalling following injury, and therefore allows the regeneration program to commence. Given the ubiquitous role of Wnt signalling in regenerative events, this finding is intriguing as it might provide a general mechanism for injury-induced Wnt signalling activation across all regeneration systems, and furthermore, manipulating ROS may provide a means to induce the activation of a regenerative program in those cases where regeneration is normally limited.
Most of us reading this site belong to one of those regeneration-limited species, but perhaps it doesn't always have to be this way? Taken together, it does indeed look like (1) ROS (hydrogen peroxide among others) are important intracellular signaling molecules (which conclusion has been clear for some time now), and (2) the pathways involved are crucial growth and regulatory ones, relating to apoptosis, wound healing, cancer, the effects of exercise, all very nontrivial things indeed, and (3) these pathways would appear to be very high-value ones for pharmaceutical intervention (stay tuned).
As a side note, Paracelsus has once again been reaffirmed: the dose does indeed make the poison, as does its timing and location. Water can drown you, oxygen can help burn you, but both of them keep you alive.
Category: Biological News
January 11, 2013
The line under James Watson's name reads, of course, "Co-discoverer of DNA's structure. Nobel Prize". But it could also read "Provocateur", since he's been pretty good at that over the years. He seems to have the right personality for it - both The Double Helix (fancy new edition there) and its notorious follow-up volume Avoid Boring People illustrate the point. There are any number of people who've interacted with him over the years who can't stand the guy.
But it would be a simpler world if everyone that we found hard to take was wrong about everything, wouldn't it? I bring this up because Watson has published an article, again deliberately provocative, called "Oxidants, Antioxidants, and the Current Incurability of Metastatic Cancers". Here's the thesis:
The vast majority of all agents used to directly kill cancer cells (ionizing radiation, most chemotherapeutic agents and some targeted therapies) work through either directly or indirectly generating reactive oxygen species that block key steps in the cell cycle. As mesenchymal cancers evolve from their epithelial cell progenitors, they almost inevitably possess much-heightened amounts of antioxidants that effectively block otherwise highly effective oxidant therapies.
The article is interesting throughout, but can fairly be described as "rambling". He starts with details of the complexity of cancerous mutations, which is a topic that's come up around here several times (as it does wherever potential cancer therapies are discussed, at least by people with some idea of what they're talking about). Watson is paying particular attention here to mesenchymal tumors:
Resistance to gene-targeted anti-cancer drugs also comes about as a consequence of the radical changes in underlying patterns of gene expression that accompany the epithelial-to-mesenchymal cell transitions (EMTs) that cancer cells undergo when their surrounding environments become hypoxic. EMTs generate free-floating mesenchymal cells whose flexible shapes and still high ATP-generating potential give them the capacity for amoeboid cell-like movements that let them metastasize to other body locations (brain, liver, lungs). Only when they have so moved do most cancers become truly life-threatening. . .
. . .Unfortunately, the inherently very large number of proteins whose expression goes either up or down as the mesenchymal cancer cells move out of quiescent states into the cell cycle makes it still very tricky to know, beyond the cytokines, what other driver proteins to focus on for drug development.
That it does. He makes the case (as have others) that Myc could be one of the most important protein targets - and notes (as have others!) that drug discovery efforts against the Myc pathway have run into many difficulties. There's a good amount of discussion about BRD4 compounds as a way to target Myc. Then he gets down to the title of the paper and starts talking about reactive oxygen species (ROS). Links in the section below added by me:
That elesclomol promotes apoptosis through ROS generation raises the question whether much more, if not most, programmed cell death caused by anti-cancer therapies is also ROS-induced. Long puzzling has been why the highly oxygen sensitive ‘hypoxia-inducible transcription factor’ HIF1α is inactivated by both the, until now thought very differently acting, ‘microtubule binding’ anti-cancer taxanes such as paclitaxel and the anti-cancer DNA intercalating topoisomerases such as topotecan or doxorubicin, as well as by frame-shifting mutagens such as acriflavine. All these seemingly unrelated facts finally make sense by postulating that not only does ionizing radiation produce apoptosis through ROS but also today's most effective anti-cancer chemotherapeutic agents as well as the most efficient frame-shifting mutagens induce apoptosis through generating the synthesis of ROS. That the taxane paclitaxel generates ROS through its binding to DNA became known from experiments showing that its relative effectiveness against cancer cell lines of widely different sensitivity is inversely correlated with their respective antioxidant capacity. A common ROS-mediated way through which almost all anti-cancer agents induce apoptosis explains why cancers that become resistant to chemotherapeutic control become equally resistant to ionizing radiotherapy. . .
. . .The fact that cancer cells largely driven by RAS and Myc are among the most difficult to treat may thus often be due to their high levels of ROS-destroying antioxidants. Whether their high antioxidative level totally explains the effective incurability of pancreatic cancer remains to be shown. The fact that late-stage cancers frequently have multiple copies of RAS and MYC oncogenes strongly hints that their general incurability more than occasionally arises from high antioxidant levels.
He adduces a good deal of other supporting evidence for this line of thought, and then he gets to the take-home message:
For as long as I have been focused on the understanding and curing of cancer (I taught a course on Cancer at Harvard in the autumn of 1959), well-intentioned individuals have been consuming antioxidative nutritional supplements as cancer preventatives if not actual therapies. The past, most prominent scientific proponent of their value was the great Caltech chemist, Linus Pauling, who near the end of his illustrious career wrote a book with Ewan Cameron in 1979, Cancer and Vitamin C, about vitamin C's great potential as an anti-cancer agent. At the time of his death from prostate cancer in 1994, at the age of 93, Linus was taking 12 g of vitamin C every day. In light of the recent data strongly hinting that much of late-stage cancer's untreatability may arise from its possession of too many antioxidants, the time has come to seriously ask whether antioxidant use much more likely causes than prevents cancer.
All in all, the by now vast number of nutritional intervention trials using the antioxidants β-carotene, vitamin A, vitamin C, vitamin E and selenium have shown no obvious effectiveness in preventing gastrointestinal cancer nor in lengthening mortality. In fact, they seem to slightly shorten the lives of those who take them. Future data may, in fact, show that antioxidant use, particularly that of vitamin E, leads to a small number of cancers that would not have come into existence but for antioxidant supplementation. Blueberries best be eaten because they taste good, not because their consumption will lead to less cancer.
Now this is quite interesting. The first thing I thought of when I read this was the work on ROS in exercise, which showed that taking antioxidants appeared to cancel out the benefits of exercise, probably because reactive oxygen species are the intracellular signal that sets those benefits off. Taken together, I think we need to seriously consider whether efforts to control ROS are, in fact, completely misguided. They are, perhaps, "essential poisons", without which our cellular metabolism loses its way.
Update: I should also note the work of Joan Brugge's lab in this area, blogged about here. Taken together, you'd really have to advise against cancer patients taking antioxidants, wouldn't you?
Watson ends up the article by suggesting, none too diplomatically, that much current cancer research is misguided:
The now much-touted genome-based personal cancer therapies may turn out to be much less important tools for future medicine than the newspapers of today lead us to hope. Sending more government cancer monies towards innovative, anti-metastatic drug development to appropriate high-quality academic institutions would better use National Cancer Institute's (NCI) monies than the large sums spent now testing drugs for which we have little hope of true breakthroughs. The biggest obstacle today to moving forward effectively towards a true war against cancer may, in fact, come from the inherently conservative nature of today's cancer research establishments. They still are too closely wedded to moving forward with cocktails of drugs targeted against the growth promoting molecules (such as HER2, RAS, RAF, MEK, ERK, PI3K, AKT and mTOR) of signal transduction pathways instead of against Myc molecules that specifically promote the cell cycle.
He singles out the Cancer Genome Atlas project as an example of this sort of thing, saying that while he initially supported it, he no longer does. It will, he maintains, tend to find mostly cancer cell "drivers" as opposed to "vulnerabilities". He's more optimistic about a big RNAi screening effort that's underway at his own Cold Spring Harbor, although he admits that this enthusiasm is "far from universally shared".
We'll find out which is the more productive approach - I'm glad that they're all running, personally, because I don't think I know enough to bet it all on one color. If Watson is right, Pfizer might be the biggest beneficiary in the drug industry - if, and it's a big if, the RNAi screening unearths druggable targets. This is going to be a long-running story - I'm sure that we'll be coming back to it again and again. . .
Category: Biological News | Cancer
December 21, 2012
This can't be good. A retraction in PNAS on some RNA-driven cell death research from a lab at Caltech:
Anomalous experimental results observed by multiple members of the Pierce lab during follow-on studies raised concerns of possible research misconduct. An investigation committee of faculty at the California Institute of Technology indicated in its final report on this matter that the preponderance of the evidence and the reasons detailed in the report established that the first author falsified and misrepresented data published in this paper. An investigation at the United States Office of Research Integrity is ongoing.
As that link from Retraction Watch notes, the first author himself was not one of the signees of that retraction statement - as one might well think - and he now appears to be living in London. He appears to have left quite a mess behind in Pasadena.
Category: Biological News | The Dark Side | The Scientific Literature
December 12, 2012
Rongxiang Xu is upset with this year's Nobel Prize award for stem cell research. He believes that work he did is so closely related to the subject of the prize that. . .he wants his name on it? No, apparently not. That he wants some of the prize money? Nope, not that either. That he thinks the prize was wrongly awarded? No, he's not claiming that.
What he's claiming is that the Nobel Committee has defamed his reputation as a stem cell pioneer by leaving him off, and he wants damages. Now, this is a new one, as far as I know. The closest example comes from 2003, when there was an ugly controversy over the award for NMR imaging (here's a post from the early days of this blog about it). Dr. Raymond Damadian took out strongly worded (read "hopping mad") advertisements in major newspapers claiming that the Nobel Committee had gotten the award wrong, and that he should have been on it. In vain. The Nobel Committee(s) have never backed down in such a case - although there have been some where you could make a pretty good argument - and they never will, as far as I can see.
Xu, who works in Los Angeles, is founder and chairman of the Chinese regenerative medicine company MEBO International Group. The company sells a proprietary moist-exposed burn ointment (MEBO) that induces "physiological repair and regeneration of extensively wounded skin," according to the company's website. Application of the wound ointment, along with other treatments, reportedly induces embryonic epidermal stem cells to grow in adult human skin cells. . .
. . .Xu's team allegedly awakened intact mature somatic cells to turn to pluripotent stem cells without engineering in 2000. Therefore, Xu claims, the Nobel statement undermines his accomplishments, defaming his reputation.
Now, I realize that I'm helping, in my small way, to give this guy publicity, which is one of the things he most wants out of this effort. But let me make myself clear - I'm giving him publicity in order to roll my eyes at him. I look forward to following Xu's progress through the legal system, and I'll bet his legal team looks forward to it as well, as long as things are kept on a steady payment basis.
Category: Biological News
November 8, 2012
We're getting closer to real-time X-ray structures of protein function, and I think I speak for a lot of chemists and biologists when I say that this has been a longstanding dream. X-ray structures, when they work well, can give you atomic-level structural data, but they've been limited to static time scales. In the old, old days, structures of small molecules were a lot of work, and the structure of a protein took years of hard labor and was obvious Nobel Prize material. As time went on, brighter X-ray sources and much better detectors sped things up (since a lot of the X-rays diffracted from a large compound are of very low intensity), and computing power came along to crunch through the piles of data thus generated. These days, X-ray structures are generated for systems of huge complexity and importance. Working at that level is no stroll through the garden, but more tractable protein structures are generated almost routinely (although growing good protein crystals is still something of a dark art, and is accomplished through what can accurately be called enlightened brute force).
But even with synchrotron X-ray sources blasting your crystals, you're still getting a static picture. And proteins are not static objects; the whole point of them is how they move (and for enzymes, how they get other molecules to move in their active sites). I've heard Barry Sharpless quoted to the effect that understanding an enzyme by studying its X-ray structures is like trying to get to know a person by visiting their corpse. I haven't heard him say that (although it sounds like him!), but whoever said it was correct.
Comes now this paper in PNAS, a multinational effort with the latest on the attempts to change that situation. The team is looking at photoactive yellow protein (PYP), a blue-light receptor protein from a purple sulfur bacterium. Those guys vigorously swim away from blue light, which they find harmful, and this seems to be the receptor that alerts them to its presence. And the inner workings of the protein are known, to some extent. There's a p-coumaric acid in there, bound to a Cys residue, and when blue light hits it, the double bond switches from trans to cis. The resulting conformational change is the signaling event.
But while knowing things at that level is fine (and took no small amount of work), there are still a lot of questions left unanswered. The actual isomerization is a single-photon event and happens in a picosecond or two. But the protein changes that happen after that, well, those are a mess. A lot of work has gone into trying to unravel what moves where, and when, and how that translates into a cellular signal. And although this is a mere purple sulfur bacterium (What's so mere? They've been on this planet a lot longer than we have), these questions are exactly the ones that get asked about protein conformational signaling all through living systems. The rods and cones in your eyes are doing something very similar as you read this blog post, as are the neurotransmitter receptors in your optic nerves, and so on.
This technique, variations of which have been coming on for some years now, uses multiple wavelengths of X-rays simultaneously, and scans them across large protein crystals. Adjusting the timing of the X-ray pulse compared to the light pulse that sets off the protein motion gives you time-resolved spectra - that is, if you have extremely good equipment, world-class technique, and vast amounts of patience. (For one thing, this has to be done over and over again from many different angles).
And here's what's happening: first off, the cis structure is quite weird. The carbonyl is 90 degrees out of the plane, making (among other things) a very transient hydrogen bond with a backbone nitrogen. Several dihedral angles have to be distorted to accommodate this, and it's a testament to the weirdness of protein active sites that it exists at all. It then twangs back to a planar conformation, but at the cost of breaking another hydrogen bond back at the phenolate end of things. That leaves another kind of strain in the system, which is relieved by a shift to yet another intermediate structure through a dihedral rotation, and that one in turn goes through a truly messy transition to a blue-shifted intermediate. That involves four hydrogen bonds and a 180-degree rotation in a dihedral angle, and seems to be the weak link in the whole process - about half the transitions fail and flop back to the ground state at that point. That also lets a crucial water molecule into the mix, which sets up the transition to the actual signaling state of the protein.
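That branch point - about half the molecules flop back to the ground state instead of going on to the blue-shifted intermediate - is just what you'd expect from two competing pathways with comparable rates. Here's a minimal kinetic sketch, with entirely hypothetical rate constants chosen only to illustrate the roughly 50% yield (the branching ratio is k_forward / (k_forward + k_back)):

```python
# Toy branched kinetics: an intermediate I either proceeds forward to the
# blue-shifted signaling state (pB) or relaxes back to the ground state.
# Rate constants are hypothetical; equal rates give a 50% yield.
k_forward = 1.0  # I -> pB, arbitrary units
k_back = 1.0     # I -> ground state

# Simple Euler integration of the two competing first-order decays.
dt = 1e-3
I, pB, ground = 1.0, 0.0, 0.0
for _ in range(20000):
    flux_forward = k_forward * I * dt
    flux_back = k_back * I * dt
    pB += flux_forward
    ground += flux_back
    I -= flux_forward + flux_back

print(f"yield of signaling state: {pB:.2f}, returned to ground: {ground:.2f}")
# -> yield of signaling state: 0.50, returned to ground: 0.50
```

Unequal rates shift the split accordingly (k_forward three times k_back would give a 75% yield), which is why the paper's observation of a roughly even split points to two pathways of very similar barrier heights at that step.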
If you want more details, the paper is open-access, and includes movie files of these transitions and much more detail on what's going on. What we're seeing is light energy being converted (and channeled) into structural strain energy. I find this sort of thing fascinating, and I hope that the technique can be extended in the way the authors describe:
The time-resolved methodology developed for this study of PYP is, in principle, applicable to any other crystallizable protein whose function can be directly or indirectly triggered with a pulse of light. Indeed, it may prove possible to extend this capability to the study of enzymes, and literally watch an enzyme as it functions in real time with near-atomic spatial resolution. By capturing the structure and temporal evolution of key reaction intermediates, picosecond time-resolved Laue crystallography can provide an unprecedented view into the relations between protein structure, dynamics, and function. Such detailed information is crucial to properly assess the validity of theoretical and computational approaches in biophysics. By combining incisive experiments and theory, we move closer to resolving reaction pathways that are at the heart of biological functions.
Speed the day. That's the sort of thing we chemists need to really understand what's going on at the molecular level, and to start making our own enzymes to do things that Nature never dreamed of.
Category: Analytical Chemistry | Biological News | Chemical Biology | Chemical News
October 10, 2012
A deserved Nobel? Absolutely. But the grousing has already started. The 2012 Nobel Pr