About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases.
To contact Derek, email him directly: firstname.lastname@example.org
May 22, 2013
Just how many different small-molecule binding sites are there? That's the subject of this new paper in PNAS, from Jeffrey Skolnick and Mu Gao at Georgia Tech, which several people have sent along to me in the last couple of days.
This question has a lot of bearing on protein evolution. The paper's intro brings up two competing hypotheses of how protein function evolved. One, the "inherent functionality model", assumes that primitive binding pockets are a necessary consequence of protein folding, and that the effects of small molecules on these (probably quite nonspecific) motifs have been honed by evolutionary pressures since then. (The wellspring of this idea is this paper from 1976, by Jensen, and this paper will give you an overview of the field). The other way it might have worked, the "acquired functionality model", would be the case if proteins tend, in their "unevolved" states, to be more spherical, in which case binding events must have been much rarer, but also much more significant. In that system, the very existence of binding pockets themselves is what's under the most evolutionary pressure.
The Skolnick paper references this work from the Hecht group at Princeton, which already provides evidence for the first model. In that paper, a set of near-random 4-helical-bundle proteins was produced in E. coli - the only patterning was a rough polar/nonpolar alternation in amino acid residues. Nonetheless, many members of this unplanned family showed real levels of binding to things like heme, and many even showed above-background levels of several types of enzymatic activity.
In this new work, Skolnick and Gao produce a computational set of artificial proteins (called the ART library in the text), made up of nothing but poly-leucine. These were modeled on the secondary structures of known proteins in the PDB, to produce natural-ish proteins (from a broad structural point of view) that have no functional side chain residues themselves. Nonetheless, they found that the small-molecule-sized pockets of the ART set actually match up quite well with those found in real proteins. But here's where my technical competence begins to run out, because I'm not sure that I understand what "match up quite well" really means here. (If you can read through this earlier paper of theirs at speed, you're doing better than I did). The current work says that "Given two input pockets, a template and a target, (our algorithm) evaluates their PS-score, which measures the similarity in their backbone geometries, side-chain orientations, and the chemical similarities between the aligned pocket-lining residues." And that's fine, but what I don't know is how well it does that. I can see poly-Leu giving you pretty standard backbone geometries and side-chain orientations (although isn't leucine a little more likely than average to form alpha-helices?), but when we start talking chemical similarities between the pocket-lining residues, well, how can that be?
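Just to make my own confusion concrete, here's a back-of-the-envelope Python sketch of what a pocket-comparison score of this general shape might look like. The residue classes, the weights, and the functional form are all my own inventions for illustration - this is emphatically not the actual PS-score algorithm, which among other things optimizes the pocket alignment itself:

```python
# Toy pocket-similarity score, in the general spirit (but not the detail)
# of a PS-score: a backbone-geometry term plus a term for the chemical
# character of the pocket-lining residues. Everything here is invented.
import math

CLASSES = {
    "hydrophobic": set("AVLIMFWP"),
    "polar":       set("STNQCGY"),
    "charged":     set("DEKRH"),
}

def residue_class(aa):
    for name, members in CLASSES.items():
        if aa in members:
            return name
    raise ValueError(f"unknown residue: {aa}")

def pocket_similarity(pocket_a, pocket_b):
    """Each pocket is a list of (one-letter residue, (x, y, z) C-alpha
    coordinates), assumed here to be pre-aligned residue-for-residue."""
    n = min(len(pocket_a), len(pocket_b))
    geom = chem = 0.0
    d0 = 3.0  # distance scale in Angstroms (an arbitrary choice)
    for (aa1, xyz1), (aa2, xyz2) in zip(pocket_a, pocket_b):
        dist = math.dist(xyz1, xyz2)
        geom += 1.0 / (1.0 + (dist / d0) ** 2)  # TM-score-style distance term
        chem += 1.0 if residue_class(aa1) == residue_class(aa2) else 0.0
    return 0.5 * geom / n + 0.5 * chem / n  # equal weights, range 0 to 1
```

Writing it out this way sharpens my question: in an all-leucine pocket, every lining residue lands in the same chemical class, so where does the chemistry term get any discriminating power?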
But I'm even willing to go along with the main point of the paper, which is that there are not-so-many types of small-molecule binding pockets, even if I'm not so sure about their estimate of how many there are. For the record, they're guessing not many more than about 500. And while that seems low to me, it all depends on what we mean by "similar". I'm a medicinal chemist, someone who's used to seeing "magic methyl effects" where very small changes in ligand structure can make big differences in binding to a protein. And that makes me think that I could probably take a set of binding pockets that Skolnick's people would call so similar as to be basically identical, and still find small molecules that would differentiate them. In fact, that's a big part of my job.
But in general, I see the point they're making, but it's one that I've already internalized. There are a finite number of proteins in the human body. Fifty thousand? A couple of hundred thousand? Probably not a million. Not all of these have small-molecule binding sites, for sure, so there's a smaller set to deal with right there. Even if those binding sites were completely different from one another, we'd be looking at a set of binding pockets in the thousands/tens of thousands range, most likely. But they're not completely different, as any medicinal chemist knows: try to make a selective muscarinic agonist, or a really targeted serine hydrolase inhibitor, and you'll learn that lesson quickly. And anyone who's run their drug lead through a big selectivity panel has seen the sorts of off-target activities that come up: you hit some of the other members of your target's family to a greater or lesser degree. You hit the flippin' sigma receptor, not that anyone knows what that means. You hit the hERG channel, and good luck to you then. Your compound is a substrate for one of the CYP enzymes, or it binds tightly to serum albumin. Who has ever seen a compound that binds only to its putative target? And this is only with the counterscreens we have, which are a small subset of the things that are really out there in cells.
And that takes me to my main objection to this paper. As I say, I'm willing to stipulate, gladly, that there are only so many types of binding pockets in this world (although I think that it's more than 500). But this sort of thing is what I have a problem with:
". . .we conclude that ligand-binding promiscuity is likely an inherent feature resulting from the geometric and physical–chemical properties of proteins. This promiscuity implies that the notion of one molecule–one protein target that underlies many aspects of drug discovery is likely incorrect, a conclusion consistent with recent studies. Moreover, within a cell, a given endogenous ligand likely interacts at low levels with multiple proteins that may have different global structures.
"Many aspects of drug discovery" assume that we're only hitting one target? Come on down and try that line out in a drug company, and be prepared for rude comments. Believe me, we all know that our compounds hit other things, and we all know that we don't even know the tenth of it. This is a straw man; I don't know of anyone doing drug discovery that has ever believed anything else. Besides, there are whole fields (CNS) where polypharmacy is assumed, and even encouraged. But even when we're targeting single proteins, believe me, no one is naive enough to think that we're hitting those alone.
Other aspects of this paper, though, are fine by me. As the authors point out, this sort of thing has implications for drawing evolutionary family trees of proteins - we should not assume too much when we see similar binding pockets, since these may well have a better chance of being coincidence than we think. And there are also implications for origin-of-life studies: this work (and the other work in the field, cited above) implies that a random collection of proteins could still display a variety of functions. Whether these are good enough to start assembling a primitive living system is another question, but it may be that proteinaceous life has an easier time bootstrapping itself than we might imagine.
Category: Biological News | In Silico | Life As We (Don't) Know It
May 15, 2013
Speaking about open-source drug discovery (such as it is) and sharing of data sets (such as they are), I really should mention a significant example in this area: the GSK Published Kinase Inhibitor Set. (It was mentioned in the comments to this post). The company has made 367 compounds available to any academic investigator working in the kinase field, as long as they make their results publicly available (at ChEMBL, for example). The people at GSK doing this are David Drewry and William Zuercher, for the record - here's a recent paper from them and their co-workers on the compound set and its behavior in reporter-gene assays.
Why are they doing this? To seed discovery in the field. There's an awful lot of chemical biology to be done in the kinase field, far more than any one organization could take on, and the more sets of eyes (and cerebral cortices) that are on these problems, the better. So far, there have been about 80 collaborations, mostly in Europe and North America, all the way from broad high-content phenotypic screening to targeted efforts against rare tumor types.
The plan is to continue to firm up the collection, making more data available for each compound as work is done on them, and to add more compounds with different selectivity profiles and chemotypes. Now, the compounds so far are all things that have been published on by GSK in the past, obviating concerns about IP. There are, though, a multitude of other compounds in the literature from other companies, and you have to think that some of these would be useful additions to the set. How, though, does one get this to happen? That's the stage that things are in now. Beyond that, there's the possibility of some sort of open network to optimize entirely new probes and tools, but there's plenty that could be done even before getting to that stage.
So if you're in academia, and interested in kinase pathways, you absolutely need to take a look at this compound set. And for those of us in industry, we need to think about the benefits that we could get by helping to expand it, or by starting similar efforts of our own in other fields. The science is big enough for it. Any takers?
Category: Academia (vs. Industry) | Biological News | Chemical News | Drug Assays
May 13, 2013
I notice that the recent sequencing of the bladderwort plant is being played in the press in an interesting way: as the definitive refutation of the idea that "junk DNA" is functional. That's quite an about-face from the coverage of the ENCODE consortium's take on human DNA, the famous "80% Functional, Death of Junk DNA Idea" headlines. A casual observer, if there are casual observers of this sort of thing, might come away just a bit confused.
Both types of headlines are overblown, but I think that one set is more overblown than the other. The minimalist bladderwort genome (8.2 x 10^7 base pairs) is only about half the size of the Arabidopsis thaliana genome, which rose to fame as a model organism in plant molecular biology partly because of its tiny size. By contrast, humans (who make up so much of my readership) have about 3 x 10^9 base pairs, almost 40 times as many as the bladderwort. (I stole that line from G. K. Chesterton, by the way; it's from the introduction to The Napoleon of Notting Hill.)
But pine trees have eight times as many base pairs as we do, so it's not a plant-versus-animal thing. And as Ed Yong points out in this excellent post on the new work, the Japanese canopy plant comes in at 1.5 x 10^11 base pairs, fifty times the size of the human genome and nearly two thousand times the size of the bladderwort. This is the same problem as the marbled lungfish versus pufferfish one that I wrote about here, and it's not a new problem at all. People have been wondering about genome sizes ever since they were able to estimate the size of genomes, because it became clear very quickly that they varied hugely and according to patterns that often make little sense to us.
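(For anyone who wants to check that arithmetic, the ratios are quick to run; sizes as quoted above, give or take:)

```python
# Genome sizes in base pairs, as quoted above (approximate)
bladderwort  = 8.2e7
human        = 3.0e9
canopy_plant = 1.5e11

print(human / bladderwort)         # ~37: "almost 40 times"
print(canopy_plant / human)        # 50.0
print(canopy_plant / bladderwort)  # ~1830: "nearly two thousand times"
```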
That's why the ENCODE hype met (and continues to meet) with such a savage reception. It did nothing to address this issue, and seemed, in fact, to pretend that it wasn't an issue at all. Function, function, everywhere you look, and if that means that you just have to accept that the Japanese canopy plant needs the most wildly complex functional DNA architecture in the living world, well, isn't Nature just weird that way?
Category: Biological News
April 25, 2013
A lot of people (and I'm one of them) have been throwing the word "epigenetic" around a lot. But what does it actually mean - or what is it supposed to mean? That's the subject of a despairing piece from Mark Ptashne of Sloan-Kettering in a recent PNAS. He noted this article in the journal, one of their "core concepts" series, and probably sat down that evening to write his rebuttal.
When we talk about the readout of genes - transcription - we are, he emphasizes, talking about processes that we have learned many details about. The RNA Polymerase II complex is very well conserved among living organisms, as well it should be, and its motions along strands of DNA have been shown to be very strongly affected by the presence and absence of protein transcription factors that bind to particular DNA regions. "All this is basic molecular biology, people", he does not quite say, although you can pick up the thought waves pretty clearly.
So far, so good. But here's where, conceptually, things start going into the ditch:
Patterns of gene expression underlying development can be very complex indeed. But the underlying mechanism by which, for example, a transcription activator activates transcription of a gene is well understood: only simple binding interactions are required. These binding interactions position the regulator near the gene to be regulated, and in a second binding reaction, the relevant enzymes, etc., are brought to the gene. The process is called recruitment. Two aspects are especially important in the current context: specificity and memory.
Specificity, naturally, is determined by the location of regulatory sequences within the genome. If you shuffle those around deliberately, you can make a variety of regulators work on a variety of genes in a mix-and-match fashion (and indeed, doing this is the daily bread of molecular biologists around the globe). As for memory, the point is that you have to keep recruiting the relevant enzymes if you want to keep transcribing; these aren't switches that flip on or off forever. And now we get to the bacon-burning part:
Curiously, the picture I have just sketched is absent from the Core Concepts article. Rather, it is said, chemical modifications to DNA (e.g., methylation) and to histones— the components of nucleosomes around which DNA is wrapped in higher organisms—drive gene regulation. This obviously cannot be true because the enzymes that impose such modifications lack the essential specificity: All nucleosomes, for example, “look alike,” and so these enzymes would have no way, on their own, of specifying which genes to regulate under any given set of conditions. . .
. . .Histone modifications are called “epigenetic” in the Core Concepts article, a word that for years has implied memory . . . This is odd: It is true that some of these modifications are involved in the process of transcription per se—facilitating removal and replacement of nucleosomes as the gene is transcribed, for example. And some are needed for certain forms of repression. But all attempts to show that such modifications are “copied along with the DNA,” as the article states, have, to my knowledge, failed. Just as transcription per se is not “remembered” without continual recruitment, so nucleosome modifications decay as enzymes remove them (the way phosphatases remove phosphates put in place on proteins by kinases), or as nucleosomes, which turn over rapidly compared with the duration of a cell cycle, are replaced. For example, it is simply not true that once put in place such modifications can, as stated in the Core Concepts article, “lock down forever” expression of a gene.
Now it does happen, Ptashne points out, that some developmental genes, once activated by a transcription factor, do seem to stay on for longer periods of time. But this takes place via feedback loops - the original gene, once activated, produces the transcription factor that causes another gene to be read off, and one of its products is actually the original transcription factor for the first gene, which then causes the second to be read off again, and so on, pinging back and forth. But "epigenetic" has been used in the past to imply memory, and modifying histones is not a process with enough memory in it, he says, to warrant the term. They are ". . .parts of a response, not a cause, and there is no convincing evidence they are self-perpetuating".
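To see how a feedback loop can store memory without any mark on the chromatin at all, here's a toy simulation - my own caricature, not Ptashne's model - of a transcription factor that, directly or through a partner gene, keeps driving its own production:

```python
# Toy positive-feedback loop: all parameters invented for illustration.
def simulate(x0, steps=5000, dt=0.01):
    """dx/dt = basal + vmax * x**2 / (K**2 + x**2) - deg * x"""
    basal, vmax, K, deg = 0.05, 1.0, 0.5, 1.0
    x = x0
    for _ in range(steps):
        x += (basal + vmax * x**2 / (K**2 + x**2) - deg * x) * dt
    return x

print(simulate(0.0))  # never activated: settles at the low state (~0.07)
print(simulate(1.0))  # transiently activated: stays at the high state (~0.7)
```

Start the loop above its threshold and it stays on indefinitely; dilute the protein away and the gene has no recollection of ever having been active. The memory lives in the loop, not in the histones.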
What we have here, as Strother Martin told us many years ago, is a failure to communicate. The biologists who have been using the word "epigenetic" in its original sense (which Ptashne and others would tell you is not only the original sense, but the accurate and true one), have seen its meaning abruptly hijacked. (The Wikipedia entry on epigenetics is actually quite good on this point, or at least it was this morning). A large crowd that previously paid little attention to these matters now uses "epigenetic" to mean "something that affects transcription by messing with histone proteins". And as if that weren't bad enough, articles like the one that set off this response have completed the circle of confusion by claiming that these changes are somehow equivalent to genetics itself, a parallel universe of permanent changes separate from the DNA sequence.
I sympathize with him. But I think that this battle is better fought on the second point than the first, because the first one may already be lost. There may already be too many people who think of "epigenetic" as meaning something to do with changes in expression via histones, nucleosomes, and general DNA unwinding/presentation factors. There really does need to be a word to describe that suite of effects, and this (for better or worse) now seems as if it might be it. But the second part, the assumption that these are necessarily permanent, instead of mostly being another layer of temporary transcriptional control, that does need to be straightened out, and I think that it might still be possible.
Category: Biological News
April 23, 2013
Here's a fine piece from Matthew Herper over at Forbes on an IBM/Roche collaboration in gene sequencing. IBM had an interesting technology platform in the area, which they modestly called the "DNA transistor". For a while, it was going to be the Next Big Thing in the field (and the material at that last link was apparently written during that period). But sequencing is a very competitive area, with a lot of action in it these days, and, well. . .things haven't worked out.
Today Roche announced that they're pulling out of the collaboration, and Herper has some thoughts about what that tells us. His thoughts on the sequencing business are well worth a look, but I was particularly struck by this one:
Biotech is not tech. You'd think that when a company like IBM moves into a new field in biology, its vast technical expertise and innovativeness would give it an advantage. Sometimes, maybe, it does: with its supercomputer Watson, IBM actually does seem to be developing a technology that could change the way medicine is practiced, someday. But more often than not the opposite is true. Tech companies like IBM, Microsoft, and Google actually have dismal records of moving into medicine. Biology is simply not like semiconductors or software engineering, even when it involves semiconductors or software engineering.
And I'm not sure how much of the Watson business is hype, either, when it comes to biomedicine (a nonzero amount, at any rate). But Herper's point is an important one, and it's one that's been discussed many times on this site as well. This post is a good catch-all for them - it links back to the locus classicus of such thinking, the famous "Can A Biologist Fix a Radio?" article, as well as to more recent forays like Andy Grove (ex-Intel) and his call for drug discovery to be more like chip design. (Here's another post on these points).
One of the big mistakes that people make is in thinking that "technology" is a single category of transferable expertise. That's closely tied to another big (and common) mistake, that of thinking that the progress in computing power and electronics in general is the way that all technological progress works. (That, to me, sums up my problems with Ray Kurzweil). The evolution of microprocessing has indeed been amazing. Every field that can be improved by having more and faster computational power has been touched by it, and will continue to be. But if computation is not your rate-limiting step, then there's a limit to how much work Moore's Law can do for you.
And computational power is not the rate-limiting step in drug discovery or in biomedical research in general. We do not have polynomial-time algorithms for predictive toxicology, or for models of human drug efficacy. We hardly have any algorithms at all. Anyone who feels like remedying this lack (and making a few billion dollars doing so) is welcome to step right up.
Note: it's been pointed out in the comments that the cost-per-base of DNA sequencing has been dropping at an even faster rate than Moore's Law. So there is technological innovation going on in the biomedical field, outside of sheer computational power, but I'd still say that understanding is the real rate limiter. . .
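To put rough numbers on that comparison (round figures of my own choosing, not anyone's official statistics - whole-genome costs went from very roughly $10 million to very roughly $10,000 over 2007-2012):

```python
# Round-number comparison of cost-halving rates. All figures here are
# order-of-magnitude guesses for illustration, not official statistics.
import math

years = 5
moore_fold = 2 ** (years / 2)               # ~5.7x cheaper: Moore's Law pace
seq_fold = 1e7 / 1e4                        # ~1000x cheaper: sequencing pace
halving_time = years / math.log2(seq_fold)  # ~0.5 years per cost halving
print(moore_fold, seq_fold, halving_time)
```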
Category: Analytical Chemistry | Biological News | Drug Industry History
There's a possible new area for drug discovery that's coming from a very unexpected source: enzymes that don't do anything. About ten years ago, when the human genome was getting its first good combing-through, one of the first enzyme categories to get the full treatment was the kinases. But about ten per cent of them, on closer inspection, seemed to lack one or more key catalytic residues, leaving them with no known way to be active. They were dubbed (with much puzzlement) "pseudokinases", with their functions, if any, unknown.
As time went on and sequences piled up, the same situation was found for a number of other enzyme categories. One family in particular, the sulfotransferases, seems to have at least half of its putative members inactivated, which doesn't make a lot of sense, because these things also seem to be under selection pressure. So they're doing something, but what?
Answers are starting to be filled in. Here's a paper from last year, on some of the possibilities, and this article from Science is an excellent survey of the field. It turns out that many of these seem to have a regulatory function, often on their enzymatically active relations. Some of these pseudoenzymes retain the ability to bind their original substrates, and those events may also have a regulatory function in their downstream protein interactions. So these things may be a whole class of drug targets that we haven't screened for - and in fact may be a set of proteins that we're already hitting with some of our ligands, but with no idea that we're doing so. I doubt if anyone in drug discovery has ever bothered counterscreening against any of them, but it looks like that should change. Update: I stand corrected. See the comment thread for more.
This illustrates a few principles worth keeping in mind: first, that if something is under selection pressure, it surely has a function, even if you can't figure out how or why. (A corollary is that if some sequence doesn't seem to be under such constraints, it probably doesn't have much of a function at all, but as those links show, this is a contentious topic). Next, we should always keep in mind that we don't really know as much about cell biology as we think we do; there are lots of surprises and overlooked things waiting for us. And finally, any of those that appear to have (or retain) small-molecule binding sites are very much worth the attention of medicinal chemists, because so many other possible targets have nothing of the kind, and are a lot harder to deal with.
Category: Biological News
April 18, 2013
I've linked to some very skeptical takes on the ENCODE project, the effort that supposedly identified 80% of our DNA sequence as functional to some degree. I should present some evidence for the other side, though, as it comes up, and some may have come up.
Two recent papers in Cell tell the story. The first proposes "super-enhancers" as regulators of gene transcription. (Here's a brief summary of both). These are clusters of known enhancer sequences, which seem to recruit piles of transcription factors, and act differently from the single-enhancer model. The authors show evidence that these are involved in cell differentiation, and could well provide one of the key systems for determining eventual cellular identity from pluripotent stem cells.
Interest in further understanding the importance of Mediator in ESCs led us to further investigate enhancers bound by the master transcription factors and Mediator in these cells. We found that much of enhancer-associated Mediator occupies exceptionally large enhancer domains and that these domains are associated with genes that play prominent roles in ESC biology. These large domains, or super-enhancers, were found to contain high levels of the key ESC transcription factors Oct4, Sox2, Nanog, Klf4, and Esrrb to stimulate higher transcriptional activity than typical enhancers and to be exceptionally sensitive to reduced levels of Mediator. Super-enhancers were found in a wide variety of differentiated cell types, again associated with key cell-type-specific genes known to play prominent roles in control of their gene expression program.
On one level, this is quite interesting, because cellular differentiation is a process that we really need to know a lot more about (the medical applications are enormous). But as a medicinal chemist, this sort of news sort of makes me purse my lips, because we have enough trouble dealing with the good old fashioned transcription factors (whose complexes of proteins were already large enough, thank you). What role there might be for therapeutic intervention in these super-complexes, I couldn't say.
The second paper has more on this concept. They find that these "super-enhancers" are also important in tumor cells (which would make perfect sense), and that they tie into two other big stories in the field, the epigenetic regulator BRD4 and the multifunctional protein cMyc:
Here, we investigate how inhibition of the widely expressed transcriptional coactivator BRD4 leads to selective inhibition of the MYC oncogene in multiple myeloma (MM). BRD4 and Mediator were found to co-occupy thousands of enhancers associated with active genes. They also co-occupied a small set of exceptionally large super-enhancers associated with genes that feature prominently in MM biology, including the MYC oncogene. Treatment of MM tumor cells with the BET-bromodomain inhibitor JQ1 led to preferential loss of BRD4 at super-enhancers and consequent transcription elongation defects that preferentially impacted genes with super-enhancers, including MYC. Super-enhancers were found at key oncogenic drivers in many other tumor cells.
About 3% of the enhancers found in the multiple myeloma cell line turned out to be tenfold-larger super-enhancer complexes, which bring in about ten times as much BRD4. It's been recently discovered that small-molecule ligands for BRD4 have a large effect on the cMyc pathway, and now we may know one of the ways that happens. So that might be part of the answer to the question I posed above: how do you target these things with drugs? Find one of the proteins that it has to recruit in large numbers, and mess up its activity at a small-molecule binding site. And if these giant complexes are even more sensitive to disruptions in these key proteins than usual (as the paper hypothesizes), then so much the better.
It's fortunate that chromatin-remodeling proteins such as BRD4 are (at least in some cases) filling that role, because they have pretty well-defined binding pockets that we can target. Direct targeting of cMyc, by contrast, has been quite difficult indeed (here's a new paper with some background on what's been accomplished so far).
Now, as far as my cell biology expertise can judge, the evidence in these papers looks reasonably good. I'm certainly willing to believe that there are levels of transcriptional control beyond those that we've realized so far, weary sighs of a chemist aside. But I'll be interested to see the arguments over this concept play out. For example, if these very long stretches of DNA turn out indeed to be so important, how sensitive are they to mutation? One of the key objections to the ENCODE consortium's interpretation of their data is that much of what they're calling "functional" DNA seems to have little trouble drifting along and picking up random mutations. It will be worth applying this analysis to these super-regulators, but I haven't seen that done yet.
Category: Biological News | Cancer
March 22, 2013
I've written a couple of times about the work at the University of Pennsylvania on modified T-cell therapy for leukemia (CLL). Now comes word that a different version of this approach seems to be working at Sloan-Kettering. Recurrent B-cell acute lymphoblastic leukemia (B-ALL) has been targeted there, and it's generally a more aggressive disease than CLL.
As with the Penn CLL studies, when this technique works, it can be dramatic:
One of the sickest patients in the study was David Aponte, 58, who works on a sound crew for ABC News. In November 2011, what he thought was a bad case of tennis elbow turned out to be leukemia. He braced himself for a long, grueling regimen of chemotherapy.
Brentjens suggested that before starting the drugs, Aponte might want to have some of his T-cells stored (chemotherapy would deplete them). That way, if he relapsed, he might be able to enter a study using the cells. Aponte agreed.
At first, the chemo worked, but by summer 2012, while he was still being treated, tests showed the disease was back.
"After everything I had gone through, the chemo, losing hair, the sickness, it was absolutely devastating," Aponte recalled.
He joined the T-cell study. For a few days, nothing seemed to be happening. But then his temperature began to rise. He has no memory of what happened for the next week or so, but the journal article — where he is patient 5 — reports that his fever spiked to 105 degrees.
He was in the throes of a ‘‘cytokine storm,’’ meaning that the T-cells, in a furious battle with the cancer, were churning out enormous amounts of hormones called cytokines. Besides fever, the hormonal rush can make a patient’s blood pressure plummet and his heart rate shoot up. Aponte was taken to intensive care and treated with steroids to quell the reaction.
Eight days later, his leukemia was gone.
He and the other patients in the study all received bone marrow transplantations after the treatment, and are considered cured - which is remarkable, since they were all relapsed/refractory, and thus basically at death's door. These stories sound like the ones from the early days of antibiotics, with the important difference that resistance to drug therapy doesn't spread through the world's population of cancer cells. The modified T-cell approach has already gotten a lot of attention, and this is surely going to speed things up even more. I look forward to the first use of it for a non-blood-cell tumor (which appears to be in the works) and to further refinements in generating the cells themselves.
Category: Biological News | Cancer | Clinical Trials
March 21, 2013
AstraZeneca has announced another 2300 job cuts, this time in sales and administration. That's not too much of a surprise, as the cuts announced recently in R&D make it clear that the company is determined to get smaller. But their overall R&D strategy is still unclear, other than "We can't go on like this", which is clear enough.
One interesting item has just come out, though. The company has done a deal with Moderna Therapeutics of Cambridge (US), a relatively new outfit that's trying something that (as far as I know) no one else has had the nerve to try. Moderna is trying to use messenger RNAs as therapies, to stimulate the body's own cells to produce more of some desired protein product. This is the flip side of antisense and RNA interference, where you throw a wrench into the transcription/translation machinery to cut down on some protein. Moderna's trying to make the wheels spin in the other direction.
This is the sort of idea that makes me feel as if there are two people inhabiting my head. One side of me is very excited and interested to see if this approach will work, and the other side is very glad that I'm not one of the people being asked to do it. I've always thought that messing up or blocking some process was an easier task than making it do the right thing (only more so), and in this case, we haven't even reliably shown that blocking such RNA pathways is a good way to a therapy.
I also wonder about the disease areas that such a therapy would treat, and how amenable they are to the approach. The first one that occurs to a person is "Allow Type I diabetics to produce their own insulin", but if your islet cells have been disrupted or killed off, how is that going to work? Will other cell types recognize the mRNA-type molecules you're giving, and make some insulin themselves? If they do, what sort of physiological control will they be under? Beta-cells, after all, are involved in a lot of complicated signaling to tell them when to make insulin and when to lay off. I can also imagine this technique being used for a number of genetic disorders, where we know what the defective protein is and what it's supposed to be. But again, how does the mRNA get to the right tissues at the right time? Protein expression is under so many constraints and controls that it seems almost foolhardy to think that you could step in, dump some mRNA on the process, and get things to work the way that you want them to.
But all that said, there's no substitute for trying it out. And the people behind Moderna are not fools, either, so you can be sure that these questions (and many more) have crossed their minds already. (The company's press materials claim that they've addressed the cellular-specificity problem, for example). They've gotten a very favorable deal from AstraZeneca - admittedly a rather desperate company - but good enough that they must have a rather convincing story to tell with their internal data. This is the very picture of a high-risk, high-reward approach, and I wish them success with it. A lot of people will be watching very closely.
Category: Biological News | Business and Markets | Drug Development
March 15, 2013
There's another paper out expressing worries about the interpretation of the ENCODE data. (For the last round, see here). The wave of such publications seems to be largely a function of how quickly the various authors could assemble their manuscripts, and how quickly the review process has worked at the various journals. You get the impression that a lot of people opened up new word processor windows and started typing furiously right after all the press releases last fall.
This one, from W. Ford Doolittle at Dalhousie, explicitly raises a thought experiment that I think has occurred to many critics of the ENCODE effort. (In fact, it's the very one that showed up in a comment here to the last post I did on the subject). Here's how it goes: The expensive, toxic, only-from-licensed-sushi-chefs pufferfish (Takifugu rubripes) has about 365 million base pairs, with famously little of it looking like junk. By contrast, the marbled lungfish (Protopterus aethiopicus) has a humungous genome, 133 billion base pairs, which is apparently enough to code for three hundred different pufferfish with room to spare. Needless to say, the lungfish sequence features vast stretches of apparent junk DNA. Or does it need saying? If an ENCODE-style effort had used the marbled lungfish instead of humans as its template, would it have told us that 80% of its genome was functional? If it had done the pufferfish simultaneously, what would it have said about the difference between the two?
I'm glad that the new PNAS paper lays this out, because to my mind, that's a damned good question. One ENCODE-friendly answer is that the marbled lungfish has been under evolutionary pressure that the fugu pufferfish hasn't, and that it needs many more regulatory elements, spacers, and so on. But that, while not impossible, seems to be assuming the conclusion a bit too much. We can't look at a genome, decide that whatever we see is good and useful just because it's there, and then work out what its function must then be. That seems a bit too Panglossian: all is for the best in the best of all possible genomes, and if a lungfish needs one three hundred times larger than the fugu fish, well, it must be three hundred times harder to be a lungfish? Such a disparity between the genomes of two organisms, both of them (to a first approximation) running the "fish program", could also be explained by there being little evolutionary pressure against filling your DNA sequence with old phone books.
Here's an editorial at Nature about this new paper:
There is a valuable and genuine debate here. To define what, if anything, the billions of non-protein-coding base pairs in the human genome do, and how they affect cellular and system-level processes, remains an important, open and debatable question. Ironically, it is a question that the language of the current debate may detract from. As Ewan Birney, co-director of the ENCODE project, noted on his blog: “Hindsight is a cruel and wonderful thing, and probably we could have achieved the same thing without generating this unneeded, confusing discussion on what we meant and how we said it”
He's right - the ENCODE team could have presented their results differently, but doing that would not have made a gigantic splash in the world press. There wouldn't have been dozens of headlines proclaiming the "end of junk DNA" and the news that 80% of the genome is functional. "Scientists unload huge pile of genomic data analysis" doesn't have the same zing. And there wouldn't have been the response inside the industry that has, in fact, occurred. This comment from my first blog post on the subject is still very much worth keeping in mind:
With my science hat on I love this stuff, stepping into the unknown, finding stuff out. With my pragmatic, applied science, hard-nosed Drug Discovery hat on, I know that it is not going to deliver over the time frame of any investment we can afford to make, so we should stay away.
However, in my big Pharma, senior leaders are already jumping up and down, fighting over who is going to lead the new initiative in this exciting new area, who is going to set up a new group, get new resources, set up collaborations, get promoted etc. Oh, and deliver candidates within 3 years.
Our response to new basic science is dumb and we are failing our investors and patients. And we don't learn.
Category: Biological News
March 7, 2013
Every so often I've mentioned some of the work being done with atomic force microscopy (AFM), and how it might apply to medicinal chemistry. It's been used to confirm a natural product structural assignment, and then there are images like these. Now comes a report of probing a binding site with the technique. The group (a mixed team from Linz, Vienna, and Berlin) reconstituted functional uncoupling protein 1 (UCP1) in a lipid bilayer on a mica surface. Then they ran two different kinds of AFM tips across them - one with an ATP molecule attached, and another with an anti-UCP1 antibody, and with different tether lengths on them as well.
What they found was that ATP seems to be able to bind to either side of the protein (some of the UCPs in the bilayer were upside down). There also appears to be only one nucleotide binding site per UCP (in accordance with the sequence). That site is about 1.27 nm down into the central pore, which could well be a particular residue (R182) that is thought to protrude into the pore space. Interestingly, although ATP can bind while coming in from either direction, it has to go in deeper from one side than the other (which shows up in the measurements with different tether lengths). And that leads to the hypothesis that the deeper-binding mode sets off conformational changes in the protein that the shallow-binding mode doesn't - which could explain how the protein is able to function while its cytosolic side is being exposed to high concentrations of ATP.
For some reason, these sorts of direct physical measurements weird me out more than spectroscopic studies. Shining light or X-rays into something (or putting it into a magnetic field) just seems more removed. But a single molecule on an AFM tip seems, when a person's hand is on the dial, to somehow be the equivalent of a long, thin stick that we're using to poke the atomic-level structure. What can I say; a vivid imagination is no particular handicap in this business!
Category: Analytical Chemistry | Biological News
February 25, 2013
Last fall we had the landslide of data from the ENCODE project, along with a similar landslide of headlines proclaiming that 80% of the human genome was functional. That link shows that many people (myself included) were skeptical of this conclusion at the time, and since then others have weighed in with their own doubts.
A new paper, from Dan Graur at Houston (and co-authors from Houston and Johns Hopkins) is really stirring things up. And whether you agree with its authors or not, it's well worth reading - you just don't see thunderous dissents like this one in the scientific literature very often. Here, try this out:
Thus, according to the ENCODE Consortium, a biological function can be maintained indefinitely without selection, which implies that (at least 70%) of the genome is perfectly invulnerable to deleterious mutations, either because no mutation can ever occur in these “functional” regions, or because no mutation in these regions can ever be deleterious. This absurd conclusion was reached through various means, chiefly (1) by employing the seldom used “causal role” definition of biological function and then applying it inconsistently to different biochemical properties, (2) by committing a logical fallacy known as “affirming the consequent,” (3) by failing to appreciate the crucial difference between “junk DNA” and “garbage DNA,” (4) by using analytical methods that yield biased errors and inflate estimates of functionality, (5) by favoring statistical sensitivity over specificity, and (6) by emphasizing statistical significance rather than the magnitude of the effect.
Other than that, things are fine. The paper goes on to detailed objections in each of those categories, and the tone does not moderate. One of the biggest objections is around the use of the word "function". The authors are at pains to distinguish selected effect functions from causal role functions, and claim that one of the biggest shortcomings of the ENCODE claims is that they blur this boundary. "Selected effects" are what most of us think about as well-proven functions: a TATAAA sequence in the genome binds a transcription factor, with effects on the gene(s) downstream of it. If there is a mutation in this sequence, there will almost certainly be functional consequences (and these will almost certainly be bad). Imagine, however, a random sequence of nucleotides that's close enough to TATAAA to bind a transcription factor. But in this case, there are no functional consequences - genes aren't transcribed differently, and nothing really happens other than the transcription factor parking there once in a while. That's a "causal role" function, and the whopping majority of the ENCODE functions appear to be in this class. "It looks sort of like something that has a function, therefore it has one". And while this can lead to discoveries, you have to be careful:
The causal role concept of function can lead to bizarre outcomes in the biological sciences. For example, while the selected effect function of the heart can be stated unambiguously to be the pumping of blood, the heart may be assigned many additional causal role functions, such as adding 300 grams to body weight, producing sounds, and preventing the pericardium from deflating onto itself. As a result, most biologists use the selected effect concept of function. . .
A mutation in that random TATAAA-like sequence would be expected to be silent, compared to what would happen in a real binding motif. So one would want to know what percent of the genome is under selection pressure - that is, what part of it can't be mutated without something happening. Those studies are where we get the figures of perhaps 10% of the DNA sequence being functional. Almost all of what ENCODE has declared to be functional, though, can show mutations with relative impunity:
From an evolutionary viewpoint, a function can be assigned to a DNA sequence if and only if it is possible to destroy it. All functional entities in the universe can be rendered nonfunctional by the ravages of time, entropy, mutation, and what have you. Unless a genomic functionality is actively protected by selection, it will accumulate deleterious mutations and will cease to be functional. The absurd alternative, which unfortunately was adopted by ENCODE, is to assume that no deleterious mutations can ever occur in the regions they have deemed to be functional. Such an assumption is akin to claiming that a television set left on and unattended will still be in working condition after a million years because no natural events, such as rust, erosion, static electricity, and earthquakes can affect it. The convoluted rationale for the decision to discard evolutionary conservation and constraint as the arbiters of functionality put forward by a lead ENCODE author (Stamatoyannopoulos 2012) is groundless and self-serving.
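That principle is easy to caricature in code. Here's a toy simulation - mine, and nobody's model of a real genome - of a short "functional" element accumulating point mutations with and without purifying selection:

```python
# Toy drift-vs-selection simulation: a "functional" element stays
# recognizable only if selection keeps discarding the deleterious mutants.
import random

random.seed(1)
BASES = "ACGT"
MOTIF = "TATAAA" * 5  # a 30-bp "functional" element

def evolve(generations, purifying):
    seq = list(MOTIF)
    for _ in range(generations):
        i = random.randrange(len(seq))
        old = seq[i]
        seq[i] = random.choice(BASES)
        if purifying and seq[i] != MOTIF[i]:
            seq[i] = old  # the mutant is removed from the population
    return "".join(seq)

def identity(s):
    return sum(a == b for a, b in zip(s, MOTIF)) / len(MOTIF)

print(identity(evolve(500, purifying=False)))  # ~0.25: coin-flip identity
print(identity(evolve(500, purifying=True)))   # 1.0: the mark of function
```

Five hundred rounds of drift take the motif down to random-sequence identity; the only thing that keeps it recognizable is selection actively throwing the mutants away, which is exactly the criterion ENCODE declined to use.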
Basically, if you can't destroy a function by mutation, then there is no function to destroy. Even the most liberal definitions take this principle to apply to about 15% of the genome at most, so the 80%-or-more figure really does stand out. But this paper has more than philosophical objections to the ENCODE work. They point out that the consortium used tumor cell lines for its work, and that these are notoriously permissive in their transcription. One of the principles behind the 80% figure is that "if it gets transcribed, it must have a function", but you can't say that about HeLa cells and the like, which read off all sorts of pseudogenes and such (introns, mobile DNA elements, etc.).
One of the other criteria the ENCODE studies used for assigning function was histone modification. Now, this bears on a lot of hot topics in drug discovery these days, because an awful lot of time and effort is going into such epigenetic mechanisms. But (as this paper notes) this recent study illustrated that not all histone modifications are equal - there may, in fact, be a large number of silent ones. Another ENCODE criterion had to do with open (accessible) regions of chromatin, but there's a potential problem here, too:
They also found that more than 80% of the transcription start sites were contained within open chromatin regions. In yet another breathtaking example of affirming the consequent, ENCODE makes the reverse claim, and adds all open chromatin regions to the “functional” pile, turning the mostly true statement “most transcription start sites are found within open chromatin regions” into the entirely false statement “most open chromatin regions are functional transcription start sites.”
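It's worth putting invented round numbers (mine, not ENCODE's) on that fallacy to see how badly it misleads:

```python
# Base-rate arithmetic with made-up round numbers, for illustration only.
genome         = 3.0e9                # base pairs
open_chromatin = 0.15 * genome        # suppose 15% of the genome is "open"
tss_bases      = 50_000 * 100         # ~50k start sites at ~100 bp apiece
tss_in_open    = 0.80 * tss_bases     # "more than 80%" sit in open regions

print(tss_in_open / tss_bases)        # 0.8: the true statement
print(tss_in_open / open_chromatin)   # ~0.009: the claimed one, under 1%
```

Even with nearly all transcription start sites sitting in open chromatin, less than one percent of open chromatin would be a transcription start site.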
Similar arguments apply to the 8.5% of the genome that ENCODE assigns to transcription factor binding sites. When you actually try to experimentally verify function for such things, the huge majority of them fall out. (It's also noted that there are some oddities in ENCODE's definitions here - for example, they seem to be annotating 500-base stretches as transcription factor binding sites, when most of the verified ones are below 15 bases in length).
Now, it's true that the ENCODE studies did try to address the idea of selection on all these functional sequences. But this new paper has a lot of very caustic things to say about the way this was done, and I'll refer you to it for the full picture. To give you some idea, though:
By choosing primate specific regions only, ENCODE effectively removed everything that is of interest functionally (e.g., protein coding and RNA-specifying genes as well as evolutionarily conserved regulatory regions). What was left consisted among others of dead transposable and retrotransposable elements. . .
. . .Because polymorphic sites were defined by using all three human samples, the removal of two samples had the unfortunate effect of turning some polymorphic sites into monomorphic ones. As a consequence, the ENCODE data includes 2,136 alleles each with a frequency of exactly 0. In a miraculous feat of “next generation” science, the ENCODE authors were able to determine the frequencies of nonexistent derived alleles.
That last part brings up one of the objections that many people may have to this paper - it does take on a rather bitter tone. I actually don't mind it - who am I to object, given some of the things I've said on this blog? But it could be counterproductive, leading to arguments over the insults rather than arguments over the things being insulted (and over whether they're worthy of the scorn). People could end up waving their hands and running around shouting in all the smoke, rather than figuring out how much fire there is and where it's burning. The last paragraph of the paper is a good illustration:
The ENCODE results were predicted by one of its authors to necessitate the rewriting of textbooks. We agree, many textbooks dealing with marketing, mass-media hype, and public relations may well have to be rewritten.
Well, maybe that was necessary. The amount of media hype was huge, and the only way to counter it might be to try to generate a similar amount of noise. It might be working, or starting to work - normally, a paper like this would get no popular press coverage at all. But will it make CNN? The Science section of the New York Times? ENCODE's results certainly did.
But what the general public thinks about this controversy is secondary. The real fight is going to be here in the sciences, and some of it is going to spill out of academia and into the drug industry. As mentioned above, a lot of companies are looking at epigenetic targets, and a lot of companies would (in general) very much like to hear that there are a lot more potential drug targets than we know about. That was what drove the genomics frenzy back in 1999-2000, an era that was not without its consequences. The coming of the ENCODE data was (for some people) the long-delayed vindication of the idea that gene sequencing was going to lead to a vast landscape of new disease targets. There was already a comment on my entry at the time suggesting that some industrial researchers were jumping on the ENCODE work as a new area to work in, and it wouldn't surprise me to see many others thinking similarly.
But we're going to have to be careful. Transcription factors and epigenetic mechanisms are hard enough to work on, even when they're carefully validated. Chasing after ephemeral ones would truly be a waste of time. . .
More reactions around the science blogging world: Wavefunction, Pharyngula, SciLogs, Openhelix. And there are (and will be) many more.
Category: Biological News
February 13, 2013
We go through a lot of mice in this business. They're generally the first animal that a potential drug runs up against: in almost every case, you dose mice to check pharmacokinetics (blood levels and duration), and many areas have key disease models that run in mice as well. That's because we know a lot about mouse genetics (compared to other animals), and we have a wide range of natural mutants, engineered gene-knockout animals (difficult or impossible to do with most other species), and chimeric strains with all sorts of human proteins substituted back in. I would not wish to hazard a guess as to how many types of mice have been developed in biomedical labs over the years; it is a large number representing a huge amount of effort.
But are mice always telling us the right thing? I've written about this problem before, and it certainly hasn't gone away. The key things to remember about any animal model are that (1) it's a model, and (2) it's in an animal. Not a human. But it can be surprisingly hard to keep these in mind, because there's no way for a compound to become a drug other than by going through the mice, rats, etc. No regulatory agency on Earth (OK, with the possible exception of North Korea) will let a compound through unless it's been through numerous well-controlled animal studies, for short- and long-term toxicity at the very least.
These thoughts are prompted by an interesting and alarming paper that's come out in PNAS: "Genomic responses in mouse models poorly mimic human inflammatory diseases". And that's the take-away right there, which is demonstrated comprehensively and with attention to detail.
Murine models have been extensively used in recent decades to identify and test drug candidates for subsequent human trials. However, few of these human trials have shown success. The success rate is even worse for those trials in the field of inflammation, a condition present in many human diseases. To date, there have been nearly 150 clinical trials testing candidate agents intended to block the inflammatory response in critically ill patients, and every one of these trials failed. Despite commentaries that question the merit of an overreliance of animal systems to model human immunology, in the absence of systematic evidence, investigators and public regulators assume that results from animal research reflect human disease. To date, there have been no studies to systematically evaluate, on a molecular basis, how well the murine clinical models mimic human inflammatory diseases in patients.
What this large multicenter team has found is that while various inflammation stresses (trauma, burns, endotoxins) in humans tend to go through pretty much the same pathways, the same is not true for mice. Not only do they show very different responses from humans (as measured by gene up- and down-regulation, among other things), they show different responses to each sort of stress. Humans and mice differ in what genes are called on, in their timing and duration of expression, and in what general pathways these gene products are found. Mice are completely inappropriate models for any study of human inflammation.
And there are a lot of potential reasons why this turns out to be so:
There are multiple considerations to our finding that transcriptional response in mouse models reflects human diseases so poorly, including the evolutional distance between mice and humans, the complexity of the human disease, the inbred nature of the mouse model, and often, the use of single mechanistic models. In addition, differences in cellular composition between mouse and human tissues can contribute to the differences seen in the molecular response. Additionally, the different temporal spans of recovery from disease between patients and mouse models are an inherent problem in the use of mouse models. Late events related to the clinical care of the patients (such as fluids, drugs, surgery, and life support) likely alter genomic responses that are not captured in murine models.
But even with all the variables inherent in the human data, our inflammation response seems to be remarkably coherent. It's just not what you see in mice. Mice have had different evolutionary pressures over the years than we have; their heterogeneous response to various sorts of stress is what's served them well, for whatever reasons.
There are several very large and ugly questions raised by this work. All of us who do biomedical research know that mice are not humans (nor are rats, nor are dogs, etc.) But, as mentioned above, it's easy to take this as a truism - sure, sure, knew that - because all our paths to human go through mice and the like. The New York Times article on this paper illustrates the sort of habits that you get into (emphasis below added):
The new study, which took 10 years and involved 39 researchers from across the country, began by studying white blood cells from hundreds of patients with severe burns, trauma or sepsis to see what genes are being used by white blood cells when responding to these danger signals.
The researchers found some interesting patterns and accumulated a large, rigorously collected data set that should help move the field forward, said Ronald W. Davis, a genomics expert at Stanford University and a lead author of the new paper. Some patterns seemed to predict who would survive and who would end up in intensive care, clinging to life and, often, dying.
The group had tried to publish its findings in several papers. One objection, Dr. Davis said, was that the researchers had not shown the same gene response had happened in mice.
“They were so used to doing mouse studies that they thought that was how you validate things,” he said. “They are so ingrained in trying to cure mice that they forget we are trying to cure humans.”
“That started us thinking,” he continued. “Is it the same in the mouse or not?”
What's more, the article says that this paper was rejected from Science and Nature, among other venues. And one of the lead authors says that the reviewers mostly seemed to be saying that the paper had to be wrong. They weren't sure where things had gone wrong, but a paper saying that murine models were just totally inappropriate had to be wrong somehow.
We need to stop being afraid of the obvious, if we can. "Mice aren't humans" is about as obvious a statement as you can get, but the limitations of animal models are taken so much for granted that we actually dislike being told that they're even worse than we thought. We aren't trying to cure mice. We aren't trying to make perfect disease models and beautiful screening cascades. We aren't trying to perfectly match molecular targets with diseases, and targets with compounds. Not all the time, we aren't. We're trying to find therapies that work, and that goal doesn't always line up with those others. As painful as it is to admit.
Category: Animal Testing | Biological News | Drug Assays | Infectious Diseases
February 12, 2013
Since I mentioned the NIH in the context of the Molecular Libraries business, I wanted to bring up something else that a reader sent along to me. There's a persistent figure that gets floated whenever the agency talks about translational medicine: 4,500 diseases. Here's an example:
Therapeutic development is a costly, complex and time-consuming process. In recent years, researchers have succeeded in identifying the causes of more than 4,500 diseases. But it has proven difficult to turn such knowledge into new therapies; effective treatments exist for only about 250 of these conditions.
It shows up again in this paper, just out, and elsewhere. But is it true?
Do we really know the causes of 4,500 diseases? Outside of different cancer cellular types and various infectious agents, are there even 4,500 diseases, total? And if not, how many are there? I ask because that figure seems rather high. There are a lot of single-point-mutation genetic disorders to which we can pretty confidently assign a cause, but some of them (cystic fibrosis, for example) are considered one disease even though they can be arrived at through a variety of mutations. Beyond that, do we really know the absolute molecular-level cause of, say, type II diabetes? (We know a lot of very strong candidates, but the interplay between them, now, there's the rub). Alzheimer's? Arthritis? Osteoporosis? Even in the cases where we have a good knowledge of what the proximate cause of the trouble is (thyroid insufficiency, say, or Type I diabetes), do we really know what brought on that state, or how to prevent it? Sometimes, but not very often, is my impression. So where does this figure come from?
The best guess is here, GeneMap. But read the fine print: "Phenotypes include single-gene mendelian disorders, traits, some susceptibilities to complex disease . . . and some somatic cell genetic disease. . ." My guess is that a lot of what's under that banner does not rise to "knowing the cause", but I'd welcome being corrected on that point.
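If you wanted to audit that number yourself, the exercise is basically counting OMIM phenotype entries by their mapping key - (3) means the molecular basis of the disorder is known, which is the closest thing to "cause identified". A sketch of that tally, with the big caveat that the file name, tab-delimited layout, and column index here are my assumptions, not something to take on faith:

    import re

    def count_mapping_keys(path, phenotype_col=12):
        # Tally OMIM phenotype mapping keys: (3) = molecular basis known,
        # (1)/(2) = mapped but basis unknown, (4) = contiguous gene syndrome.
        # CAVEAT: the column position and line format are assumptions -
        # check the header comments of the actual file before trusting this.
        counts = {}
        with open(path) as fh:
            for line in fh:
                if line.startswith("#"):
                    continue
                fields = line.rstrip("\n").split("\t")
                if len(fields) <= phenotype_col:
                    continue
                for key in re.findall(r"\((\d)\)", fields[phenotype_col]):
                    counts[key] = counts.get(key, 0) + 1
        return counts

    print(count_mapping_keys("genemap2.txt"))  # file name hypothetical

My bet is that the count of (3) entries is where most of the 4,500 comes from - and "molecular basis known" for a mendelian phenotype is a much weaker statement than "we know the cause of this disease".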
Category: Biological News
January 30, 2013
Here are some angry views that I don't necessarily endorse, but I can't say that they're completely wrong, either. A programmer bids a bitter farewell to the bioinformatics world:
Bioinformatics is an attempt to make molecular biology relevant to reality. All the molecular biologists, devoid of skills beyond those of a laboratory technician, cried out for the mathematicians and programmers to magically extract science from their mountain of shitty results.
And so the programmers descended and built giant databases where huge numbers of shitty results could be searched quickly. They wrote algorithms to organize shitty results into trees and make pretty graphs of them, and the molecular biologists carefully avoided telling the programmers the actual quality of the results. When it became obvious to everyone involved that a class of results was worthless, such as microarray data, there was a rush of handwaving about “not really quantitative, but we can draw qualitative conclusions” followed by a hasty switch to a new technique that had not yet been proved worthless.
And the databases grew, and everyone annotated their data by searching the databases, then submitted in turn. No one seems to have pointed out that this makes your database a reflection of your database, not a reflection of reality. Pull out an annotation in GenBank today and it’s not very long odds that it’s completely wrong.
That's unfair to molecular biologists, but is it unfair to the state of bioinformatic databases? Comments welcome. . .
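The circularity complaint, at least, is easy to make concrete. If most new sequences get annotated by copying the label of their best database hit, and only a few ever get checked experimentally, errors don't wash out - they compound. A toy simulation (every number here is invented for illustration):

    import random

    random.seed(1)

    def simulate(rounds=10, initial_db=1000, wrong_frac=0.10,
                 new_per_round=1000, experiment_frac=0.05):
        # True = a wrong annotation. Start with a database that's 10% wrong.
        db = [random.random() < wrong_frac for _ in range(initial_db)]
        for r in range(rounds):
            for _ in range(new_per_round):
                if random.random() < experiment_frac:
                    db.append(False)  # rare case: annotated by actual experiment
                else:
                    # usual case: copy the label of a random "best hit",
                    # inheriting its error plus a little new transfer error
                    inherited = random.choice(db)
                    db.append(inherited or random.random() < 0.02)
            print("round %2d: %.1f%% of annotations wrong"
                  % (r + 1, 100.0 * sum(db) / len(db)))

    simulate()

Run that and the error rate climbs steadily toward an equilibrium well above where it started - the database becomes, as the rant says, a reflection of the database.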
Update: more comments on this at Ycombinator.
Category: Biological News | In Silico
January 15, 2013
Like many people, I have a weakness for "We've had it all wrong!" explanations. Here's another one, or part of one: is obesity an infectious disease?
During our clinical studies, we found that Enterobacter, a genus of opportunistic, endotoxin-producing pathogens, made up 35% of the gut bacteria in a morbidly obese volunteer (weight 174.8 kg, body mass index 58.8 kg m−2) suffering from diabetes, hypertension and other serious metabolic deteriorations. . .
. . .After 9 weeks on (a special diet), this Enterobacter population in the volunteer's gut reduced to 1.8%, and became undetectable by the end of the 23-week trial, as shown in the clone library analysis. The serum–endotoxin load, measured as LPS-binding protein, dropped markedly during weight loss, along with substantial improvement of inflammation, decreased level of interleukin-6 and increased adiponectin. Metagenomic sequencing of the volunteer's fecal samples at 0, 9 and 23 weeks on the WTP diet confirmed that during weight loss, the Enterobacteriaceae family was the most significantly reduced population. . .
They went on to do the full Koch workup, by taking an isolated Enterobacter strain from the human patient and introducing it into gnotobiotic (germ-free) mice. These mice are usually somewhat resistant to becoming obese on a high-fat diet, but after being inoculated with the bacterial sample, they put on substantial weight, became insulin resistant, and showed numerous (consistent) alterations in their lipid and glucose handling pathways. Interestingly, the germ-free mice that were inoculated with bacteria and fed normal chow did not show these effects.
The hypothesis is that the endotoxin-producing bacteria are causing a low-grade chronic inflammation in the gut, which is exacerbated to a more systemic form by the handling of excess lipids and fatty acids. The endotoxin itself may be swept up in the chylomicrons and translocated through the gut wall. The summary:
. . .This work suggests that the overgrowth of an endotoxin-producing gut bacterium is a contributing factor to, rather than a consequence of, the metabolic deteriorations in its human host. In fact, this strain B29 is probably not the only contributor to human obesity in vivo, and its relative contribution needs to be assessed. Nevertheless, by following the protocol established in this study, we hope to identify more such obesity-inducing bacteria from various human populations, gain a better understanding of the molecular mechanisms of their interactions with other members of the gut microbiota, diet and host for obesity, and develop new strategies for reducing the devastating epidemic of metabolic diseases.
Considering the bacterial origin of ulcers, I think this is a theory that needs to be taken seriously, and I'm glad to see it getting checked out. We've been hearing a lot the last few years about the interaction between human physiology and our associated bacterial population, and the attention is deserved. The problem is, we're only beginning to understand what these ecosystems are like, how they can be disordered, and what the consequences are. Anyone telling you that they have it figured out at this point is probably trying to sell you something. It's worth the time to figure out, though. . .
Category: Biological News | Diabetes and Obesity | Infectious Diseases
January 14, 2013
Picking up on that reactive oxygen species (ROS) business from the other day (James Watson's paper suggesting that it could be a key anticancer pathway), I wanted to mention this new paper, called to my attention this morning by a reader. It's from a group at Manchester studying regeneration of tissue in Xenopus tadpoles, and they note high levels of intracellular hydrogen peroxide in the regenerating tissue. Moreover, antioxidant treatment impaired the regeneration, as did genetic manipulation of ROS generation.
Now, inflammatory cells are known to produce plenty of ROS, and they're also involved in tissue injury. But that doesn't seem to be quite the connection here, because the tissue ROS levels peaked before the recruitment of such cells did. (This is consistent with previous work in zebrafish, which also showed hydrogen peroxide as an essential signal in wound healing). The Manchester group was able to genetically impair ROS generation by knocking down a protein in the NOX enzyme complex, a major source of ROS production. This also impaired regeneration, an effect that could be reversed in a rescue experiment.
Further experiments implicated Wnt/β-catenin signaling in this process, which is certainly plausible, given the position of that cascade in cellular processes. That also ties in with a 2006 report of hydrogen peroxide signaling through this pathway (via a protein called nucleoredoxin).
You can see where this work is going, and so can the authors:
. . .our work suggests that increased production of ROS plays a critical role in facilitating Wnt signalling following injury, and therefore allows the regeneration program to commence. Given the ubiquitous role of Wnt signalling in regenerative events, this finding is intriguing as it might provide a general mechanism for injury-induced Wnt signalling activation across all regeneration systems, and furthermore, manipulating ROS may provide a means to induce the activation of a regenerative program in those cases where regeneration is normally limited.
Most of us reading this site belong to one of those regeneration-limited species, but perhaps it doesn't always have to be this way? Taken together, it does indeed look like (1) ROS (hydrogen peroxide among others) are important intracellular signaling molecules (which conclusion has been clear for some time now), and (2) the pathways involved are crucial growth and regulatory ones, relating to apoptosis, wound healing, cancer, the effects of exercise, all very nontrivial things indeed, and (3) these pathways would appear to be very high-value ones for pharmaceutical intervention (stay tuned).
As a side note, Paracelsus has once again been reaffirmed: the dose does indeed make the poison, as does its timing and location. Water can drown you, oxygen can help burn you, but both of them keep you alive.
Category: Biological News
January 11, 2013
The line under James Watson's name reads, of course, "Co-discoverer of DNA's structure. Nobel Prize". But it could also read "Provocateur", since he's been pretty good at that over the years. He seems to have the right personality for it - both The Double Helix (fancy new edition there) and its notorious follow-up volume Avoid Boring People illustrate the point. There are any number of people who've interacted with him over the years who can't stand the guy.
But it would be a simpler world if everyone that we found hard to take was wrong about everything, wouldn't it? I bring this up because Watson has published an article, again deliberately provocative, called "Oxidants, Antioxidants, and the Current Incurability of Metastatic Cancers". Here's the thesis:
The vast majority of all agents used to directly kill cancer cells (ionizing radiation, most chemotherapeutic agents and some targeted therapies) work through either directly or indirectly generating reactive oxygen species that block key steps in the cell cycle. As mesenchymal cancers evolve from their epithelial cell progenitors, they almost inevitably possess much-heightened amounts of antioxidants that effectively block otherwise highly effective oxidant therapies.
The article is interesting throughout, but can fairly be described as "rambling". He starts with details of the complexity of cancerous mutations, which is a topic that's come up around here several times (as it does wherever potential cancer therapies are discussed, at least by people with some idea of what they're talking about). Watson is paying particular attention here to mesenchymal tumors:
Resistance to gene-targeted anti-cancer drugs also comes about as a consequence of the radical changes in underlying patterns of gene expression that accompany the epithelial-to-mesenchymal cell transitions (EMTs) that cancer cells undergo when their surrounding environments become hypoxic. EMTs generate free-floating mesenchymal cells whose flexible shapes and still high ATP-generating potential give them the capacity for amoeboid cell-like movements that let them metastasize to other body locations (brain, liver, lungs). Only when they have so moved do most cancers become truly life-threatening. . .
. . .Unfortunately, the inherently very large number of proteins whose expression goes either up or down as the mesenchymal cancer cells move out of quiescent states into the cell cycle makes it still very tricky to know, beyond the cytokines, what other driver proteins to focus on for drug development.
That it does. He makes the case (as have others) that Myc could be one of the most important protein targets - and notes (as have others!) that drug discovery efforts against the Myc pathway have run into many difficulties. There's a good amount of discussion about BRD4 compounds as a way to target Myc. Then he gets down to the title of the paper and starts talking about reactive oxygen species (ROS). Links in the section below added by me:
That elesclomol promotes apoptosis through ROS generation raises the question whether much more, if not most, programmed cell death caused by anti-cancer therapies is also ROS-induced. Long puzzling has been why the highly oxygen sensitive ‘hypoxia-inducible transcription factor’ HIF1α is inactivated by both the, until now thought very differently acting, ‘microtubule binding’ anti-cancer taxanes such as paclitaxel and the anti-cancer DNA intercalating topoisomerases such as topotecan or doxorubicin, as well as by frame-shifting mutagens such as acriflavine. All these seemingly unrelated facts finally make sense by postulating that not only does ionizing radiation produce apoptosis through ROS but also today's most effective anti-cancer chemotherapeutic agents as well as the most efficient frame-shifting mutagens induce apoptosis through generating the synthesis of ROS. That the taxane paclitaxel generates ROS through its binding to DNA became known from experiments showing that its relative effectiveness against cancer cell lines of widely different sensitivity is inversely correlated with their respective antioxidant capacity. A common ROS-mediated way through which almost all anti-cancer agents induce apoptosis explains why cancers that become resistant to chemotherapeutic control become equally resistant to ionizing radiotherapy. . .
. . .The fact that cancer cells largely driven by RAS and Myc are among the most difficult to treat may thus often be due to their high levels of ROS-destroying antioxidants. Whether their high antioxidative level totally explains the effective incurability of pancreatic cancer remains to be shown. The fact that late-stage cancers frequently have multiple copies of RAS and MYC oncogenes strongly hints that their general incurability more than occasionally arises from high antioxidant levels.
He adduces a number of other pieces of supporting evidence for this line of thought, and then he gets to the take-home message:
For as long as I have been focused on the understanding and curing of cancer (I taught a course on Cancer at Harvard in the autumn of 1959), well-intentioned individuals have been consuming antioxidative nutritional supplements as cancer preventatives if not actual therapies. The past, most prominent scientific proponent of their value was the great Caltech chemist, Linus Pauling, who near the end of his illustrious career wrote a book with Ewan Cameron in 1979, Cancer and Vitamin C, about vitamin C's great potential as an anti-cancer agent. At the time of his death from prostate cancer in 1994, at the age of 93, Linus was taking 12 g of vitamin C every day. In light of the recent data strongly hinting that much of late-stage cancer's untreatability may arise from its possession of too many antioxidants, the time has come to seriously ask whether antioxidant use much more likely causes than prevents cancer.
All in all, the by now vast number of nutritional intervention trials using the antioxidants β-carotene, vitamin A, vitamin C, vitamin E and selenium have shown no obvious effectiveness in preventing gastrointestinal cancer nor in lengthening mortality. In fact, they seem to slightly shorten the lives of those who take them. Future data may, in fact, show that antioxidant use, particularly that of vitamin E, leads to a small number of cancers that would not have come into existence but for antioxidant supplementation. Blueberries best be eaten because they taste good, not because their consumption will lead to less cancer.
Now this is quite interesting. The first thing I thought of when I read this was the work on ROS in exercise, which showed that taking antioxidants appeared to cancel out the benefits of exercise, probably because reactive oxygen species are part of the intracellular signaling that triggers those adaptations. Taken together, I think we need to seriously consider whether efforts to control ROS are, in fact, completely misguided. They are, perhaps, "essential poisons", without which our cellular metabolism loses its way.
Update: I should also note the work of Joan Brugge's lab in this area, blogged about here. Taken together, you'd really have to advise against cancer patients taking antioxidants, wouldn't you?
Watson ends the article by suggesting, none too diplomatically, that much current cancer research is misguided:
The now much-touted genome-based personal cancer therapies may turn out to be much less important tools for future medicine than the newspapers of today lead us to hope. Sending more government cancer monies towards innovative, anti-metastatic drug development to appropriate high-quality academic institutions would better use National Cancer Institute's (NCI) monies than the large sums spent now testing drugs for which we have little hope of true breakthroughs. The biggest obstacle today to moving forward effectively towards a true war against cancer may, in fact, come from the inherently conservative nature of today's cancer research establishments. They still are too closely wedded to moving forward with cocktails of drugs targeted against the growth promoting molecules (such as HER2, RAS, RAF, MEK, ERK, PI3K, AKT and mTOR) of signal transduction pathways instead of against Myc molecules that specifically promote the cell cycle.
He singles out the Cancer Genome Atlas project as an example of this sort of thing, saying that while he initially supported it, he no longer does. It will, he maintains, tend to find mostly cancer cell "drivers" as opposed to "vulnerabilities". He's more optimistic about a big RNAi screening effort that's underway at his own Cold Spring Harbor, although he admits that this enthusiasm is "far from universally shared".
We'll find out which is the more productive approach - I'm glad that they're all running, personally, because I don't think I know enough to bet it all on one color. If Watson is right, Pfizer might be the biggest beneficiary in the drug industry - if, and it's a big if, the RNAi screening unearths druggable targets. This is going to be a long-running story - I'm sure that we'll be coming back to it again and again. . .
Category: Biological News | Cancer
December 21, 2012
This can't be good. A retraction in PNAS on some RNA-driven cell death research from a lab at Caltech:
Anomalous experimental results observed by multiple members of the Pierce lab during follow-on studies raised concerns of possible research misconduct. An investigation committee of faculty at the California Institute of Technology indicated in its final report on this matter that the preponderance of the evidence and the reasons detailed in the report established that the first author falsified and misrepresented data published in this paper. An investigation at the United States Office of Research Integrity is ongoing.
As that link from Retraction Watch notes, the first author himself was not one of the signees of that retraction statement - as one might well imagine - and he now appears to be living in London. He left quite a mess behind in Pasadena.
Category: Biological News | The Dark Side | The Scientific Literature
December 12, 2012
Rongxiang Xu is upset with this year's Nobel Prize award for stem cell research. He believes that work he did is so closely related to the subject of the prize that. . .he wants his name on it? No, apparently not. That he wants some of the prize money? Nope, not that either. That he thinks the prize was wrongly awarded? No, he's not claiming that.
What he's claiming is that the Nobel Committee has defamed his reputation as a stem cell pioneer by leaving him off, and he wants damages. Now, this is a new one, as far as I know. The closest example comes from 2003, when there was an ugly controversy over the award for NMR imaging (here's a post from the early days of this blog about it). Dr. Raymond Damadian took out strongly worded (read "hopping mad") advertisements in major newspapers claiming that the Nobel Committee had gotten the award wrong, and that he should have been on it. In vain. The Nobel Committee(s) have never backed down in such a case - although there have been some where you could make a pretty good argument - and they never will, as far as I can see.
Xu, who works in Los Angeles, is founder and chairman of the Chinese regenerative medicine company MEBO International Group. The company sells a proprietary moist-exposed burn ointment (MEBO) that induces "physiological repair and regeneration of extensively wounded skin," according to the company's website. Application of the wound ointment, along with other treatments, reportedly induces embryonic epidermal stem cells to grow in adult human skin cells. . .
. . .Xu's team allegedly awakened intact mature somatic cells to turn to pluripotent stem cells without engineering in 2000. Therefore, Xu claims, the Nobel statement undermines his accomplishments, defaming his reputation.
Now, I realize that I'm helping, in my small way, to give this guy publicity, which is one of the things he most wants out of this effort. But let me make myself clear - I'm giving him publicity in order to roll my eyes at him. I look forward to following Xu's progress through the legal system, and I'll bet his legal team looks forward to it as well, as long as things are kept on a steady payment basis.
Category: Biological News
November 8, 2012
We're getting closer to real-time X-ray structures of protein function, and I think I speak for a lot of chemists and biologists when I say that this has been a longstanding dream. X-ray structures, when they work well, can give you atomic-level structural data, but they've been limited to static time scales. In the old, old days, structures of small molecules were a lot of work, and the structure of a protein took years of hard labor and was obvious Nobel Prize material. As time went on, brighter X-ray sources and much better detectors sped things up (since a lot of the X-rays diffracted from a large molecule are of very low intensity), and computing power came along to crunch through the piles of data thus generated. These days, X-ray structures are generated for systems of huge complexity and importance. Working at that level is no stroll through the garden, but more tractable protein structures are generated almost routinely (although growing good protein crystals is still something of a dark art, and is accomplished through what can accurately be called enlightened brute force).
But even with synchrotron X-ray sources blasting your crystals, you're still getting a static picture. And proteins are not static objects; the whole point of them is how they move (and for enzymes, how they get other molecules to move in their active sites). I've heard Barry Sharpless quoted to the effect that understanding an enzyme by studying its X-ray structures is like trying to get to know a person by visiting their corpse. I haven't heard him say that (although it sounds like him!), but whoever said it was correct.
Comes now this paper in PNAS, a multinational effort with the latest on the attempts to change that situation. The team is looking at photoactive yellow protein (PYP), a blue-light receptor protein from a purple sulfur bacterium. Those guys vigorously swim away from blue light, which they find harmful, and this seems to be the receptor that alerts them to its presence. And the inner workings of the protein are known, to some extent. There's a p-coumaric acid in there, bound to a Cys residue, and when blue light hits it, the double bond switches from trans to cis. The resulting conformational change is the signaling event.
But while knowing things at that level is fine (and took no small amount of work), there are still a lot of questions left unanswered. The actual isomerization is a single-photon event and happens in a picosecond or two. But the protein changes that happen after that, well, those are a mess. A lot of work has gone into trying to unravel what moves where, and when, and how that translates into a cellular signal. And although this is a mere purple sulfur bacterium (What's so mere? They've been on this planet a lot longer than we have), these questions are exactly the ones that get asked about protein conformational signaling all through living systems. The rods and cones in your eyes are doing something very similar as you read this blog post, as are the neurotransmitter receptors in your optic nerves, and so on.
This technique, variations of which have been coming on for some years now, uses multiple wavelengths of X-rays simultaneously, and scans them across large protein crystals. Adjusting the timing of the X-ray pulse relative to the light pulse that sets off the protein motion gives you time-resolved structures - that is, if you have extremely good equipment, world-class technique, and vast amounts of patience. (For one thing, this has to be done over and over again from many different angles).
And here's what's happening: first off, the cis structure is quite weird. The carbonyl is 90 degrees out of the plane, making (among other things) a very transient hydrogen bond with a backbone nitrogen. Several dihedral angles have to be distorted to accommodate this, and it's a testament to the weirdness of protein active sites that it exists at all. It then twangs back to a planar conformation, but at the cost of breaking another hydrogen bond back at the phenolate end of things. That leaves another kind of strain in the system, which is relieved by a shift to yet another intermediate structure through a dihedral rotation, and that one in turn goes through a truly messy transition to a blue-shifted intermediate. That involves four hydrogen bonds and a 180-degree rotation in a dihedral angle, and seems to be the weak link in the whole process - about half the transitions fail and flop back to the ground state at that point. That also lets a crucial water molecule into the mix, which sets up the transition to the actual signaling state of the protein.
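Since about half the molecules wash out at that messy four-hydrogen-bond step, the overall yield of signaling states is set largely by one transition. A toy Monte Carlo version of the photocycle as described above - the chain of states and the 50% branch come from the paper's description, the rest is invented scaffolding:

    import random

    random.seed(0)

    # The photocycle as a chain of intermediates; at the messy transition
    # (four hydrogen bonds, a 180-degree dihedral rotation), roughly half
    # the molecules fail and relax back to the ground state.
    STATES = ["twisted-cis", "planar", "strained", "blue-shifted", "signaling"]
    FAIL_TRANSITION = 2    # strained -> blue-shifted, the weak link
    FAIL_PROB = 0.5        # from the paper: about half flop back

    def run_photocycle():
        for i in range(len(STATES) - 1):
            if i == FAIL_TRANSITION and random.random() < FAIL_PROB:
                return "ground"        # failed, back to the start
        return STATES[-1]

    trials = 100000
    ok = sum(run_photocycle() == "signaling" for _ in range(trials))
    print("%.1f%% of photoexcited molecules reach the signaling state"
          % (100.0 * ok / trials))

That's deliberately simple-minded - real kinetics would need rate constants for every step - but it's the right mental model for a branched photocycle.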
If you want more details, the paper is open-access, and includes movie files of these transitions and much more detail on what's going on. What we're seeing is light energy being converted (and channeled) into structural strain energy. I find this sort of thing fascinating, and I hope that the technique can be extended in the way the authors describe:
The time-resolved methodology developed for this study of PYP is, in principle, applicable to any other crystallizable protein whose function can be directly or indirectly triggered with a pulse of light. Indeed, it may prove possible to extend this capability to the study of enzymes, and literally watch an enzyme as it functions in real time with near-atomic spatial resolution. By capturing the structure and temporal evolution of key reaction intermediates, picosecond time-resolved Laue crystallography can provide an unprecedented view into the relations between protein structure, dynamics, and function. Such detailed information is crucial to properly assess the validity of theoretical and computational approaches in biophysics. By combining incisive experiments and theory, we move closer to resolving reaction pathways that are at the heart of biological functions.
Speed the day. That's the sort of thing we chemists need to really understand what's going on at the molecular level, and to start making our own enzymes to do things that Nature never dreamed of.
Category: Analytical Chemistry | Biological News | Chemical Biology | Chemical News
October 10, 2012
A deserved Nobel? Absolutely. But the grousing has already started. The 2012 Nobel Prize for Chemistry has gone to Bob Lefkowitz (Duke) and Brian Kobilka (Stanford) for GPCRs, G-protein coupled receptors.
Update: here's an excellent overview of Kobilka's career and research.
Everyone who's done drug discovery knows what GPCRs are, and most of us have worked on molecules to target them at one point or another. At least a third of marketed drugs, after all, are GPCR ligands, so their importance is hard to overstate. That's why I say that this Nobel is completely deserved (and has been anticipated for some time now). I've written about them numerous times here over the years, and I'm going to forgo the chance to explain them in detail again. For more information I can recommend the Nobel site's popular background and their more detailed scientific background - they've already done the explanatory work.
I will say a bit about where GPCRs fit into the world of drug targets, though, since they've been so important to pharma R&D. Everyone had realized, for decades (more like centuries), that cells had to be able to send signals to each other somehow. But how was this done? No matter what, there had to be some sort of transducer mechanism, because any signal would arrive on the outside of the cell membrane and then (somehow) be carried across and set off activity inside the cell. As it became clear that small molecules (both the body's own and artificial ones from outside) could have signaling effects, the idea of a "receptor" became inescapable. But it's worth remembering that up until the mid-1970s you could find people - in print, no less - warning readers that the idea of a receptor as a distinct physical object was unproven and could be an unwarranted assumption. Everyone knew that molecular signals were being handled somehow, but it was very unclear what (or how many) pieces there were to the process. This year's award recognizes the lifting of that fog.
It also recognizes something else very important, and here I want to rally my fellow chemists. As I mentioned above, the complaints are already starting that this is yet another chemistry prize that's been given to the biologists. But this is looking at things the wrong way around. Biology isn't invading chemistry - biology is turning into chemistry. Giving the prize this year to Lefkowitz and Kobilka takes us from the first cloning of a GPCR (biology, biology all the way) to a detailed understanding of their molecular structure (chemistry!) And that's the story of molecular biology for you, right there. As it lives up to its name, its practitioners have had to start thinking of their tools and targets as real, distinct molecules. They have shapes, they have functional groups, they have stereochemistry and localized charges and conformations. They're chemicals. That's what kept occurring to me at the recent chemical biology conference I attended: anyone who's serious about understanding this stuff has to understand it in terms of chemistry, not in terms of "this square interacts with this circle, which has an arrow to this box over here, which cycles to this oval over here with a name in the middle of it. . ." Those old schematics will only take you so far.
So, my fellow chemists, cheer the hell up already. Vast new territories are opening up to our expertise and our ways of looking at the world, and we're going to be needed to understand what to do next. Too many people are making me think of those who objected to the Louisiana Purchase or the annexation of California, who wondered what we could possibly ever want with those trackless wastelands to the West and how they could ever be part of the country. Looking at molecular biology and sighing "But it's not chemistry. . ." misses the point. I've had to come around to this view myself, but more and more I'm thinking it's the right one.
Category: Biological News | Chemical News
September 13, 2012
You'll have heard about the massive data wave that hit (30 papers!) courtesy of the ENCODE project. That stands for Encyclopedia of DNA Elements, and it's been a multiyear effort to go beyond the bare sequence of human DNA and look for functional elements. We already know that only around 1% of the human sequence is made up of what we can recognize as real, traditional genes: stretches that code for proteins, have start and stop codons, and so on. And it's not like that's so straightforward, either, what with all the introns and whatnot. But that leaves an awful lot of DNA that's traditionally been known by the disparaging name of "junk", and surely it can't all just be that - can it?
Some of it does its best to make you think that way, for sure. Transposable elements like Alu sequences, which are repeated relentlessly hundreds of thousands of times throughout the human DNA sequence, must be junk, inert spacer, or so wildly important that we just can't have too many copies of them. But DNA is three-dimensional (and how), and its winding and unwinding is crucial to gene expression. Surely a good amount of that apparently useless stuff is involved in these processes and other epigenetic phenomena.
And the ENCODE group has indeed discovered a lot of this sort of thing. But as this excellent overview from Brendan Maher at Nature shows, it hasn't discovered quite as much as the headlines might lead you to think. (And neither has it demolished the idea that the noncoding 99% of the genome is all junk, because you can't find anyone who actually believed that, either). The figure that's in all the press writeups is that this work has assigned functions for 80% of the human genome, which would be an astonishing figure on several levels. For one thing, it would mean that we'd certainly missed an awful lot before, and for another, it would mean that the genome is a heck of a lot more information-rich than we ever thought it might be.
But neither of those quite seems to be the case. It all depends on what you mean by "functional", and opinions most definitely vary. See this post by Ed Yong for some of the categories, which range out to some pretty broad, inclusive definitions of "function". A better estimate is that maybe 20% of the genome can directly influence gene expression, which is very interesting and useful, but ain't no 80%, either. That Nature post provides a clear summary of the arguments about these figures.
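Underneath the 80%-versus-20% argument is a plain interval-coverage calculation: pick your definition of "functional", merge all the annotated regions that qualify, and divide the merged length by the genome length. A minimal sketch, with invented coordinates:

    def covered_fraction(intervals, genome_length):
        # Merge possibly overlapping [start, end) intervals, then sum lengths.
        merged = []
        for start, end in sorted(intervals):
            if merged and start <= merged[-1][1]:
                merged[-1][1] = max(merged[-1][1], end)
            else:
                merged.append([start, end])
        return sum(e - s for s, e in merged) / float(genome_length)

    # Toy example: three "functional" annotations on a 1000-bp genome
    annotations = [(100, 400), (350, 500), (800, 900)]
    print("%.0f%% covered" % (100 * covered_fraction(annotations, 1000)))

The arithmetic is trivial; everything rides on which annotations you let into the list. A permissive definition ("touched by any biochemical signal") pushes the union toward 80%; a strict one ("directly influences gene expression") pulls it back toward 20%.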
But even that more-solid 20% figure is going to keep us all busy for a long time. Learning how to affect these gene-transcription mechanisms should be a very important route to new therapies. If you remember all the hype about how the genome was going to unlock cures to everything - well, this is the level we're actually going to have to work at to make anything in that line come true. There's a lot of work to be done, though. Somehow, different genes are expressed at different times, in different people, in response to a huge variety of environmental cues. It's quite a tangle, but in theory, it's a tangle that can be unraveled, and as it is, it's going to provide a lot of potential targets for therapy. Not easy targets, mind you - those are probably gone - but targets nonetheless.
One of the best ways to get a handle on all this work is this very interesting literature experiment at Nature - a portal into the ENCODE project data, organized thematically, and with access to all the papers involved across the different journals. If you're interested in epigenetics at all, this is a fine place to read up on the results of this work. And if you're not, it's still worth exploring to see how the scientific literature might be presented and curated. This approach, it seems to me, potentially adds a great deal of value. Eventually, the PDF-driven looks-like-a-page approach to the literature will go extinct, and something else will replace it. Some of it might look a bit like this.
Note, just for housekeeping purposes - I wrote this post for last Friday, but only realized today that it didn't publish, thus the lack of an entry that day. So here it is, better late, I hope, than never. There's more to say about epigenetics, too, naturally. . .
Category: Biological News | The Scientific Literature
September 6, 2012
The NIH has been cutting back on its funding (via the National Library of Medicine) for a number of external projects. One of those on the chopping block is the Biological Magnetic Resonance Bank (BMRB), at Wisconsin:
The BMRB mission statement is to “collect, annotate, archive and disseminate (worldwide in the public domain)” NMR data on biological macromolecules and metabolites, to “empower scientists” and to “support further development of the field.” Despite its indisputable success in achieving these goals, the BMRB is facing serious funding challenges.
Since 1990, the BMRB has received continuous support from the National Library of Medicine (NLM), at the US National Institutes of Health, in the form of five-year grants. However, the BMRB obtained its latest grant renewal in 2009, accompanied by a sharp reduction in the funding level. It was also to be the last renewal, as the NLM announced that funding for all external centers would be phased out as their grants expire. Thus, as of today, the BMRB has no means of financial support after September 2014.
That editorial link above, from Nature Structural and Molecular Biology, also lists several other database projects formerly supported by the NLM. These are far enough outside my own field that I've never had call to use any of them as a medicinal chemist, but (as that last link shows) they are indeed used, and by plenty of researchers.
This problem won't be going away, since the volume of data produced these days shows no sign of any inflection points. Molecular genetics, protein biology, and structural biology in general are producing vast piles of material. Having as much of it as possible brought together and curated is clearly in the best interest of scientific research - but again, who pays?
Category: Biological News
August 16, 2012
How do enzymes work? People have been trying to answer that, in detail, for decades. There's no point in trying to do it without running down all those details, either, because we already know the broad picture: enzymes work by bringing reactive groups together under extremely favorable conditions so that reaction rates speed up tremendously. Great! But how do they bring those things together, how does their reactivity change, and what kinds of favorable conditions are we talking about here?
And some of this we know, too. You can see, in many enzyme active sites, that the protein is stabilizing the transition state of the reaction, lowering its energy so it's easier to jump over the hump to product. It wouldn't surprise me to see the energies of some starting materials being raised to effect that same barrier-lowering, although I don't know of any examples of that off the top of my head. But even this level of detail raises still more questions: what interactions are these that lower and raise these energies? How much of a price is paid, thermodynamically, to do these things, and how does that break out into entropic and enthalpic terms?
Some of those answers are known, to some degree, in some systems. But still more questions remain. One of the big ones has been the degree to which protein motion contributes to enzyme action. Now, we can see some big conformational changes taking place with some proteins, but what about the normal background motions? Intellectually, it makes sense that enzymes would have learned, over the millennia, to take advantage of this, since it's for sure that their structures are always vibrating. But proving that is another thing entirely.
Modern spectroscopy may have done the trick. This new paper from groups at Manchester and Oxford reports painstaking studies on B12-dependent ethanolamine ammonia lyase. Not an enzyme I'd ever heard of, that one, but "enzymes I've never heard of" is a rather roomy category. It's an interesting one, though, partly because it goes through a free-radical mechanism, and partly because it manages to speed things up by about a trillion-fold over the plain solution rate.
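A trillion-fold is worth translating into energy terms. Transition-state theory says a rate ratio corresponds to a difference in activation free energy of ΔΔG‡ = RT·ln(ratio), so the back-of-the-envelope version goes like this:

    import math

    R = 8.314          # J/(mol*K)
    T = 298.0          # K, room temperature
    rate_ratio = 1e12  # ~trillion-fold over the uncatalyzed solution rate

    ddG = R * T * math.log(rate_ratio)  # activation barrier lowering, J/mol
    print("Barrier lowering: %.0f kJ/mol (%.1f kcal/mol)"
          % (ddG / 1000, ddG / 4184))

That comes out to about 68 kJ/mol (roughly 16 kcal/mol) of transition-state stabilization - on the order of a handful of good hydrogen bonds, delivered in exactly the right geometry at exactly the right moment.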
Just how it does that has been a mystery. There's no sign of any major enzyme conformational change as the substrate binds, for one thing. But stopped-flow IR spectroscopy and ultrafast time-resolved IR measurements suggest that structural changes are going on at the time scale of the actual reaction. It's hard to see this stuff, but it appears to be there - so what is it? Isotopic labeling experiments seem to say that these IR peaks represent a change in the protein, not the B12 cofactor. (There are plenty of cofactor changes going on, too, and teasing these new peaks out of all that signal was no small feat).
So this could be evidence for protein motion being important right at the enzymatic reaction itself. But I should point out that not everyone's buying that. Nature Chemistry had two back-to-back articles earlier this year, the first advocating this idea, and the second shooting it down. The case against this proposal - which would modify transition-state theory as it's usually understood - is that there can be a number of conformations with different reactivities, some of which take advantage of quantum-mechanical tunneling effects, but all of which perform "traditional" transition-state chemistry, each in their own way. Invoking fast motions (on the femtosecond time scale) to explain things is, in this view, a layer of complexity too far.
I realize that all this can sound pretty esoteric - it does even to full-time chemists, and if you're not a chemist, you probably stopped reading quite a while ago. But we really do need to figure out exactly how enzymes do their jobs, because we'd like to be able to do the same thing. Enzymatic reactions are, in most cases, so vastly superior to our own ways of doing chemistry that learning to make them to order would revolutionize things in several fields at once. We know this chemistry can be done - we see it happen, and the fact that we're alive and walking around depends on it - but we can't do it ourselves. Yet.
Category: Biological News | Chemical News
August 2, 2012
Here's a useful overview of the public-domain medicinal chemistry databases out there (there's a quick programmatic-access sketch after the list). It covers the big three databases in detail:
BindingDB (quantitative binding data to protein targets).
ChEMBL (wide range of med-chem data, overlaps a bit with PubChem).
PubChem (data from NIH Roadmap screen and many others).
And these others:
Binding MOAD (literature-annotated PDB data).
ChemSpider (26 million compounds from hundreds of data sources).
DrugBank (data on 6700 known drugs).
GRAC and IUPHAR-DB (data on GPCRs, ion channels, and nuclear receptors, and ligands for all of these).
PDBbind (more annotated PDB data).
PDSP Ki (data from UNC's psychoactive drug screening program).
SuperTarget (target-compound interaction database).
Therapeutic Targets Database (database of known and possible drug targets).
ZINC (21 million commercially available compounds, organized by class, downloadable in various formats).
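As promised above, here's what programmatic access typically looks like - a minimal sketch against PubChem's PUG REST service, looking a compound up by name. The URL pattern follows PubChem's documented REST interface; if that has shifted since this writing, adjust accordingly:

    import json
    from urllib.request import urlopen

    def pubchem_properties(name):
        # Look up a compound by name; return its formula and molecular weight.
        url = ("https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/"
               "%s/property/MolecularFormula,MolecularWeight/JSON" % name)
        with urlopen(url) as resp:
            data = json.load(resp)
        return data["PropertyTable"]["Properties"][0]

    print(pubchem_properties("aspirin"))
    # e.g. {'CID': 2244, 'MolecularFormula': 'C9H8O4', 'MolecularWeight': ...}

Most of the others in the list (ChEMBL, BindingDB, ZINC) offer similar web interfaces or bulk downloads, so the same few lines, suitably re-pointed, get you surprisingly far.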
There is the irony of a detailed article on public-domain databases appearing behind the ACS paywall, but the literature is full of such moments. . .
Category: Biological News | Chemical News | Drug Assays
April 10, 2012
After that news of the Stanford professor who underwent just about every "omics" test known, I wrote that I didn't expect this sort of full-body monitoring to become routine in my own lifetime:
It's a safe bet, though, that as this sort of thing is repeated, that we'll find all sorts of unsuspected connections. Some of these connections, I should add, will turn out to be spurious nonsense, noise and artifacts, but we won't know which are which until a lot of people have been studied for a long time. By "lot" I really mean "many, many thousands" - think of how many people we need to establish significance in a clinical trial for something subtle. Now, what if you're looking at a thousand subtle things all at once? The statistics on this stuff will eat you (and your budget) alive.
I can now adduce some evidence for that point of view. The Institute of Medicine has warned that a lot of biomarker work is spurious. The recent Duke University scandal has brought these problems into higher relief, but there are plenty of less egregious (and not even deliberate) examples that are still a problem:
The request for the IOM report stemmed in part from a series of events at Duke University in which researchers claimed that their genomics-based tests were reliable predictors of which chemotherapy would be most effective for specific cancer patients. Failure by many parties to detect or act on problems with key data and computational methods underlying the tests led to the inappropriate enrollment of patients in clinical trials, premature launch of companies, and retraction of dozens of research papers. Five years after they were first made public, the tests were acknowledged to be invalid.
Lack of clearly defined development and evaluation processes has caused several problems, noted the committee that wrote the report. Omics-based tests involve large data sets and complex algorithms, and investigators do not routinely make their data and computational procedures accessible to others who could independently verify them. The regulatory steps that investigators and research institutions should follow may be ignored or misunderstood. As a result, flaws and missteps can go unchecked.
So (Duke aside) the problem isn't fraud so much as it is wishful thinking. And that's what statistical analysis is supposed to keep in check, but we've got to make sure that that's really happening. But to keep everyone honest, we also have to keep everything out there where multiple sets of eyes can check things over, and this isn't always happening:
Investigators should be required to make the data, computer codes, and computational procedures used to develop their tests publicly accessible for independent review and ensure that their data and steps are presented comprehensibly, the report says. Agencies and companies that fund omics research should require this disclosure and support the cost of independently managed databases to hold the information. Journals also should require researchers to disclose their data and codes at the time of a paper's submission. The computational procedures of candidate tests should be recorded and "locked down" before the start of analytical validation studies designed to assess their accuracy, the report adds.
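"Locked down" need not mean anything elaborate, either. Even just recording cryptographic hashes of the data files and analysis code before validation begins would catch silent after-the-fact tweaks. A minimal sketch of that idea (the file names are hypothetical):

    import hashlib
    import json

    def freeze_manifest(paths, out="analysis_manifest.json"):
        # Record a SHA-256 digest for each data/code file. Any later change
        # to the "locked down" procedure shows up as a hash mismatch.
        manifest = {}
        for path in paths:
            h = hashlib.sha256()
            with open(path, "rb") as fh:
                for chunk in iter(lambda: fh.read(65536), b""):
                    h.update(chunk)
            manifest[path] = h.hexdigest()
        with open(out, "w") as fh:
            json.dump(manifest, fh, indent=2, sort_keys=True)
        return manifest

    freeze_manifest(["expression_data.csv", "classifier.py"])  # hypothetical files

None of which guarantees the analysis is right, of course - only that it's the same analysis that was promised.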
This is (and has been for some years) a potentially huge field of medical research, with huge implications. But it hasn't been moving forward as quickly as everyone thought it would. We have to resist the temptation to speed things up by cutting corners, consciously or unconsciously.
Category: Biological News | Clinical Trials
April 6, 2012
We've talked about the NIH's Molecular Libraries Initiative here a few times, mostly in the context of whether it reached its goals, and what might happen now that it looks as if it might go away completely. That makes this item a little surprising, doesn't it?
Almost a decade ago, the US National Institutes of Health kicked off its Molecular Libraries Initiative to provide academic researchers with access to the high-throughput screening tools needed to identify new therapeutic compounds. Europe now seems keen on catching up.
Last month, the Innovative Medicines Initiative (IMI), a €2 billion ($2.6 billion) Brussels-based partnership between the European Commission and the European Federation of Pharmaceutical Industries and Associations (EFPIA), invited proposals to build a molecular screening facility for drug discovery in Europe that will combine the inquisitiveness of academic scientists with industry know-how. The IMI's call for tenders says the facility will counter “fragmentation” between these sectors.
I can definitely see the worth in that part of the initiative. Done properly, Screening Is Good. But they'll have to work carefully to make sure that their compound collection is worth screening, and to format the assays so that the results are worth looking at. Both those processes (library generation and high-throughput screening) are susceptible (are they ever) to "garbage in, garbage out" factors, and it's easy to kid yourself into thinking that you're doing something worthwhile just because you're staying so busy and you have so many compounds.
There's another part of this announcement that worries me a bit, though. Try this on for size:
Major pharmaceutical companies have more experience with high-throughput screening than do most academic institutes. Yet companies often limit tests of their closely held candidate chemicals to a fraction of potential disease targets. By pooling chemical libraries and screening against a more diverse set of targets—and identifying more molecular interactions—both academics and pharmaceutical companies stand to gain, says Hugh Laverty, an IMI project manager.
Well, sure, as I said above, Screening Is Good, when it's done right, and we do indeed stand to learn things we didn't know before. But is it really true that we in the industry only look at a "fraction of potential disease targets"? This sounds like someone who's keen to go after a lot of the tough ones: the protein-protein interactions, protein-nucleic acid interactions, and even further afield. Actually, I'd encourage these people to go for it - but with eyes open and brain engaged. The reason that we don't screen against such things as often is that hit rates tend to be very, very low, and even those are full of false positives and noise. In fact, for many of these things, "very, very low" is not distinguishable from "zero". Of course, in theory you just need one good hit, which is why I'm still encouraging people to take a crack. But you should do so knowing the odds, and be ready to give your results some serious scrutiny. If you think that there must be thousands of great things out there that the drug companies are just too lazy (or blinded by the thought of quick profits elsewhere) to pursue, you're not thinking this through well enough.
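To put some numbers behind that warning: suppose only a tiny fraction of a library is genuinely active against a tough target, and the assay has even a modest false-positive rate. Bayes' rule then tells you what fraction of your "hits" are real. The figures below are invented, but the shape of the answer is the point:

    def hit_ppv(true_hit_rate, sensitivity, false_positive_rate):
        # P(genuine hit | assay calls it a hit), straight from Bayes' rule
        tp = true_hit_rate * sensitivity
        fp = (1 - true_hit_rate) * false_positive_rate
        return tp / (tp + fp)

    # A tough target: 1 in 100,000 compounds is a genuine hit; the assay
    # catches 80% of them but false-positives on 0.5% of everything else
    ppv = hit_ppv(1e-5, 0.80, 0.005)
    print("Fraction of screening 'hits' that are real: %.2f%%" % (100 * ppv))

That comes out around 0.16% - better than six hundred junk hits for every real one - which is why "very, very low" hit rates can be so hard to distinguish from zero.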
You might say that what these efforts are looking for are tool compounds, not drug candidates. And I think that's fine; tool compounds are valuable. But if you read that news link in the first paragraph, you'll see that they're already talking about how to manage milestone payments and the like. That makes me think that someone, at any rate, is imagining finding valuable drug candidates from this effort. The problem with that is that if you're screening all the thousands of drug targets that the companies are ignoring, you're by definition working with targets that aren't well validated. So any hits that you do find (and there may not be many, as said above) will still be against something that has a lot of work yet to be done on it. It's a bit early to be wondering how to distribute the cash rewards.
And if you're screening against validated targets, the set of those that don't have any good chemical matter against them already is smaller (and it's smaller for a reason). It's not that there aren't any, though: I'd nominate PTP1B as a well-defined enzymatic target that's just waiting for a good inhibitor to come along to see if it performs as well in humans as it does in, say, knockout mice. (It's both a metabolic target and a potential cancer target as well). Various compounds have been advanced over the years, but it's safe to say that they've been (for the most part) quite ugly and not as selective as they could have been. People are still whacking away at the target.
So any insight into decent-looking selective phosphatase inhibitors would be most welcome. And most unlikely, damn it all, but all great drug ideas are most unlikely. The people putting this initiative together will have a lot to balance.
Category: Academia (vs. Industry) | Biological News | Drug Assays
March 30, 2012
Back in 2009 I wrote about a paper that found a number of small (and ugly) molecules which affected the Hedgehog signaling pathway. At the time, I asked if anyone had done any selectivity studies with them, or looked for any SAR around them, because they didn't look very promising to me.
I'm glad to report that there's a follow-up from the same lab, and it's a good one. They've spent the last two years chasing these things down, and it appears that one series (the HPI-4 compound in that first link, which is open-access) really does have a specific molecular target (dynein).
There are a number of good experiments in the paper showing how they narrowed that down, and the whole thing is a good example of just how granular cellular biology can get: this pathway out of thousands, that particular part of the process, which turns out to be this protein because of the way it interacts in defined ways with a dozen others, and moreover, this particular binding site on that one protein. It's worth reading to see how they chased all this down, but I'll take you right to the ending and say that it's the ATP-binding site on dynein that looks like the target.
Collectively, these results indicate that ciliobrevins are specific, reversible inhibitors of disparate cytoplasmic dynein-dependent processes. Ciliobrevins do not perturb cellular mechanisms that are independent of dynein function, including actin cytoskeleton organization and the mitogen-activated protein kinase and phosphoinositol-3-kinase signalling pathways. . .The compounds do not broadly target members of the AAA+ ATPase family either, as they have no effect on p97-dependent degradation of endoplasmic-reticulum-associated proteins or Mcm2–7-mediated DNA unwinding. . .Our studies establish ciliobrevins as the first small molecules known specifically to inhibit cytoplasmic dynein in vitro and in live cells.
So congratulations to everyone involved, at Stanford, Rockefeller, and Northwestern. These ciliobrevins are perfect examples of tool compounds. This is how academic science is supposed to work, and now we can perhaps learn things about dynein that no one has been able to learn yet, and that will be knowledge that no one can take away once we've learned it.
Category: Biological News
March 23, 2012
I wanted to mention this news, since it's really the most wildly advanced form of "personalized medicine" that the world has yet seen. As detailed in this paper, Stanford professor Michael Snyder spent months taking multiple, powerful, wide-ranging looks at his own biochemistry: genomic sequences, metabolite levels, microRNAs, gene transcripts, pretty much the whole expensive high-tech kitchen sink. No one's ever done this to one person over an extended period - heck, until the last few years, no one had even been able to - so Snyder and his team were interested to see what might come up. A number of odd things did:
Snyder had a cold at the first blood draw, which allowed the researchers to track how a rhinovirus infection alters the human body in perhaps more detail than ever before. The initial sequencing of his genome had also showed that he had an increased risk for type 2 diabetes, but he initially paid that little heed because he did not know anyone in his family who had had the disease and he himself was not overweight. Still he and his team decided to closely monitor biomarkers associated with the diabetes, including insulin and glucose pathways. The scientist later became infected with respiratory syncytial virus, and his group saw that a sharp rise in glucose levels followed almost immediately. "We weren't expecting that," Snyder says. "I went to get a very fancy glucose metabolism test at Stanford and the woman looked at me and said, 'There's no way you have diabetes.' I said, 'I know that's true, but my genome says something funny here.' "
A physician later diagnosed Snyder with type 2 diabetes, leading him to change his diet and increase his exercise. It took 6 months for his glucose levels to return to normal. "My interpretation of this, which is not unreasonable, is that my genome has me predisposed to diabetes and the viral infection triggered it," says Snyder, who acknowledges that no known link currently exists between type 2 diabetes and infection.
There may well be a link, but it may well also only be in Michael Snyder. Or perhaps in him and the (x) per cent of the population that share certain particular metabolic and genomic alignments with him. Since this is an N of 1 experiment if ever there was one, we really have no idea. It's a safe bet, though, that as this sort of thing is repeated, we'll find all sorts of unsuspected connections. Some of these connections, I should add, will turn out to be spurious nonsense, noise and artifacts, but we won't know which are which until a lot of people have been studied for a long time. By "lot" I really mean "many, many thousands" - think of how many people we need to establish significance in a clinical trial for something subtle. Now, what if you're looking at a thousand subtle things all at once? The statistics on this stuff will eat you (and your budget) alive.
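To put a rough number on that multiple-comparisons problem, here's a minimal back-of-the-envelope simulation (in Python, with everything in it invented for illustration): screen a thousand markers that do nothing at all, at the usual p < 0.05 cutoff, and you should still expect around fifty "hits".

```python
import random

random.seed(0)

N_MARKERS = 1000   # "a thousand subtle things" measured at once
ALPHA = 0.05       # the conventional significance cutoff

# For markers with no real effect, p-values are uniform on [0, 1],
# so a purely null screen can be modeled by drawing them directly.
pvals = [random.random() for _ in range(N_MARKERS)]

naive_hits = sum(p < ALPHA for p in pvals)
bonferroni_hits = sum(p < ALPHA / N_MARKERS for p in pvals)

print(f"Nominally significant markers: {naive_hits}")        # around 50, all noise
print(f"Surviving Bonferroni correction: {bonferroni_hits}")  # almost always 0
```

And the sample sizes you need to keep real signals alive after that kind of correction are exactly why the "many, many thousands" of subjects come into it.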
But all of these technologies are getting cheaper. It's not around the corner, but I can imagine a day when people have continuous blood monitoring of this sort, a constant metabolic/genomic watchdog application that lets you know how things are going in there. Keep in mind, though, that I have a very lively imagination. I don't expect this (for better or worse) in my own lifetime. The very first explorers are just hacking their way into thickets of biochemistry larger and more tangled than the Amazon jungle - it's going to be a while before the shuttle vans start running.
Category: Biological News
January 27, 2012
Roche is not only a big drug company, it's a big diagnostics company. And that's what's driving their unsolicited bid for Illumina, a gene-sequencing company from San Diego. Illumina has been one of the big players in the "How quickly and cheaply can we sequence a person's entire genome" game, and apparently Roche believes that there's something in it for them.
But as that Reuters link above shows, a lot of other people don't agree, and would rather partner than acquire (Chris Viehbacher, CEO of Sanofi, seems to have been waiting for the opportunity to unburden himself of thoughts to that effect). He may well be right. Sequencing has been a can-you-top-this field for some time, and I don't think that the process is finished yet. What if you buy a technology that's superseded before it has the time to pay off? What if the market for sequencing doesn't get as large, as quickly, as you're hoping? Those were Illumina's worries, and now they're going to be Roche's; you can't buy the promise without buying those, too.
Matthew Herper at Forbes is having very similar thoughts, and points out that Roche has done this sort of thing before. For now, we'll see what Illumina might be able to come up with to avoid being Roched.
Category: Biological News | Business and Markets
January 18, 2012
If you've been looking around the literature over the last couple of years, you'll have seen an awful lot of excitement about epigenetic mechanisms. (Here's a whole book on that very subject, for the hard core). Just do a Google search with "epigenetic" and "drug discovery" in it, any combination you like, and then stand back. Articles, reviews, conferences, vendors, journals, startups - it's all there.
Epigenetics refers to the various paths - and there are a bunch of them - to modify gene expression downstream of just the plain ol' DNA sequence. A lot of these are, as you'd imagine, involved in the way that the DNA itself is wound (and unwound) for expression. So you see enzymes that add and remove various switches on the outside of various histone proteins. You have histone acetyltransferases (HATs) and histone deacetylases (HDACs), methyltransferases and demethylases, and so on. Then there are bromodomains (the binding sites for those acetylated histones) and several other mechanisms, all of which add up to plenty o' drug targets.
Or do they? There are HDAC compounds out there in oncology, to be sure, and oncology is where a lot of these other mechanisms are being looked at most intensively. You've got a good chance of finding aberrant protein expression levels in cancer cells, you have a lot of unmet medical need, a lot of potential different patient populations, and a greater tolerance for side effects. All of that argues for cancer as a proving ground, although it's certainly not the last word. But in any therapeutic area, people are going to have to wrestle with a lot of other issues.
Just looking over the literature can make you both enthusiastic and wary. There's an awful lot of regulatory machinery in this area, and it's for sure that it isn't there for jollies. (You'd imagine that selection pressure would operate pretty ruthlessly at the level of gene expression). And there are, of course, an awful lot of different genes whose expression has to be regulated, at different levels, in different cell types, at different phases of their development, and in response to different environmental signals. We don't understand a whole heck of a lot of the details.
So I think that there will be epigenetic drugs coming out of this burst of effort, but I don't think that they're going to be exactly the most rationally designed things we've ever seen. That's fine - we'll take drug candidates where we can get them. But as for when we're actually going to understand all these gene regulation pathways, well. . .
Category: Biological News | Cancer | Drug Development
January 17, 2012
There are small drug firms and there are small drug firms - if you know what I mean. Which category is Warp Drive Bio going to fall into?
If you've never heard of them - and that name is rather memorable - then don't worry, they're new. The founders are big names on the industry/academic drug discovery border: Greg Verdine, Jim Wells, and George Church. Here's the rundown:
Warp Drive Bio is driving the reemergence of natural products in the era of genomics to create breakthrough treatments that make an important difference in the lives of patients. Built upon the belief that nature is the world's most powerful medicinal chemist, Warp Drive Bio is deploying a battery of state-of-the-art technologies to access powerful drugs that are now hidden within microbes. Key to the Warp Drive Bio approach is the company's proprietary "genomic search engine" and customized search queries that enable hidden natural products to be revealed on the basis of their distinctive genomic signature.
Interestingly, they launched with a deal with Sanofi already in place. I've been hearing about cryptic natural products for a while, and while I haven't seen anything that's knocked me over, it's not prima facie a crazy idea. But it is going to be a tricky one to get to work, I'd think. After all, if these natural products were so active and useful, might they not have a bit higher profile, genomically and metabolically? I'm willing to be convinced otherwise by some data; perhaps we'll see some as the Sanofi collaboration goes on. Anyone with more knowledge in this area, please add it in the comments - maybe we can all learn something.
One other question: with Verdine founding another high-profile company, does this say something about how his last one, Aileron, is doing in the "stapled peptide" business? Or not?
Category: Biological News | Business and Markets
January 4, 2012
The topic of whether stem-cell therapies are overhyped - OK, let me show my cards, the topic of just how overhyped they are - last came up around here in November, when Geron announced that they were getting out of the business. And yesterday had a good example of why people tend to hold their noses and fan away the fumes whenever a company press-releases something in this area.
I'm talking about Osiris Therapeutics, who have been working for some time on a possible stem cell therapy (called Prochymal) for Type I diabetes. That's certainly not a crazy idea, although it is an ambitious one - after all, you get Type I when your insulin-producing cells die off, so why not replace them? Mind you, we're not quite sure why your insulin-producing cells die off in the first place, so there's room to wonder if the newly grown replacements, if they could be induced to exist, might not suffer a similar fate. But that's medical research, and we're not going to figure these things out without trying them.
This latest work, though, does not look fit to advance anyone's understanding of diabetes or of stem cells, although it might help advance one's understanding of human nature and of the less attractive parts of the stock market. Osiris, you see, issued a press release yesterday (courtesy of FierceBiotech) on the one-year interim analysis of their trial. The short form: they have nothing so far. The release goes on for a bit about how well-tolerated the stem-cell therapy is, but unfortunately, one reason for that clean profile might be that nothing is happening at all. No disease markers for diabetes have improved, although they say that there is a trend towards fewer hypoglycemic events. (I think it's irresponsible to talk about "trends" of this sort in a press release, but such a policy would leave many companies without much to talk about at all).
It's only when you look at Osiris and their history that you really start to understand what's going on. You see, this isn't Prochymal's first spin around the track. As Adam Feuerstein has been chronicling, the company has tried this stem cell preparation against a number of other conditions, and it's basically shown the same thing every time: no adverse effects, and no real positive ones, either. Graft-versus-host disease, cardiac events, cartilage repair, Crohn's disease - nothing happens, except press releases. You'd never know anything about this history if you just came across the latest one, though. The company's web site isn't a lot of help, either: you'd think that Prochymal is advancing on all fronts, when (from what I can see) it's not going much of anywhere.
So if you're looking for a reason to hold on to your wallet when the phrase "stem cell therapy" comes up, look no further. The thing is, some stem cell ideas are eventually going to work - you'd think - and when they do, they're going to be very interesting indeed. You'd think. But are any of the real successes going to come out of fishing expeditions like this? You don't want your clinical research program to be so hard to distinguish from a dose-and-hope-and-sell-some-stock strategy - do you?
Category: Biological News | Business and Markets
November 17, 2011
Just how different is one brain cell from another? I mean, every cell in our body has the same genome, so the differences in type (various neurons, glial cells) must be due to expression during development. And the differences between individual members of a class must all be due to local environment and growth - right?
Maybe not. I wasn't aware of this myself, but there's a growing body of evidence suggesting that neurons might actually differ more at the genomic level than you'd imagine. A lot of this work has come from the McConnell lab at the Salk Institute, where they've been showing that mouse precursor cells can develop into neurons with various chromosomal changes along the way. Rather than a defect (or an experimental artifact), he has hypothesized that this is a normal feature that helps to form the huge neuronal diversity seen in brain tissue.
His latest work used induced pluripotent cells transformed into neurons. Taking these cells from two different people, he found that the resulting neurons had highly variable sequences, with all sorts of insertions, deletions, and transpositions. (The precursor cells had some, too, but different ones, suggesting that the neural cell changes happened along the way). And this recent paper suggests that neurons have an unusual number of transposons in their DNA, which fits right in with McConnell's results.
The implication is that human brains are mosaics of mosaics, at the cell and sequence levels. And that immediately makes you wonder if these processes are involved in disease states (hard to imagine how they wouldn't be). The problem is, it's not too easy to get ahold of well-matched and well-controlled human brain tissue samples to check these ideas. But that's the obvious next step - take several similar-looking neurons and sequence them all the way. Obvious, but very difficult: single-cell sequencing is not so easy, to start with, and how exactly do you grab those single neurons out of the tangle of nerve tissue to sequence them? Someone's going to do this, but it's going to be a chore. (Note: McConnell's group was able to do the pluripotent-cell-derived stuff a bit more easily, since those come out clonal and give you more to work with).
Now, the idea that neurons are taking advantage of chromosomal instability to this degree is a little unnerving. That's because when you think of chromosomal instability, you think of cancer cells (see also the link in that last paragraph). It's interesting, as an aside, to note that those last two links are to posts from this blog in 2002 - next year will mark ten years of this stuff! And I also enjoy seeing my remark from back then about "With headlines like this, I can't think why I'm not pulling in thousands of hits a day", since these days I'm running close to 20K/day as it is.
So, on some level, are our brains akin to tumor tissue? You really wonder why brain cancer isn't more common than it is, if these theories are correct. There may well be ways to get "controlled chromosomal instability", though, as opposed to the wild-and-woolly kind, but even the controlled kind is a bit scary. And all this makes me think of a passage from an old science fiction story by James Blish, "This Earth of Hours". The Earthmen have encountered a bizarre civilization that seems to involve many of the star systems toward the interior of the galaxy, and a captured human has informed them that these aliens apparently have no brains per se:
"No brains," the man from the Assam Dragon insisted. "Just lots of ganglia. I gather that's the way all of the races of the Central Empire are organized, regardless of other physical differences. That's what they mean when they say we're all sick - hadn't you realized that?"
"No," 12-Upjohn said in slowly dawning horror. "You had better spell it out."
"Why, they say that's why we get cancer. They say that the brain is the ultimate source of all tumors, and is itself a tumor. They call it 'hostile symbiosis.' "
"In the long run. Races that develop them kill themselves off. Something to do with solar radiation; animals on planets of Population II stars develop them, Population I planets don't."
The things you pick up reading 1950s science fiction. Blish, by the way, was an odd sort. He had a biology degree, and a liking for James Joyce, Oswald Spengler, and Richard Strauss. All of these things worked their way into his stories, which were often much better and more complex than they strictly needed to be. Here's a PDF of "This Earth of Hours", if you're interested - it's not a perfect transcription, though; you'll have to take my word for it that the original has no grammatical errors. It's a good illustration of Blish's style - what appears at first to be a pulpy space-war story turns out to have a lot of odd background dropped into it, along with speculations like the above. And for someone who didn't always write a lot of descriptive prose, preferring to let philosophical points drive his plots, I find Blish's stories strangely vivid, particularly the relatively actionless ones like "Beep" or "Common Time". He's pretty thoroughly out of print these days, but you can find the paperbacks used, and many of the titles as e-books. Now if you're looking for someone who always lets philosophical points drive his stories, then you'll be wanting some Borges. (As it happens, I've had occasion to discuss that particular translation with an Argentine co-worker. But this is not a literary blog, not for the most part, so I'll stop there!)
Category: Biological News | Book Recommendations | Cancer | The Central Nervous System
November 16, 2011
It's messy inside a cell. The closer we look, the more seems to be going on. And now there's a closer look than ever at the state of proteins inside a common human cell line, and it does nothing but increase your appreciation for the whole process.
The authors have run one of these experiments that (in the days before automated mass spec techniques and huge computational power) would have been written off as a proposal from an unbalanced mind. They took cultured human U2OS cells, lysed them to release their contents, and digested those with trypsin. This gave, naturally, an extremely complex mass of smaller peptides, but these, the lot of them, were fractionated out and run through the mass spec machines, with use of ion-trapping techniques and mass-label spiking to get quantification. The whole process is reminiscent of solving a huge jigsaw puzzle by first running it through a food processor. The techniques for dealing with such massive piles of mass spec/protein sequence data, though, have improved to the point where this sort of experiment can now be carried out, although that's not to say that it isn't still a ferocious amount of work.
What did they find? These cells are expressing on the order of at least ten thousand different proteins (well above the numbers found in previous attempts at such quantification). Even with that, the authors have surely undercounted membrane-bound proteins, which weren't as available to their experimental technique, but they believe that they've gotten a pretty good read of the soluble parts. And these proteins turn out to be expressed over a huge dynamic range, from a few dozen copies (or less) per cell up to tens of millions of copies.
As you'd figure, those copy numbers represent very different sorts of proteins. It appears, broadly, that signaling and regulatory functions are carried out by a host of low-expression proteins, while the basic machinery of the cell is made of hugely well-populated classes. Transcription, translation, metabolism, and transport are where most of the effort seems to be going - in fact, the most abundant proteins are there to deal with the synthesis and processing of proteins. There's a lot of overhead, in other words - it's like a rocket, in which a good part of the fuel has to be there in order to lift the fuel.
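Just to make that dynamic range concrete, here's a toy calculation - the copy numbers below are invented for illustration, not taken from the paper's dataset:

```python
import math

# Invented per-cell copy numbers, for illustration only; the real
# dataset covers on the order of ten thousand proteins.
copies_per_cell = {
    "ribosomal protein":    12_000_000,
    "translation factor":    3_000_000,
    "metabolic enzyme":        400_000,
    "transporter":              90_000,
    "protease":                  2_000,
    "kinase":                      500,
    "transcription factor":         40,
}

lo = min(copies_per_cell.values())
hi = max(copies_per_cell.values())
print(f"Spread: {hi / lo:,.0f}-fold ({math.log10(hi / lo):.1f} orders of magnitude)")

for name, n in sorted(copies_per_cell.items(), key=lambda kv: -kv[1]):
    print(f"{name:22s} {n:>12,}")
```

Even this small made-up table spans more than five orders of magnitude, which is the sort of spread that makes quantitative proteomics such a hard analytical problem.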
So that means that most of our favored drug targets are actually of quite low abundance - kinases, proteases, hydrolases of all sorts, receptors (most likely), and so on. We like to aim for regulatory choke points and bottlenecks, and these are just not common proteins - they don't need to be. In general (and this also makes sense) the proteins that have a large number of homologs and family members tend to show low copy numbers per variant. Ribosomal machinery, on the other hand - boy, is there a lot of ribosomal stuff. But unless it's bacterial ribosomes, that's not exactly a productive drug target, is it?
It's hard to picture what it's like inside a cell, and these numbers just make it look even stranger. What's strangest of all, perhaps, is that we can get small-molecule drugs to work under these conditions. . .
Category: Analytical Chemistry | Biological News
November 15, 2011
Are stem cells overhyped? That topic has come up around here several times. But there have been headlines and more headlines, and breathless reports of advances, some of which might be working out, and many of which are never heard from again. (This review, just out today, attempts to separate reality from hype).
Today brings a bit of disturbing news. Geron, a company long associated with stem cell research, the company that started the first US trial of embryonic stem cell therapy, has announced that they're exiting the field. Now, a lot of this is sheer finances. They have a couple of oncology drugs in the clinic, and they need all the cash they have to try to get them through. But still, you wonder - if their stem cell trial had been going really well, wouldn't the company have gotten a lot more favorable publicity and opportunities for financing by announcing that? As things stand, we don't know anything about the results at all; Geron is looking for someone to take over the whole program.
As it happens, there's another stem-cell report today, from a study in the Lancet of work that was just presented at the AHA. This one involves injecting heart attack patients with cultured doses of their own cardiac stem cells, and it does seem to have helped. It's a good result, done in a well-controlled study, and could lead to something very useful. But we still have to see if the gains continue, what the side effects might be, whether there's any advantage to doing this over other cell-based therapies, and so on. That'll take a while, although this looks to be on the right track. But the headlines, as usual, are way out in front of what's really happening.
No, I continue to think that stem cells are a very worthy subject of research. But years, quite a few years, are going to be needed before treatments using them can become a reality. Oh, and billions of dollars, too - let's not forget that. . .
Category: Biological News | Business and Markets | Cancer | Cardiovascular Disease | Press Coverage
October 18, 2011
Under the "Who'da thought?" category, put this news about cyclodextrin. For those outside the field, that's a ring of glucose molecules, strung end to end like a necklace. (Three-dimensionally, it's a lot more like a thick-cut onion ring - see that link for a picture). The most common form, beta-cyclodextrin, has seven glucoses. That structure gives it some interesting properties - the polar hydroxy groups are mostly around the edges and outside surface, while the inside is more friendly to less water-soluble molecules. It's a longtime additive in drug formulations for just that purpose - there are many, many examples known of molecules that fit into the middle of a cyclodextrin in aqueous solution.
But as this story at the Wall Street Journal shows, it's not inert. A group studying possible therapies for Niemann-Pick C disease (a defect in cholesterol storage and handling) was going about this the usual way - one group of animals was getting the proposed therapy, while the other was just getting the drug vehicle. But this time, the vehicle group showed equivalent improvement to the drug-treatment group.
Now, most of the time that happens, it's because neither treatment worked; that'll give you equivalence, all right. But in this case, both groups showed real improvement. Further study showed that the cyclodextrin derivative used in the dosing vehicle was the active agent. And that's doubly surprising, since one of the big effects seen was on cholesterol accumulation in the central neurons of the rodents. It's hard to imagine that a molecule as big (and as polar-surfaced) as cyclodextrin could cross into the brain, but it's also hard to see how you could have these effects without that happening. It's still an open question - see that PLoS One paper link for a series of hypotheses. One way or another, this will provide a lot of leads and new understanding in this field:
Although the means by which CD exerts its beneficial effects in NPC disease are not understood, the outcome of CD treatment is clearly remarkable. It leads to delay in onset of clinical signs, a significant increase in lifespan, a reduction in cholesterol and ganglioside accumulation in neurons, reduced neurodegeneration, and normalization of markers for both autophagy and neuro-inflammation. Understanding the mechanism of action for CD will not only provide key insights into the cholesterol and GSL dysregulatory events in NPC disease and related disorders, but may also lead to a better understanding of homeostatic regulation of these molecules within normal neurons. Furthermore, elucidating the role of CD in amelioration of NPC disease will likely assist in development of new therapeutic options for this and other fatal lysosomal disorders.
Meanwhile, the key role of cholesterol in the envelope of HIV has led to the use of cyclodextrin as a possible antiretroviral. This looks like a very fortunate intersection of a wide-ranging, important biomolecule (cholesterol) with a widely studied, well-tolerated complexing agent for it (cyclodextrin). It'll be fun to watch how all this plays out. . .
Category: Biological News | Infectious Diseases | The Central Nervous System | Toxicology
September 22, 2011
As promised, today we have a look at a possible bombshell in longevity research and sirtuins. Again. This field is going to make a pretty interesting book at some point, but it's one that I'd wait a while to start writing, because the dust is hanging around pretty thickly.
Some background: in 1999, the Guarente lab at MIT reported that Sir2 was a longevity gene in yeast. In 2001, they extended these results to C. elegans nematodes, lengthening lifespan by 15 to 50% through overexpression of the gene. And in 2004, Stephen Helfand's lab at Brown reported similar results in Drosophila fruit flies. Since then, the sirtuin field has been the subject of more publications than anyone would care to count. The sirtuins are involved, it turns out, in regulating histone acetylation, which regulates gene expression, so there aren't many possible effects they might have that you can rule out. Like many longevity-associated pathways, they seem to be tied up somehow with energy homeostasis and response to nutrients, and one of the main hypotheses has been that they're somehow involved in the (by now irrefutable) life-extending effects of caloric restriction.
As an aside, you may have noticed that almost every news story about something that extends life gets tied to caloric restriction somehow. There are two good reasons for that - one is, as stated, that a lot of longevity seems - reasonably enough - to be linked to metabolism, and the other is that caloric restriction is by far the most solid of all the longevity effects that can be shown in animal models.
I'd say that the whole sirtuin story has split into two huge arguments: (1) arguments about the sirtuin genes and enzymes themselves, and (2) arguments about the compounds used to investigate them, starting with resveratrol and going through the various sirtuin activators reported by Sirtris, both before and after their (costly) acquisition by GlaxoSmithKline. That division gets a bit blurry, since it's often those compounds that have been used to try to unravel the roles of the sirtuin enzymes, but there are ways to separate the controversies.
I've followed the twists and turns of argument #2, and it has had plenty of those. It's not safe to summarize, but if I had to, I'd say that the closest thing to a current consensus is that (1) resveratrol is a completely unsuitable molecule as an example of a clean sirtuin activator, (2) the earlier literature on sirtuin activation assays is now superseded, because of some fundamental problems with the assay techniques, and (3) agreement has not been reached on what compounds are suitable sirtuin activators, and what their effects are in vivo. It's a mess, in other words.
But what about argument #1, the more fundamental one about what sirtuins are in the first place? That's what these latest results address, and boy, do they ever not clear things up. There has been persistent talk in the field that the original model-organism life extension effects were difficult to reproduce, and now two groups (those of David Gems and Linda Partridge) at University College, London (whose labs I most likely walked past last week) have re-examined these. They find, on close inspection, that they cannot reproduce them. The effects in the LG100 strain of C. elegans appear to be due to another background mutation in the dyf family, which is also known to have effects on lifespan. Another mutant strain, NL3909, shows a similar problem: its lifespan decreases on outcrossing, although the Sir2 levels remain high. A third long-lived strain, DR1786, has a duplicated section of its genome that includes Sir2, but knocking that down with RNA interference has no effect on its lifespan. Taken together, the authors say, the correlation of Sir2 with lifespan in nematodes appears to be an artifact.
How about the fruit flies? This latest paper reproduces the lifespan effects, but finds that they seem to be due to the expression system that was used to increase dSir2 levels. When the same system is used to overexpress other genes, lifespan is also increased. They then used another expression vector to crank up the fly Sir2 by over 300%, but those flies did not show an extension in lifespan, even under a range of different feeding conditions. They also went the other way, examining mutants with their sirtuin expression knocked down by a deletion in the gene. Those flies show no different response to caloric restriction, indicating that Sir2 isn't part of that effect, either - in direct contrast to the effects reported in 2004 by Helfand.
It's important to keep in mind that these aren't the first results of this kind. Others had reported problems with sirtuin effects on lifespan (or sirtuin ties to caloric restriction effects) in yeast, and as mentioned, this had been the stuff of talk in the field for some time. But now it's all out on the table, a direct challenge.
So how are the original authors taking it? Guarente, who to his credit has been right out in the spotlight throughout the whole story, has a new paper of his own, published alongside the UCL results. They partially agree, saying that there does indeed appear to be an unlinked mutation in the LG100 strain that's affecting lifespan. But they disagree that sirtuin overexpression has no effect. Instead of their earlier figure of 15 to 50%, they're now claiming a 10 to 14% extension - not as dramatic, for sure, but the key point for the argument is that it's not zero.
And as for the fruit flies, Helfand at Brown is pointing out that in 2009, his group reported a totally different expression system to increase dSir2, which also showed longevity effects (see their Figure 2 in that link). This work, he's noting, is not cited in the new UCL paper, and from his tone in interviews, he's not too happy about that. That's leading to coverage from the "scientific feud!" angle - and it's not that I think that's inaccurate, but it's not the most important part of the story. (Another story with follow-up quotes is here).
So what are the most important parts? I'd nominate these:
1. Are sirtuins involved in lifespan extension, or not? And by that, I mean not only in model organisms, but are they subject to pharmacological intervention in the field of human aging?
2. What are the other effects of sirtuins, outside of aging? Diabetes, cancer, several other important areas touch on this whole metabolic regulation question: what are the effects of sirtuins in these?
3. What is the state of our suite of tools to answer these questions? Resveratrol may or may not do interesting things in humans or other organisms, but it's not a suitable tool compound to unravel the basic mechanisms. Do we have such compounds, from the reported Sirtris chemical matter or from other sources? And on the biology side, how useful are the reported overexpression and deletion strains of the various model organisms, and how confident are we about drawing conclusions from their behavior?
4. Getting more specific to drug discovery, are sirtuin regulator compounds drug candidates or not? Given the disarray in the basic biology, they're at the very least quite speculative. GlaxoSmithKline is the company most immediately concerned with this question, since they spent over $700 million to buy Sirtris, and have been spending money in the clinic ever since evaluating their more advanced chemical matter. And that brings up the last question. . .
5. What does GSK think of that deal now? Did they jump into an area of speculative biology too quickly? Or did they make a bold deal that put them out ahead in an important field?
I do not, of course, have answers to any of these. But the fact that we're still asking these questions ten years after the sirtuin story started tells you that this is both an important and interesting area, and a tricky one to understand.
Category: Aging and Lifespan | Biological News
September 21, 2011
This will be the subject of a longer post tomorrow, but I wanted to alert people to some breaking news in the sirtuin/longevity saga. It now appears that the original 2001 report of longevity effects of Sir2 in the C. elegans model, which was the starting gun of the whole story, is largely incorrect. That would help to explain the conflicting results in this area, wouldn't it? Topics for discussion in tomorrow's post will include, but not be limited to: what else do sirtuins do? Are those results reproducible? What can we now expect to come out of pharma research in the field? And what does GSK now think about its investment in Sirtris?
Category: Aging and Lifespan | Biological News
August 2, 2011
And while we're on the topic of Merck, I note that they're closing their RNAi facility in Mission Bay, the former Sirna. That was a pretty big deal when it took place, wasn't it? The piece linked to in that earlier post also talks about the investment that Merck was making in the very facility that they're now closing down, but if I got paid every time that sort of thing happened in this industry, I wouldn't have to work.
This isn't going to help the Bay Area biotech/pharma environment, nor the atmosphere around RNA interference as a drug platform. Merck says that they're not getting out of the field, and that they've integrated the technology for use in their drug discovery efforts. But they paid a billion dollars for Sirna, which is not the sort of up-front price you generally see for add-on technologies that can help you discover other drugs. At the time, it looked like Merck was hoping directly for some new therapeutics, and we still don't know when (or if) those will emerge.
There's another player in the field right next door to me here in Cambridge, Alnylam. Not long after I last wrote about the state of the RNAi area, they actually invited me over to talk about what they're up to - a bit unusual, since I'm not just a blogger, but a scientist working at another company, which is a combo that's caused some confusion more than once. But they gave me a nice overview of what they're working on, and it was clear that they understand the risks involved and are doing whatever they can to get something that works out the door. They have several approaches to the drug-delivery problem that besets the RNA world, and are taking good shots in several different disease areas.
But they (and the other RNAi shops) need more money to go on, which in this environment means partnering with a larger company. Merck, Roche, and Novartis have (in various ways) shown that they feel as if they have pretty much all the RNAi that they need for now, so it'll have to be someone else. Maybe AZ or Lilly, the companies with the biggest patent-expiration problems?
Category: Biological News | Business and Markets
July 27, 2011
You hear often about how many marketed drugs target G-protein coupled receptors (GPCRs). And it's true, but not all GPCRs are created equal. There's a family of them (the Class B receptors) that has a number of important drug targets in it, but getting small-molecule drugs to hit them has been a real chore. There's glucagon, CRF, GHRH, GLP-1, PACAP and plenty more, but they all recognize good-sized peptides as ligands, not friendly little small molecules. Drug-sized things have been found that affect a few of these receptors, but it has not been easy, and pretty much all of them have been antagonists. (That makes sense, because it's almost always easier to block a binding event than to hit the switch in just the right way to turn a receptor on).
That peptide-to-receptor binding also means that we don't know nearly as much about what's going on in the receptor as we do for the small-molecule GPCRs (and there are still plenty of mysteries around even those). The generally accepted model is a two-step process: there's an extra section of the receptor protein that sticks out and recognizes the C-terminal end of the peptide ligand first. Once that's bound, the N-terminal part of the peptide ligand binds into the seven-transmembrane-domain part of the receptor. The first part of that process is much better worked out than the second.
Now a German team has reported an interesting approach that might help to clear some things up. They synthesized a C-terminal peptide that was expected to bind to the extracellular domain of the CRF receptor, and made it with an azide coming off its N-terminal end. (Many of you will now have guessed where this is going!) Then they took a weak peptide agonist piece and decorated its end with an acetylene. Doing the triazole-forming "click" reaction between the two gave a nanomolar agonist for the receptor, revving up the activity of the second peptide by at least 10,000x.
This confirms the general feeling that the middle parts of the peptide ligands in this class are just spacers to hold the two business ends together in the right places. But it's a lot easier to run the "click" reaction than it is to make long peptides, so you can mix and match pieces more quickly. That's what this group did next, settling on a 12-amino-acid sequence as their starting point for the agonist peptide and running variations on it.
Out of 96 successful couplings to the carrier protein, 70 of the new combinations lowered the activity (or got rid of it completely), 15 were about the same as the original sequence, and 11 were actually more potent. Combining those single-point changes into "greatest-hit" sequences led to some really potent compounds, down to picomolar levels. And by that time, they found that they could get rid of the tethered carrier protein part, ending up with a nanomolar agonist peptide that only does the GPCR-binding part and bypasses the extracellular domain completely. (Interestingly, this one had five non-natural amino acid substitutions).
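For those who like the logic spelled out: the "greatest-hits" step amounts to assuming that the single-point effects are independent, so their fold-changes multiply (that is, they add in log potency). Here's a minimal sketch of that assumption - the substitution names and all the numbers are entirely hypothetical:

```python
import math

# Hypothetical fold-changes in potency for single substitutions,
# relative to the starting 12-mer agonist (names and values invented).
single_point_gain = {
    "pos3_Aib":  4.0,
    "pos5_Nle":  2.5,
    "pos8_DAla": 3.0,
    "pos11_Nva": 1.8,
}

# Keep every beneficial change and assume independence: the individual
# fold-changes multiply, i.e., the effects are additive in log potency.
beneficial = {k: v for k, v in single_point_gain.items() if v > 1.0}
combined_gain = math.prod(beneficial.values())

start_ec50_nM = 50.0   # hypothetical starting potency
print(f"Substitutions combined: {sorted(beneficial)}")
print(f"Predicted gain: {combined_gain:.0f}x")
print(f"Predicted EC50: {start_ec50_nM / combined_gain * 1000:.0f} pM")
```

Real SAR is rarely that well behaved, of course, which is why the combined sequences still had to be synthesized and tested.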
Now that's a surprise. Part of the generally accepted model for binding had the receptor changing shape during that first extracellular binding event, but in the case of these new peptides, that's clearly not happening. These things are acting more like the small-molecule GPCR agonists and just going directly into the receptor to do their thing. The authors suggest that this "carrier-conjugate" approach should speed up screening of new ligands for the other receptors in this category, and should be adaptable to molecules that aren't peptides at all. That would be quite interesting indeed: leave the carrier on until you have enough potency to get rid of it.
Category: Biological News | Chemical News | Drug Assays
July 6, 2011
There's been a real advance in the field of engineered "unnatural life", but it hasn't produced one-hundredth the headlines that the arsenic bacteria story did. This work is a lot more solid, although it's hard to summarize in a snappy way.
Everyone knows about the four bases of DNA (A, T, C, G). What this team has done is force bacteria to use a substitute for the T, thymine - 5-chlorouracil, which has a chlorine atom where thymine's methyl group is. From a med-chem perspective, that's a good switch. The two groups are about the same size, but they're different enough that the resulting compounds can have varying properties. And thymine is a good candidate for a swap, since it's not used in RNA, thus limiting the number of systems that have to change to accommodate the new base. (RNA, of course, uses uracil instead, the unsubstituted parent compound of both thymine and the 5-chloro derivative used here).
Over the years, chlorouracil has been studied in DNA for just that reason, and it's been found to make the proper base-pair hydrogen bonds, among other things. So incorporating it into living bacteria looks like an experiment in just the right spot - different enough to be a real challenge, but similar enough to be (probably) doable. People have taken a crack at similar experiments before, with mixed success. In the 1970s, mutant hamster cells were grown in the presence of the bromo analog, and apparently generated DNA which was strongly enriched with that unnatural base. But there were a number of other variables that complicated the experiment, and molecular biology techniques were in their infancy at the time. Then in 1992, a group tried replacing the thymine in E. coli with uracil, with multiple mutations that shut down the T-handling pathways. They got up to about 90% uracil in the DNA, but this stopped the bacteria from growing - they just seemed to be hanging on under those T-deprived conditions, but couldn't do much else. (In general, withholding thymine from bacterial cultures and other cells is a good way to kill them off).
This time, things were done in a more controlled manner. The feat was accomplished by good old evolutionary selection pressure, using an ingenious automated system. An E. coli strain was produced with several mutations in its thymine pathways to allow it to survive under near-thymine-starvation conditions. These bacteria were then grown in a chamber where their population density was constantly measured (by turbidity). Every ten minutes a nutrient pulse went in: if the population density was above a set limit, the cells were given a fixed amount of chlorouracil solution to use. If the population had fallen below a set level, the cells received a dose of thymine-containing solution to keep them alive. A key feature of the device was the use of two culture chambers, with the bacteria being periodically swapped from one to the other (while the vacated chamber undergoes sterilization with 5M sodium hydroxide!) That's to keep biofilm formation from giving the bacteria an escape route from the selection pressure, which is apparently just what they'll do, given the chance. One "culture machine" was set for a generation time of about two hours, and another for a 4-hour cycle (by cutting the nutrient amounts in half). This cycle selected for mutations that allowed the use of chlorouracil throughout the bacteria's biochemistry.
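The control logic of that culture machine is simple enough to fit in a few lines. Here's a toy version - the ten-minute pulse interval is from the paper, but the setpoint, growth numbers, and swap schedule are all invented:

```python
import random

random.seed(1)

THRESHOLD_OD = 0.5   # density setpoint (arbitrary units, assumed)
N_PULSES = 144       # one simulated day of 10-minute pulses
SWAP_EVERY = 36      # chamber-swap interval in pulses (assumed)

population = 4e8     # cells per mL, toy starting value

for pulse in range(1, N_PULSES + 1):
    od = population / 1e9                         # stand-in turbidity reading
    if od >= THRESHOLD_OD:
        feed = "chlorouracil"                     # thriving: apply selection pressure
        population *= random.uniform(0.75, 1.15)  # toy growth under stress
    else:
        feed = "thymine"                          # faltering: rescue the culture
        population *= random.uniform(1.05, 1.40)
    if pulse % SWAP_EVERY == 0:
        print(f"pulse {pulse:3d}: swap to the clean chamber, sterilize the old one")
    if pulse % 12 == 0:                           # report every two simulated hours
        print(f"pulse {pulse:3d}: OD {od:.2f}, fed {feed}")
```

The elegant part is that the machine needs to know nothing about the biochemistry - it just holds the culture at the edge of thymine starvation and lets selection do the rest.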
And that's what happened - the proportion of chlorouracil solution being consumed went up with time. The bacterial population had plenty of dramatic rises and dips, but the trend was clear. After 23 days, the experimenters cranked up the pressure - now the "rescue" solution was a lower concentration of thymine, mixed 1:1 with chlorouracil, and the other solution was a lower concentration of chlorouracil only. The proportion of the latter solution used still kept going up under these conditions as well. Both groups (the 2-hour cycle and the 4-hour cycle ones) were consuming only chlorouracil solution by the time the experiment went past 140 days or so.
Analysis of their DNA showed that it had incorporated about 90% chlorouracil in the place of thymine. The group identified a previously unknown pathway (U54 tRNA methyltransferase) that was bringing thymine back into the pathway, and disrupting this gene knocked the thymine content down to just above detection level (1.5%). Mass spec analysis of the DNA from these strains clearly showed the chlorouracil present in DNA fractions.
The resulting bacteria from each group, it turned out, could still grow on thymine, albeit with a lag time in their culture. If they were switched to thymine media and grown there, though, they could immediately make the transition back to growing on chlorouracil, which shows that their ability to do so was now coded in their genomes. (The re-thymined bacteria, by the way, could be assayed by mass spec as well for the disappearance of their chlorouracil).
These re-thymined bacteria were sequenced (since the chlorouracil mutants wouldn't have matched up too well with sequencing technology!) and they showed over 1500 base substitutions. Interestingly, there were twice as many in the A-T to G-C direction as in the opposite one, which suggests that chlorouracil tends to mispair a bit with guanine. The four-hour-cycle strain had not only these sorts of base swaps, but also some whole chromosome rearrangements. As the authors put it, and boy are they right, "It would have been impossible to predict the genetic alterations underlying these adaptations from current biological knowledge. . ."
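Tallying that mutational bias is the sort of analysis that fits in a few lines; here's a toy version on made-up sequences (the real comparison was, of course, genome-wide):

```python
# Count the direction of base substitutions between an ancestral and an
# evolved sequence (toy data, invented for illustration).
ancestor = "ATGCATTAGCATATCGGATA"
evolved  = "ATGCGTTAGCGTATTGGGTA"

at_to_gc = gc_to_at = 0
for a, e in zip(ancestor, evolved):
    if a == e:
        continue
    if a in "AT" and e in "GC":
        at_to_gc += 1   # the direction expected if chlorouracil mispairs with G
    elif a in "GC" and e in "AT":
        gc_to_at += 1

print(f"A/T -> G/C: {at_to_gc}, G/C -> A/T: {gc_to_at}")
```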
These bacteria are already way over to the side of all the life on Earth. But the next step would be to produce bacteria that have to live on chlorouracil and just ignore thymine. If that can be realized, the resulting organisms will be the first representatives of a new biology - no cellular life form has ever been discovered that completely switches out one of the DNA bases. These sorts of experiments open the door to organisms with expanded genetic codes, new and unnatural proteins and enzymes, and who knows what else besides. And they'll be essentially firewalled from all other living creatures.
Postscript: and yes, it's occurred to me as well that this sort of system would be a good way to evolve arsenate-using bacteria, if they do really exist. The problem (as it is with the current work) is getting truly phosphate-free media. But if you had such, and ran the experiment, I'd suggest isolating small samples along the way and starting them fresh in new apparatus, in order to keep the culture from living off the phosphate from previous generations. Trying to get rid of one organic molecule is hard enough; trying to clear out a whole element is a much harder proposition.
Category: Biological News | Chemical Biology | Life As We (Don't) Know It
July 1, 2011
I've been meaning to link to John LaMattina's blog for some time now. He's a former R&D guy (and author of Drug Truths: Dispelling the Myths About Pharma R & D, which I reviewed here for Nature Chemistry), and he knows what he's talking about when it comes to med-chem and drug development.
Here he takes on the recent "Scientists Crack the Histamine Code" headlines that you may have seen this week. Do we have room, he wonders, for a third-generation antihistamine, or not?
Category: Biological News | Drug Industry History
June 14, 2011
We spend a lot of time thinking about proteins in this business - after all, they're the targets for almost every known drug. One of the puzzling things about them, though, is the question of just how orderly they are.
That's "order" as in "ordered structure". If you're used to seeing proteins in X-ray crystal structures, they appear quite orderly indeed, but that's an illusion. (In fact, to me, that's one of the biggest things to look out for when dealing with X-ray information - the need to remember that you're not seeing something that's built out of solid resin or metal bars. Those nice graphics are, even when they're right, just snapshots of something that can move around). Even in many X-ray studies, you can see some loops of proteins that just don't return useful electron density. They're "disordered". Sometimes, in the pictures, a structure will be put up in that region as a placeholder (and the crystallographers will tell you not to put much stock in it), and sometimes there will just be a blank region or some dotted lines. Either way, "disordered" means what it says - the protein in that region adopts and/or switches between a number of different conformations, with no clear preference for any of them.
And that makes sense for a big, floppy loop that makes an excursion out from the ordered core of a protein. But how far can disorder extend? We have a tendency to think that the intrinsic state of a protein is a more or less orderly one, which we just refer to (if we do at all) as "folded". (You can divide that into two further classes - "properly folded" when the protein does what we want it to do, and "improperly folded" when it doesn't. There are a number of less polite synonyms for that latter state as well). Are all proteins so well folded, though?
It's becoming increasingly clear that the answer is no, they aren't. Here's a new paper in JACS that examines the crystallographic data and concludes that proteins cover the entire range, from almost completely ordered to almost completely disordered. When you consider that the more disordered ones are surely less likely to be represented in that data set, you have to conclude that there are probably a lot of them out there. Even the ones with relatively orderly regions can turn out to have important functions for their disordered parts. The study of these "intrinsically disordered proteins" (IDPs) has really taken off in the last few years. (Here's another paper on the subject that's also just out in JACS, to prove the point!)
So what's a disordered protein for? (Here's one of the key papers in the field that addresses this question). One such protein would have a number of conformations available to it inside a pretty small energy window, and this might permit it to have different functions, binding to rather different partners without having to do much energetically costly refolding. They could be useful for broad selectivity/low affinity situations and have faster on (or off) rates with their binding partners. (That second new JACS paper linked to above suggests that it's selection pressure on those rates that has given us so many disordered proteins in the first place). Interestingly, several of these IDPs have shown up with links to human disease, so we're going to have to deal with them somehow. Here's a recent attempt to come to grips with what structure they have; it's not an easy task. And it's not like figuring all this stuff out even for the ordered proteins is all that easy, either, but this is the world as we find it.
Category: Biological News
June 8, 2011
I haven't read it yet, but there's a new book on the whole "garage biotech" field, which I've blogged about here and here. Biopunk looks to be a survey of the whole movement; I hope to go through it shortly.
I'm still on the "let a thousand flowers bloom" side of this issue, myself, but it's certainly not without its worries. But this is the world we've got - where these things are possible, and getting more possible all the time - and we're going to have to make the best of it. Trying to stuff it back down will, I think, only increase the proportion of harmful lunatics who try it.
By the way, since that's an Amazon link, I should note that I do get a cut from them whenever someone buys through a link on the site, and not just from the particular item ordered. I've never had a tip jar on the site, and I never plan to, but the Amazon affiliate program does provide some useful book-buying money around here at no cost to the readership.
Category: Biological News | Book Recommendations
June 2, 2011
Your genome - destiny, right? That's what some of us thought - every disease was going to have one or more associated genes, those genes would code for new drug targets, and we'd all have a great time picking them off one by one. It didn't work out that way, of course, but there are still all these papers out there in the literature, linking Gene A with the chances of getting Disease B. So how much are those worth?
While we're at it, everyone also wanted (and still wants) biomarkers of all kinds. Not just genes, but protein and metabolite levels in the blood or other tissue to predict disease risk or progression. I can't begin to estimate how much work has been going into biomarker research in this business - a good biomarker can clarify your clinical trial design, regulatory picture, and eventual marketing enormously - if you can find one. Plenty of them have been reported in the literature. How much are those worth, too?
Not a whole heck of a lot, honestly, according to a new paper in JAMA by John Ioannidis and Orestes Panagiotou. They looked at the disease marker highlights from the last 20 years or so, the 35 papers that had been cited at least 400 times. How good do the biomarkers in those papers have to be to be useful? An increase of 35% in the chance of getting the targeted condition? Sorry - only one-fifth of them rise to that level, when you go back and see how they've held up in the real world.
Subsequent studies, in fact, very rarely show anything as strong as the original results - 29 of the 35 biomarkers showed a less robust association after meta-analysis of all the follow-up reports, as compared to what was claimed at first. And those later studies tend to be larger and better powered - in only 3 cases was the highly cited study the largest one that had been run, and only twice did the largest study show a higher effect measure than the original highly cited one. Only 15 of the 35 biomarkers were even nominally statistically significant in the largest studies of them.
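That shrinkage is just what the "winner's curse" predicts: when many noisy studies estimate the same modest effect, the one that gets famous is usually the one that overshot. A toy simulation (all parameters invented) makes the point:

```python
import math
import random
import statistics

random.seed(42)

TRUE_RR = 1.15    # assumed modest real relative risk
N_STUDIES = 40    # hypothetical number of studies of the same marker
NOISE_SD = 0.20   # sampling noise on the log-RR scale

log_rrs = [random.gauss(math.log(TRUE_RR), NOISE_SD) for _ in range(N_STUDIES)]

headline = max(log_rrs)            # the extreme result that gets cited
pooled = statistics.mean(log_rrs)  # crude stand-in for a meta-analysis

print(f"True RR:     {TRUE_RR:.2f}")
print(f"Headline RR: {math.exp(headline):.2f}")  # inflated by selection
print(f"Pooled RR:   {math.exp(pooled):.2f}")    # shrinks back toward the truth
```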
Ioannidis has been hitting the literature's unreliability for some time now, and I think that it's hard to dispute his points. The first thought that any scientist should have when an interesting result is reported is "Great! Wonder if it's true?" There are a lot of reasons for things not to be (see that earlier post for a discussion of them), and we need to be aware of how often they operate.
Category: Biological News | The Scientific Literature
May 19, 2011
Hmm. Remember when the Nobel Prize came out for telomere research? Now there are competing companies offering telomere-length screening, and one of them (Telome Sciences) was partly founded by Elizabeth Blackburn, one of the Nobel awardees. That isn't going down well with. . .one of the other awardees:
But among the critics of such tests is Carol Greider, a molecular biologist at Johns Hopkins University, who was a co-winner of the Nobel Prize with Dr. Blackburn.
Dr. Greider acknowledged that solid evidence showed that the 1 percent of people with the shortest telomeres were at an increased risk of certain diseases, particularly bone marrow failure and pulmonary fibrosis, a fatal scarring of the lungs. But outside of that 1 percent, she said, “The science really isn’t there to tell us what the consequences are of your telomere length.”
Dr. Greider said that there was great variability in telomere length. “A given telomere length can be from a 20-year-old or a 70-year-old,” she said. “You could send me a DNA sample and I couldn’t tell you how old that person is.”
Greider is also a former student of Blackburn's, which makes things even messier. I can see why she's uneasy. Looking over the news accounts, there's an awful lot of noise and hype - all kinds of stuff about "Test Predicts How Long You'll Live!" and so on. The hype has been building for some time, though, and I'll bet that we're nowhere near the crest. As for me, I'm not rushing out to check my telomeres until I know what that means (and until I know if there's anything I can do about it).
Category: Biological News | Business and Markets
February 18, 2011
A few years ago, I wrote here about Luca Turin and his theory that our sense of smell is at least partly responsive to vibrational spectra. (Turin himself was the subject of this book, author of this one (which is quite interesting and entertaining for organic chemists), and co-author of Perfumes: The A-Z Guide, perhaps the first attempt to comprehensively review and categorize perfumes).
Turin's theory is not meant to overturn the usual theories of smell (which depend on shape and polarity as the molecules bind into olfactory receptors), but to extend them. He believes that there are anomalies in scent that can't be explained by the current model, and has been proposing experiments to test them. Now he and his collaborators have a new paper in PNAS with some very interesting data.
They're checking to see if Drosophila (fruit flies) can tell the difference between deuterated and non-deuterated compounds. The idea here is that the size and shape of the two forms are identical; there should be no way to smell the difference. But it appears that the flies can: they discriminate, in varying ways, between the deuterated and normal forms of acetophenone, octanol, and benzaldehyde. Deuterated acetophenone, for example, turns out to be aversive to fruit flies (whereas the normal form is attractive), and the aversive quality goes up as you move from the d-3 to the d-5 and d-8 forms of the isotopically labeled compound.
The flies could also be trained, by a conditioned avoidance protocol, to discriminate between all of the isotopic pairs. Most interestingly, if trained to avoid a particular normal or deutero form of one compound, they responded similarly when presented with a novel pair, which seems to indicate that they pick up a "deuterated" scent effect that overlays several chemical classes.
There's more to the paper; definitely read it if you're interested in this sort of thing. Reactions to it have been all over the place, from people who sound convinced to people who aren't buying any of it. If Turin is right, though, it may indeed be true that we're smelling the differences between C-H stretching vibrations, possibly through an electron tunneling mechanism, which is a rather weird thought. But then, it's a weird world.
Category: Biological News | Chemical News
January 14, 2011
Everyone in this industry wants to have good, predictive biomarkers for human diseases. We've wanted that for a very long time, though, and in most cases, we're still waiting. [For those outside the field, a biomarker is some sort of easy-to-run test for a factor that correlates with the course of the real disease. Viral titer for an infection or cholesterol levels for atherosclerosis are two examples. The hope is to find a simple blood test that will give you advance news of how a slow-progressing disease is responding to treatment]. Sometimes the problem is that we have markers, but that no one can quite agree on how relevant they are (and for which patients), and other times we have nothing to work with at all.
A patient's antibodies might, in theory, be a good place to look for markers in many disease states, but that's some haystack to go rooting around in. Any given person is estimated, very roughly, to produce maybe ten billion different antibodies. And in many cases, we have no idea which ones to look for, since we don't really know what abnormal molecules they've been raised to recognize. (It's a chicken-and-egg problem: if we knew what those antigens were, we'd probably just look for them directly with reagents of our own).
So if you don't have a good starting point, what to do? One approach has been to go straight into tissue samples from patients and look for unusual molecules, in the belief that these might well be associated with the disease. (You can then do just as above to try to use them as a biomarker - look for the molecules themselves, if they're easy to assay, or look for circulating antibodies that bind to them). This direct route has only become feasible in recent years, with advanced mass spec and data handling techniques, but it's still a pretty formidable challenge. (Here's a review of the field).
A new paper in Cell takes another approach. The authors figured that antigen molecules would probably look like rather weirdly modified peptides, so they generated a library of several thousand weirdo "peptoids". (These are basically poly-glycines with anomalous N-substituents). They put these together as a microarray and used them as probes against serum from animal models of disease.
Rather surprisingly, the idea seems to have worked. In a rodent model of multiple sclerosis (the EAE, or experimental autoimmune encephalomyelitis model), they found several peptoids that pulled down antibodies from the model animals and not from the controls. A time course showed that these antibodies came on at just the speed expected for an immune response in the animal model. As a control, another set of mice were immunized with a different (non-disease-causing) protein, and a different set of peptoids pulled down those resulting antibodies, with little or no cross-reactivity.
Finally, the authors turned to a real-world case: Alzheimer's disease. They tried out their array on serum from six Alzheimer's patients, versus six age-matched controls, and six Parkinson's patients as another control, and found three peptoids that seem to have about a 3-fold window for antibodies in the AD group. Further experimentation (passing serum repeatedly over these peptoids before assaying) showed that two of them seem to react with the same antibody, while one of them has a completely different partner. These experiments also showed that they are indeed pulling down the same antibodies in each of the patients, which is an important thing to make sure of.
Using those three peptoids by themselves, they tried a further 16 AD patient samples, 16 negative controls, and 6 samples from patients with lupus, all blinded, and did pretty well: the lupus patients were clearly distinguished as weak binders, the AD patients all showed strong binding, and 14 out of the 16 control patients showed weak binding. Two of the controls, though, showed raised levels of antibody detection, up to the lowest of the AD patients.
So while this isn't good enough for a diagnostic yet, for a blind shot into the wild blue immunological yonder, it's pretty impressive. Although. . .there's always the possibility that this is already good enough, and that the test picked up presymptomatic Alzheimer's in those two control patients. I suppose we're going to have to wait to find that out. As you'd imagine, the authors are extending these studies to wider patient populations, trying to make the assay easier to run, and trying to find out what native antigens these antibodies might be recognizing. I wish them luck, and I hope that it turns out that the technique can be applied to other diseases as well. This should keep a lot of people usefully occupied for quite some time!
Category: Analytical Chemistry | Biological News | The Central Nervous System
December 7, 2010
It's time to revisit the arsenic-using bacteria paper. I wrote about it on the day it came out, mainly to try to correct a lot of the poorly done reporting in the general press. These bacteria weren't another form of life, they weren't from another planet, they weren't (as found) living on arsenic (and they weren't "eating" it), and so on.
Now it's time to dig into the technical details, because it looks like the arguing over this work is coming down to analytical chemistry. Not everyone is buying the conclusion that these bacteria have incorporated arsenate into their biomolecules, with the most focused objections being found here, from Rosie Redfield at UBC.
So, what's the problem? Let's look at the actual claims of the paper and see how strong the evidence is for each of them:
Claim 1: the bacteria (GFAJ-1) grow on an arsenate-containing medium with no added phosphate. The authors say that after several transfers into higher-arsenic media, they're maintaining the bacteria in the presence of 40 mM arsenate, 10 mM glucose, and no added phosphate. But that last phrase is not quite correct, since they also say that there's about 3 micromolar phosphate present from impurities in the other salts.
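Just to put that contaminating phosphate in context, here's a quick back-of-the-envelope calculation, using nothing but the two concentrations quoted above:

```python
# Rough context for the medium composition quoted above:
# 40 mM added arsenate versus ~3 uM contaminating phosphate.
arsenate = 40e-3     # mol/L, deliberately added
phosphate = 3e-6     # mol/L, carried in as an impurity in the other salts

print(f"As:P molar ratio in the medium: about {arsenate / phosphate:,.0f} to 1")
# -> As:P molar ratio in the medium: about 13,333 to 1
```

So the cells are swimming in something like thirteen thousand arsenates for every phosphate - which is the whole point of the experimental design, but it also shows how little phosphate it would take to matter.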
So is that enough? Well, the main evidence (as shown in their figure 1) is that if you move the bacteria to a medium that doesn't have the added arsenate (but still has the background level of phosphate), they don't grow. With added arsenate they do, but slowly. And with added phosphate, as mentioned before, they grow more robustly. It looks to me as if the biggest variable here might be the amount of phosphate that could be contaminating the arsenate source that they use. But their table S1 shows that the low level of phosphate in the media is the same both ways, whether they've added arsenate or not. Unless something's gone wrong with that measurement, that's not the answer.
One way or another, the fact that these bacteria seem to use arsenate to grow seems hard to escape. And they're not the kind of weirdo chemotroph to be able to run off arsenate/arsenite redox chemistry (if indeed there are any bacteria that use that system at all). (The paper does get one look at arsenic oxidation states in the near-edge X-ray data, and they don't see anything that corresponds to the plus-3 species). That would appear to leave the idea that they're using arsenate per se as an ingredient in their biochemistry - otherwise, why would they start to grow in its presence? (The Redfield link above takes up this question, wondering if the bacteria are scavenging phosphorus from dead neighbor cells, and points out that the cells may actually still be growing slowly without either added arsenic or phosphate).
Claim 2: the bacteria take up arsenate from the growth medium. To check this, the authors measured intracellular arsenic by ICP mass spec. This was done several ways, and I'll look at the total dry weight values first.
Those arsenic levels were rather variable, but always ran high. Looking at the supplementary data, there are some large differences between two batches of bacteria, one from June and one from July. And there's also some variability in the assay itself: the June cells show between 0.114% and 0.624% arsenic (as the assay is repeated), while the July cells show much lower (and tighter) values, between 0.009% and 0.011%. Meanwhile, the corresponding amount of phosphorus is 0.023% to 0.036% in June (As/P of 5 up to 27), and 0.011% to 0.014% in July (As/P of 0.76 to 0.97).
The paper averages these two batches of cells, but it certainly looks like the June bunch were much more robust in their uptake of arsenate. You might look at the July set and think, man, those didn't work out at all, since they actually have more phosphorus than arsenic in them. But the background state should be way lower than that. When you look at the corresponding no-arsenate cell batches, the differences are dramatic in both June and July. The June no-arsenate cells showed at least ten times as much phosphorus, and a thousand times less arsenic, than their arsenate-grown counterparts, and the July no-arsenate run showed (compared to the July arsenic bunch) 60 times as much phosphorus and a tenth the arsenic. The As/P ratio for both sets hovers around 0.001 to 0.002.
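For anyone who wants to redo that arithmetic, here's a minimal sketch. The specific June and July pairings are my own illustrative choices from the ranges quoted above; the paper reports replicate measurements, not these exact pairs:

```python
# As/P ratios (by weight) from the dry-weight percentages quoted above.
# The specific pairings below are illustrative, taken from the quoted ranges.
batches = {
    "June, +As/-P": (0.624, 0.023),     # (% As, % P) of cell dry weight
    "July, +As/-P": (0.009, 0.0118),
}
for name, (as_pct, p_pct) in batches.items():
    print(f"{name}: As/P = {as_pct / p_pct:.2f}")
# -> June, +As/-P: As/P = 27.13
# -> July, +As/-P: As/P = 0.76
```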
I'll still bet the authors were very disappointed that the July batch didn't come back as dramatic as the June ones. (And I have to give them some credit for including both batches in the paper, and not trying just to make it through with the June-bugs). One big question is what happens when you run the forced-arsenate-growth experiment more times: are the June cells typical, or some sort of weird anomaly? And do they still have both groups growing even now?
One of the points the authors make is that the arsenate-grown cells don't have enough phosphorus to survive. Rosie Redfield doesn't buy this one, and I'll defer to her expertise as a microbiologist. I'd like to hear some more views on this, because it's a potentially important point. There are several possibilities - from most exciting to least:
1. The bacteria prefer phosphorus, but are able to take up and incorporate substantial amounts of arsenate, to the point that they can live even below the level of phosphorus needed to normally keep them alive. They probably still need a certain core amount of phosphate, though. This is the position of the paper's authors.
2. The bacteria prefer phosphorus, but are able to take up and incorporate substantial amounts of arsenate. But they still have an amount of phosphate present that would keep them going, so the arsenate must be in "non-critical" biochemical spots - basically, the ones that can stand having it. (This sounds believable, but we still have to explain the growth in the presence of arsenate).
3. The bacteria prefer phosphorus, but are able to take up and incorporate substantial amounts of arsenate. This arsenate, though, is sequestered somehow and is not substituting for phosphate in the organisms' biochemistry. (In this case, you'd wonder why the bacteria are taking up arsenate at all, if they're just having to ditch it. Perhaps they can't pump it out efficiently enough?) And again, we'd have to explain the growth in the presence of arsenate - for a situation like this, you'd think that it would hurt, rather than help, by imposing an extra metabolic burden. I'm assuming here, for the sake of argument, that the whole grows-in-the-presence-of-arsenate story is correct.
Claim 3: the bacteria incorporate arsenate into their DNA as a replacement for phosphate. This is an attempt to distinguish between the possibilities just listed. I think that the authors chose the bacterial DNA because DNA has plenty of phosphate, is present in large quantities, can be isolated by known procedures (as opposed to lots of squirrely little phosphorylated small molecules), and would be a dramatic example of arsenate incorporation. These experiments were done by giving the bacteria radiolabeled arsenate and looking for its distribution.
Rosie Redfield has a number of criticisms of the way the authors isolated the DNA in these experiments, and again, since I'm not a microbiologist, I'll stand back and let that argument take place without getting involved. It's worth noting, though, that most (80%) of the label was in the phenol fraction of the initial extraction, which should have proteins and smaller-molecular-weight stuff in it. Very little showed up in the chloroform fraction (where the lipids would be), and most of the rest (11%) was in the final aqueous layer, where the nucleic acids should accumulate. Of course, if (water-soluble) arsenate was just hanging around, and not being incorporated into biomolecules, the distribution of the label might be pretty similar.
I think a very interesting experiment would be to take non-arsenate-grown GFAJ-1 bacteria, make pellets out of them as was done in this procedure, and then add straight radioactive arsenate to that mixture, in roughly the amounts seen in the arsenate-grown bacteria. How does the label distribute then, as the extractions go on?
Here we come to one of my biggest problems with the paper, after a close reading. When you look at the Supplementary Material, Table S1, you see that the phenol extract (where most of the label was) hardly shows any difference in total arsenic amounts, whether the cells were grown in high arsenate/no phosphate or high phosphate/no arsenate. The first group is just barely higher than the second, and probably within error bars, anyway.
That makes me wonder what's going on - if these cells are taking up arsenate (and especially if they grow on it), why don't we see more of it in the phenol fraction, compared to bacteria that aren't exposed to it at all? Recall that when arsenic was measured by dry weight, there was a real difference. Somewhere there has to be a fraction that shows a shift, and if it's not in the place where 80% of the radiolabel goes, then where could that be?
I think that the authors would like to say "It's in the DNA", but I don't see that data as supporting enough of a change in the arsenic levels. In fact, although they do show some arsenate in purified DNA, the initial DNA/RNA extract from the two groups (high As/no P and no As/high P) shows more arsenic in the bacteria that weren't getting arsenic at all. (These are the top two lines in Table S1 continued, top of page 11 in the Supplementary Information). The arsenate-in-the-DNA conclusion of this paper is, to my mind, absolutely the weakest part of the whole thing.
Conclusion: All in all, I'm very interested in these experiments, but I'm now only partly convinced. So what do the authors need to shore things up? As a chemist, I'm going to ask for more chemical evidence. I'd like to see some mass spec work done on cellular extracts, comparing the high-arsenic and no-arsenic groups. Can we see evidence of arsenate-for-phosphate in the molecular weights? If DNA was good enough to purify with arsenate still on it, how about the proteome? There are a number of ways to look that over by mass-spec techniques, and this really needs to be done.
Can any of the putative arsenate-containing species be purified by LC? LC/mass spec data would be very strong evidence indeed. I'd recommend that the authors look into this as soon as possible, since this could address biomolecules of all sizes. I would assume that X-ray crystallography data on any of these would be a long shot, but if the LC purification works, it might be possible to get enough to try. It would certainly shut everyone up!
Update: this seems like the backlash day. Nature News has a piece up, which partially quotes from this article by Carl Zimmer over at Slate.
Category: Biological News | General Scientific News
November 17, 2010
So Roche is (as long rumored) going through with a 6% headcount reduction, worldwide. That's bad news, but not unexpected bad news, and it certainly doesn't make them stand out from the rest of big pharma. This sort of headline has been relentlessly applicable for several years now.
What surprised me was their announcement that they're giving up on RNA interference as a drug mechanism. That's the biggest vote of no-confidence yet for RNAi, which has been a subject of great interest (and a lot of breathless hype) for some years now. (There's been a lot of discussion around here about the balance between those two).
That's not the sort of news that the smaller companies in this space needed. Alnylam, considered the leader in the field, had already taken in over $300 million from Roche (back in 2007), but so much for anything more. The company is putting on a brave face, but it has not been a good fall season: they were already having to cut back after Novartis recently thanked them for their five-year deal, shook their hand, and left. To be sure, Novartis said that they're going to continue to develop the targets from the collaboration, and would pay milestones to Alnylam as any of them progress - but they apparently didn't feel as if they needed Alnylam around while they did so.
Then there's Tekmira, who had a deal with Roche for nanoparticle RNAi delivery. They're out with a statement this morning, too, saying (correctly) that they have other deals which are still alive. But there's no way around the fact that this is bad news.
What we don't know is what's going on in the other large companies (the Mercks, Pfizers, and so on) who have been helping to fund a lot of this work. Are they wondering what in the world Roche is up to? Looking at it as a market opportunity, and glad to see less competition? Or wishing that they could do the same thing?
Category: Biological News | Business and Markets
November 12, 2010
Back in January, I wrote about the controversial "Reactome" paper that had appeared in Science. This is the one that claimed to have immobilized over 1600 different kinds of biomolecules onto nanoparticles, and then used chemical means to set off a fluorescence assay when any protein recognized them. When actual organic chemists got a look at their scheme - something that apparently never happened during the review process - flags went up. As shown in that January post (and all over the chemical blogging world), the actual reactions looked, well, otherworldly.
Science was already backtracking within the first couple of months, and back in the summer, an institutional committee recommended that it be withdrawn. Since then, people have been waiting for the thunk of another shoe dropping, and now it's landed: the entire paper has been retracted. (More at C&E News). The lead author, though, tells Nature that other people have been using his methods, as described, and that he's still going to clear everything up.
I'm not sure how that's going to happen, but I'll be interested to see the attempt being made. The organic chemistry in the original paper was truly weird (and truly unworkable), and the whole concept of being able to whip up some complicated reaction schemes in the presence of a huge number of varied (and unprotected) molecules didn't make sense. The whole thing sounded like a particularly arrogant molecular biologist's idea of how synthetic chemistry should work: do it like a real biologist does! Sweeping boldly across the protein landscape, you just make them all work at the same time - haven't you chemists ever heard of microarrays? Of proteomics? Why won't you people get with the times?
And the sorts of things that do work in modern biology would almost make you believe in that approach, until you look closely. Modern biology depends on a wonderful legacy, a set of incredible tools bequeathed to us by billions of years of the most brutal product-development cycles imaginable (work or quite literally die). Organic chemistry, though, had no Aladdin's cave of enzymes and exquisitely adapted chemistries to stumble into. We've had to work everything out ourselves. And although we've gotten pretty good at it, the actions of something like RNA polymerase still look like the works of angels in comparison.
Category: Biological News | The Scientific Literature
November 8, 2010
Here's an excellent background article on epigenetics, especially good for getting up to speed if you haven't had the opportunity to think about what gene transcription must really be like down on a molecular level.
This also fits in well with some of the obituaries that I and others have written for the turn-of-the-millennium genomics frenzy. There is, in short, an awful lot more to things than just the raw genetic code. And as time goes on, the whole the-code-is-destiny attitude that was so pervasive ten years ago (the air hasn't completely cleared yet) is looking more and more mistaken.
Category: Biological News
November 3, 2010
This article is getting the "cure for the common cold" push in a number of newspaper headlines and blog posts. I'm always alert for those, because, as a medicinal chemist, I can tell you that finding a c-for-the-c-c is actually very hard. So how does this one look?
I'd say that this falls into the "interesting discovery, confused reporting" category, which is a broad one. The Cambridge team whose work is getting all the press has actually found something that's very much worth knowing: antibodies actually work inside human cells. Turns out that when antibody-tagged viral particles are taken up into cells, they mark the viruses for destruction in the proteasome, a protein complex that's been accurately compared to an industrial crushing machine at a recycling center. No one knew this up until now - the thought had been that once a virus succeeded in entering the cell, the game was pretty much up. But now we know that there is a last line of defense.
Some of the press coverage makes it sound as if this is some new process, a trick that cells have now been taught to perform. But the point is that they've been doing it all along (at least to nonenveloped viruses with antibodies on them), and that we've just now caught on. Unfortunately, that means that all our viral epidemics take place in the face of this mechanism (although they'd presumably be even worse without it). So where does this "cure for the common cold" stuff come in?
That looks like confusion over the mechanism to me. Let's go to the real paper, which is open-access in PNAS. The key protein in this process has been identified as tripartite-motif 21 (TRIM21), which recognizes immunoglobulin G and binds (extremely tightly, sub-nanomolar) to antibodies. This same group identified this protein a few years ago, and found that it's highly conserved across many species, and binds an antibody region that never changes - strong clues that it's up to something important.
Another region of TRIM21 suggested what that might be. It has a domain that's associated with ubiquitin ligase activity, and tagging something inside the cell with ubiquitin is like slapping a waste-disposal tag on it. Ubiquitinated proteins tend to either get consumed where they stand or dragged off to the proteasome. And sure enough, a compound that's known to inhibit the action of the proteasome also wiped out the TRIM21-based activity. A number of other tests (for levels of ubiquitination, localization within the cell, and so on) all point in the same direction, so this looks pretty solid.
But how do you turn this into a therapy, then? The newspaper articles have suggested it as a nasal spray, which raises some interesting questions. (Giving it orally is a nonstarter, I'd think: with rare exceptions, we tend to just digest every protein that gets into the gut, so all a TRIM21 pill would do is provide you with a tiny (and expensive) protein supplement). Remember, this is an intracellular mechanism; there's presumably not much of a role for TRIM21 outside the cell. Would a virus/antibody/TRIM21 complex even get inside the cell to be degraded? On the other hand, if that kept the virus from even entering the cell, that would be an effective therapy all its own, albeit through a different mechanism than ever intended.
But hold on: there must be some reason why this mechanism doesn't always work perfectly - otherwise, no nonenveloped virus would have much of a chance. My guess is that the TRIM21 pathway is pretty efficient, but that enough viral particles miss getting labeled by antibodies to keep it from always triggering. If that's true, then TRIM21 isn't the limiting factor here - it's the antibody response. And if so, it could be tough to rev up this pathway.
Still, these are early days. I'm very happy to see this work, because it shows us (again) how much we don't know about some very important cellular processes. Until this week, no one ever realized that there was such a thing as an intracellular antibody response. What else don't we know?
Category: Biological News | Infectious Diseases
November 1, 2010
There seems to be some disagreement within the US government on the patentability of human genes. The Department of Justice filed an amicus brief (PDF) in the Myriad Genetics case involving the BRCA genes, saying that it believes that genes are products of nature, and therefore unpatentable.
But this goes opposite to the current practice of the US Patent and Trademark Office, which does indeed grant such patents. No lawyers from the PTO appear on the brief, which may be a significant clue as to how they feel about this. And at any rate, gene patentability is going to be worked out in the courts, rather than by any sort of statement from any particular agency, which takes us back to the Myriad case. . .
Category: Biological News | Patents and IP
October 21, 2010
There's a headline I've never written before, for sure. A new paper in PNAS describes an assay in nematodes to look for compounds that have an effect on nerve regeneration. That means that you have to damage neurons first, naturally, and doing that on something as small (and as active) as a nematode is not trivial.
The authors (a team from MIT) used microfluidic chips to direct single nematodes into a small chamber where they're held down briefly by a membrane. Then an operator picks out one of its neurons on an imaging screen, whereupon a laser beam cuts it. The nematode is then released into a culture well, where it's exposed to some small molecule to see what effect that has on the neuron's regrowth. It takes about 20 seconds to process a single C. elegans, in case you're wondering, and I can imagine that after a while you'd wish that they weren't streaming along quite so relentlessly.
The group tried about 100 bioactive molecules, targeting a range of known pathways, to see what might speed up or slow down nerve regeneration. As it happens, the highest hit rates were among the kinase inhibitors and compounds targeting cytoskeletal processes. (By contrast, nothing affecting vesicle trafficking or histone deacetylase activity showed any effect). The most significant hit was an old friend to kinase researchers, staurosporine. Interestingly, this effect was only seen on particular subtypes of neurons, suggesting that they weren't picking up some sort of broad-spectrum regeneration pathway.
The paper acknowledges that staurosporine has a number of different activities, but treats it largely as a PKC inhibitor. I'm not sure that that's a good idea, personally - I'd be suspicious of pinning any specific activity to that compound without an awful lot of follow-up, because it's a real Claymore mine when it comes to kinases. The MIT group did check to see if caspases (and apoptotic pathways in general) were involved, since those are well-known effects of staurosporine treatment, and they seem to have ruled those out. And they also followed up with some other PKC inhibitors, chelerythrine and Gö 6983, and these showed similar effects.
So they may be right about this being a PKC pathway, but that's a tough one to nail down. (And even if you do, there are plenty of PKC isoforms doing different things, but there aren't enough selective ligands known to unravel all those yet). Chelerythrine inhibits alanine aminotransferase, has had some doubts expressed about it before in PKC work, and also binds to DNA, which may be responsible for some of its activity in cells. Gö 6983 seems to be a better tool, but it's in the same broad chemical class as staurosporine itself, so as a medicinal chemist I still find myself giving it the fishy eye.
This is very interesting work, nonetheless, and it's the sort of thing that no one's been able to do before. I'm a big fan of using the most complex systems you can to assay compounds, and living nematodes are a good spot to be in. I'd be quite interested in a broader screen of small molecules, but 20 seconds per nematode surgery is still too slow for the sort of thing a medicinal chemist like me would like to run - a diversity set of, say, ten or twenty thousand compounds, for starters. And there's always the problem we were talking about here the other day, about how easy it is to get compounds into nematodes at all. I wonder if there were some false negatives in this screen just because the critters had no exposure?
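To put some numbers on that throughput complaint, here's a minimal sketch - the worms-per-compound figure is my assumption, not anything from the paper:

```python
# How long would a med-chem-sized diversity screen take at this pace?
# Replicates per compound is an assumption on my part.
seconds_per_worm = 20        # laser surgery time quoted above
worms_per_compound = 20      # assumed replicates for a usable readout
compounds = 20_000           # a modest diversity set

total_s = seconds_per_worm * worms_per_compound * compounds
print(f"{total_s / 3600:,.0f} hours of continuous surgery "
      f"(about {total_s / 86400:.0f} days)")
# -> 2,222 hours of continuous surgery (about 93 days)
```

Three months of nonstop laser work before anything even gets scored - that's why I say it's still too slow.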
Category: Biological News | Drug Assays | The Central Nervous System
October 7, 2010
Nature has a good report and accompanying editorial on garage biotechnology, which I wrote about earlier this year.
. . .Would-be 'biohackers' around the world are setting up labs in their garages, closets and kitchens — from professional scientists keeping a side project at home to individuals who have never used a pipette before. They buy used lab equipment online, convert webcams into US$10 microscopes and incubate tubes of genetically engineered Escherichia coli in their armpits. (It's cheaper than shelling out $100 or more on a 37 °C incubator.) Some share protocols and ideas in open forums. Others prefer to keep their labs under wraps, concerned that authorities will take one look at the gear in their garages and label them as bioterrorists.
For now, most members of the do-it-yourself, or DIY, biology community are hobbyists, rigging up cheap equipment and tackling projects that — although not exactly pushing the boundaries of molecular biology — are creative proof of the hacker principle. . .
The article is correct when it says that a lot of what's been written about the subject is hype. But not all of it is. I continue to think that as equipment becomes cheaper and more capable, which is happening constantly, that more and more areas of research will move into the "garage-capable" category. Biology is suited to this sort of thing, because there are such huge swaths of it that aren't well understood, and there are always more experiments to be set up than anyone can run.
And it's encouraging to see that the FBI isn't coming down hard on these people, but rather trying to stay in touch with them and learn about the field. Considering where and how some of the largest tech companies in the US started out, I would not want to discourage curious and motivated people from exploring new technologies on their own - just the opposite. Scientific research is most definitely not a members-only club; anyone who thinks that they have an interesting idea should come on down. So while I do worry about the occasional maniac misanthrope, I think I'm willing to take the chance. And besides, the only way we're going to be able to deal with the lunatics is through better technology of our own.
Category: Biological News | Who Discovers and Why
October 6, 2010
I mentioned directed evolution of enzymes the other day as an example of chemical biology that’s really having an industrial impact. A recent paper in Science from groups at Merck and Codexis really highlights this. The story they tell had been presented at conferences, and had impressed plenty of listeners, so it’s good to have it all in print.
It centers on a reaction that’s used to produce the diabetes therapy Januvia (sitagliptin). There’s a key chiral amine in the molecule, which had been produced by asymmetric hydrogenation of an enamine. On scale, though, that’s not such a great reaction. Hydrogenation itself isn’t the biggest problem, although if you could ditch a pressurized hydrogen step for something that can’t explode, that would be a plus. No, the real problem was that the selectivity wasn’t quite what it should be, and the downstream material was contaminated with traces of rhodium from the catalyst.
So they looked at using a transaminase enzyme instead. That's a good idea, because transaminases are one of those enzyme classes that do something we organic chemists generally can't do very well - in this case, change a ketone to a chiral amino group in one step. (It takes another amine and oxidizes that on the other side of the reaction). We've got chiral reductions of imines and enamines, true, but those almost always need a lot of fiddling around for catalysts and conditions (and, as in this case, can cause their own problems even when they work). And going straight to a primary amine can be, in any case, one of the more difficult transformations. Ammonia itself isn't too reactive, and you don't have much of a steric handle to work with.
But transaminases have their idiosyncrasies (all enzymes do). They will generally only accept methyl ketones as substrates, and that's what these folks found when they screened all the commercially available enzymes. Looking over the structure (well, a homology model of the structure) of one of these (ATA-117), which would be expected to give the right stereochemistry if it could be made to give anything whatsoever, gave some clues. There's a large binding pocket on one side of the ketone, which still wasn't quite large enough for the sitagliptin intermediate, and a small site on the other side, which definitely wasn't going to take much more than a methyl group.
They went after the large binding pocket first. A less bulky version of the desired substrate (which had been turned, for now, into a methyl ketone) showed only 4% conversion with the starting enzymes. Mutating the various amino acids that looked important for large-pocket binding gave some hope. Changing a serine to a proline, for example, cranked up the activity 11-fold. The other four positions were, as the paper said, "subjected to saturation mutagenesis", and they also produced a combinatorial library of 216 multi-mutant variations.
Therein lies a tale. Think about the numbers here: according to the supplementary material for the paper, they varied twelve residues in the large binding pocket, with (say) twenty amino acid possibilities per. So you've got 240 enzyme variants to make and test. Not fun, but it's doable if you really want to. But if you're going to cover all the multi-mutant space, that's twenty to the 12th, or over four quadrillion enzyme candidates. That's not going to happen with any technology that I can easily picture right now. And you're going to want to sample this space, because enzyme amino acid residues most certainly do affect each other. Note, too, that we haven't even discussed the small pocket, which is going to have to be mutated as well.
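The arithmetic there is simple enough to check in a couple of lines - a quick sketch of the numbers in the paragraph above:

```python
# Single-position scanning versus the full combinatorial space
# for the large binding pocket.
positions = 12     # residues varied in the large pocket
choices = 20       # amino acid possibilities per position

print(positions * choices)            # one-at-a-time variants: 240
print(f"{choices ** positions:.2e}")  # full multi-mutant space: 4.10e+15
```

Two hundred forty variants is a project; four quadrillion is a fantasy.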
So there’s got to be some way to cut this problem down to size, and that (to my mind) is one of the things that Codexis is selling. They didn’t, for example, get a darn thing out of the single-point-mutation experiments. But one member of a library of 216 multi-mutant enzymes showed the first activity toward the real sitagliptin ketone precursor. This one had three changes in the small pocket and that one P-for-S in the large, and identifying where to start looking for these is truly the hard part. It appears to have been done through first ruling out the things that were least likely to work at any given residue, followed by an awful lot of computational docking.
It’s not like they had the Wonder Enzyme just yet, although just getting anything to happen at all must have been quite a reason to celebrate. If you loaded two grams/liter of ketone, and put in enzyme at 10 grams/liter (yep, ten grams per liter, holy cow), you got a whopping 0.7% conversion in 24 hours. But as tiny as that is, it’s a huge step up from flat zero.
Next up was a program of several rounds of directed evolution. All the variants that had shown something useful were taken through a round of changes at other residues, and the best of these combinations were taken on further. That statement, while true, gives you no feel at all for what this stuff is like, though. There are passages like this in the experimental details:
At this point in evolution, numerous library strategies were employed and as beneficial mutations were identified they were added into combinatorial libraries. The entire binding pocket was subjected to saturation mutagenesis in round 3. At position 69, mutations T, A, S, and C were improved over G. This is interesting in two aspects. First, V69A was an option in the small pocket combinatorial library, but was less beneficial than V69G. Second, G69T was improved (and found to be the most beneficial in the next round) suggesting that something other than sterics is involved at this position as it was a Val in the starting enzyme. At position 137, Thr was found to be preferred over Ile. Random mutagenesis generated two of the mutations in the round 3 variant: S8P and G215C. S8P was shown to increase expression and G215C is a surface exposed mutation which may be important for stability. Mutations identified from homologous enzymes identified M94I in the dimer interface as a beneficial mutation. In subsequent rounds of evolution the same library strategies were repeated and expanded. Saturation mutagenesis of the secondary sphere identified L61Y, also at the dimer interface, as being beneficial. The repeated saturation mutagenesis of 136 and 137 identified Y136F and T137E as being improved.
There, that wasn’t so easy, was it? This should give you some idea of what it’s like to engineer an enzyme, and what it’s like to go up against a billion years of random mutation. And that’s just the beginning – they ended up doing ten rounds of mutations, and had to backtrack some along the way when some things that looked good turned out to dead-end later on. Changes were taken on to further rounds not only on the basis of increased turnover, but for improved temperature and pH stability, tolerance to DMSO co-solvent, and so on. They ended up, over the entire process, screening a total of 36,480 variations, which is a hell of a lot, but is absolutely infinitesimal compared to the total number of possibilities. Narrowing that down to something feasible is, as I say, what Codexis is selling here.
And what came out the other end? Well, recall that the known enzymes all had zero activity, so it’s kind of hard to calculate improvement from that. Comparing to the first mutant that showed anything at all, they ended up with something that was about 27,000 times better. This has 27 mutations from the original known enzyme, so it’s a rather different beast. The final enzyme runs in DMSO/water, at loadings of up to 250 g/liter of starting material with 3 weight percent enzyme, and turns isopropylamine into acetone while it’s converting the prositagliptin ketone to product. It is completely stereoselective (they’ve never seen the other amine), and needless to say involves no hydrogen tanks and furnishes material that is not laced with rhodium metal.
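As a sanity check on that 27,000-fold figure, here's a rough productivity comparison using the numbers above. The final-round conversion and reaction time aren't quoted here, so the assumption of roughly complete conversion in 24 hours is mine:

```python
# Grams of product per gram of enzyme per day, first hit versus final enzyme.
# Final conversion/time are assumed (~complete in 24 h), not quoted above.
first_hit = {"substrate": 2.0,   "enzyme": 10.0,         "conversion": 0.007}
final     = {"substrate": 250.0, "enzyme": 250.0 * 0.03, "conversion": 1.0}

def productivity(v):
    return v["substrate"] * v["conversion"] / v["enzyme"]

fold = productivity(final) / productivity(first_hit)
print(f"about {fold:,.0f}-fold improvement")   # -> about 23,810-fold
```

Which lands in the same ballpark as the reported figure, so the numbers hang together.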
This is impressive stuff. You'll note, though, the rather large amount of grunt work that had to go into it - although keep in mind, the potential amount of grunt work would be more than the output of the entire human race. To date. Just for laughs, an exhaustive mutational analysis of twenty-seven positions would give you 1.3 times ten to the thirty-fifth possibilities to screen, and that's if you already know which twenty-seven positions you're going to want to look at. One microgram of each of them would add up to the mass of a couple of dozen Earths, not counting the vials. Not happening.
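Here's that last calculation spelled out, with the Earth's mass thrown in as the only outside number:

```python
# Exhaustive mutagenesis at 27 positions, one microgram per variant.
variants = 20 ** 27          # about 1.34e35 possibilities
microgram_kg = 1e-9          # one microgram, in kilograms
earth_kg = 5.97e24           # mass of the Earth

total_kg = variants * microgram_kg
print(f"{variants:.2e} variants, "
      f"weighing about {total_kg / earth_kg:.0f} Earths")
# -> 1.34e+35 variants, weighing about 22 Earths
```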
Also note that this is the sort of thing that would only be done industrially, in an applied research project. Think about it: why else would anyone go to this amount of trouble? The principle would have been proven a lot earlier in the process, and the improvements even part of the way through still would have been startling enough to get your work published in any journal in the world and all your grants renewed. Academically, you'd have to be out of your mind to carry things to this extreme. But Merck needs to make sitagliptin, and needs a better way to do that, and is willing to pay a lot of money to accomplish that goal. This is the kind of research that can get done in this industry. More of this, please!
Category: Biological News | Chemical Biology | Chemical News | Drug Development
September 23, 2010
I agree with many of the commenters around here that one of the most interesting and productive research frontiers in organic chemistry is where it runs into molecular biology. There are so many extraordinary tools that have been left lying around for us by billions of years of evolution; not picking them up and using them would be crazy.
Naturally enough, the first uses have been direct biological applications - mutating genes and their associated proteins (and then splicing them into living systems), techniques for purification, detection, and amplification of biomolecules. That's what these tools do, anyway, so applying them like this isn't much of a shift (which is one reason why so many of these have been able to work so well). But there's no reason not to push things further and find our own uses for the machinery.
Chemists have been working on that for quite a while. We look at enzymes and realize that these are the catalysts that we really want: fast, efficient, selective, working at room temperature under benign conditions. If you want molecular-level nanotechnology (not quite down to atomic!), then enzymes are it. The ways that they manipulate their substrates are the stuff of synthetic organic daydreams: hold down the damn molecule so it stays in one spot, activate that one functional group because you know right where it is and make it do what you want.
All sorts of synthetic enzyme attempts have been made over the years, with varying degrees of success. None of them have really approached the biological ideals, though. And in the "if you can't beat 'em, join 'em" category, a lot of work has gone into modifying existing enzymes to change their substrate preferences, product distributions, robustness, and turnover. This isn't easy. We know the broad features that make enzymes so powerful - or we think we do - but the real details of how they work, the whole story, often isn't easy to grasp. Right, that oxyanion hole is important: but just exactly how does it change the energy profile of the reaction? How much of the rate enhancement is due to entropic factors, and how much to enthalpic ones? Is lowering the energy of the transition state the key, or is it also a subtle raising of the energy of the starting material? What energetic prices are paid (and earned back) by the conformational changes the protein goes through during the catalytic cycle? There's a lot going on in there, and each enzyme avails itself of these effects differently. If it weren't such a versatile toolbox, the tools themselves wouldn't come out being so darn versatile.
There's a very interesting paper that's recently come out on this sort of thing, to which I'll devote a post by itself. But there are other biological frontiers beside enzymes. The machinery to manipulate DNA is exquisite stuff, for example. For quite a while, it wasn't clear how we organic chemists could hijack it for our own uses - after all, we don't spend a heck of a lot of time making DNA. But over the years, the technique of adding DNA segments onto small molecules and thus getting access to tools like PCR has been refined. There are a number of applications here, and I'd like to highlight some of those as well.
Then you have things like aptamers and other recognition technologies. These are, at heart, ways to try to recapitulate the selective binding that antibodies are capable of. All sorts of synthetic-antibody schemes have been proposed - from manipulating the native immune processes themselves, to making huge random libraries of biomolecules and zeroing in on the potent ones (aptamers), to completely synthetic polymer creations. There's a lot happening in this field, too, and the applications to analytical chemistry and purification technology are clear. This stuff starts to merge with the synthetic enzyme field after a point, too, and as we understand more about enzyme mechanisms, that process looks to continue.
So those are three big areas where molecular biology and synthetic chemistry are starting to merge. There are others - I haven't even touched here on in vivo reactions and activity-based proteomics, for example, which is great stuff. I want to highlight these things in some upcoming posts, both because the research itself is fascinating, and because it helps to show that our field is nowhere near played out. There's a lot to know; there's a lot to do.
Category: Analytical Chemistry | Biological News | Chemical News | General Scientific News | Life As We (Don't) Know It
August 26, 2010
The Vinca alkaloids are some of the most famous chemotherapy drugs around - vincristine and vinblastine, the two most widely used, are probably shown in every single introduction to natural products chemistry that's been written in the past fifty years. But making them synthetically is a bear, and extracting them from the plant is a low-yielding pain.
A new paper in PNAS shows that there's still a lot that we don't know about these compounds. What has been known for a long time is that they're derived from two precursor alkaloids, vindoline and catharanthine. This new work shows that the plants deliberately keep those two compounds separated from each other, which helps account for the low yield of the final compounds.
As it turns out, if you dip the leaves in chloroform, which dissolves the waxy coating from the surface, you find that basically all the catharanthine is there. At the same time, even soaking the leaves in chloroform for as long as an hour hardly extracts any vindoline - it's sequestered away inside the cells of the leaves. The enzymes responsible for biosynthesis are probably also in different locations (or cell types), and there are unknown transport mechanisms involved as well. This is the first time anyone's found such a secreted alkaloid mechanism.
Why does Vinca go to all the trouble? For one thing, catharanthine is a defense against insect pests, and it also seems to inhibit attack by fungal spores. And what the vindoline is doing, I'm not sure - but the plant probably has a good reason to keep it away from the catharanthine, because producing too much vincristine, vinblastine, etc. would probably kill off its dividing cells, the same way it works in chemotherapy.
The authors suggest that people should start looking around to see if other plants have similar secretion mechanisms. And this makes me wonder if this could be a way to harvest natural products - do the plants survive after having their leaves dipped in solvent? If they do, do they then re-secrete more natural waxes to catch up? I'm imagining a line of plants, growing in pots on some sort of conveyor line, flipping upside down for a quick wash-and-shake through a trough of chloroform, and heading back into the greenhouse. . .but then, I have a vivid imagination. . .
Category: Biological News | Natural Products
August 18, 2010
News like today's gamma-secretase failure makes me want to come down even harder on stuff like this. Ray Kurzweil, whom I've written about before, seems to be making ever-more-optimistic predictions with ever-more-shortened timelines. This time, he's saying that reverse-engineering the human brain may be about a decade away.
I hope he's been misquoted, or that I'm not understanding him correctly. But some of his other statements from this same talk make me wonder:
Here's how that math works, Kurzweil explains: The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes, according to Kurzweil.
About half of that is the brain, which comes down to 25 million bytes, or a million lines of code.
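Just to show where those numbers come from, here's the arithmetic laid out (the 25-bytes-per-line figure at the end is my assumption, back-calculated to recover his "million lines of code"):

```python
# Kurzweil's genome-to-code arithmetic, step by step.
base_pairs = 3e9
bits = base_pairs * 2             # 2 bits encode one of 4 bases
raw_bytes = bits / 8              # 750 MB; he rounds up to ~800 MB
compressed = 50e6                 # his claimed lossless-compression figure
brain_share = compressed / 2      # "about half of that is the brain"

print(f"raw: {raw_bytes / 1e6:.0f} MB, brain share: {brain_share / 1e6:.0f} MB")
print(f"{brain_share / 25:,.0f} lines at an assumed 25 bytes per line")
# -> raw: 750 MB, brain share: 25 MB
# -> 1,000,000 lines at an assumed 25 bytes per line
```

The multiplication all works; it's the premises that don't.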
This is hand-waving, and at a speed compatible with powered flight. It would be much less of a leap to say that the Oxford English Dictionary and a grammar textbook are sufficient to write the plays that Shakespeare didn't get around to. And while I don't believe that the brain is a designed artifact like The Tempest (or Tempest II: The Revenge of Caliban), I do most certainly believe that it is an object whose details will keep us busy for more than ten years.
Saying that its entire design is in the genome is deeply silly, mistaken, and misleading. The information in the genome takes advantage of so much downstream processing and complexity in a way that no computer program ever has, and that makes comparing it to lines of code laughable. I mean, lines of code have basically one level of reality to them: they're instructions to deal with data. But the genomic code is a set of instructions to make another set of instructions (RNA), which tells how to make another even more complex pile of multifunctional tools (proteins), which go on to do a bewildering variety of other things. And each of these can feed back on themselves, co-operate with and modulate the others in real time, and so on. Billions of years of relentless pressure (work well, or die) have shaped every intricate detail. The result makes the most complex human designs look like toys.
So here I am, absolutely stunned and delighted when I can make tiny bits of this machinery alter their course in a way that doesn't make the rest of it fall to pieces - a feat that takes years of unrelenting labor and hundreds of millions of dollars. And Ray Kurzweil is telling me that it's all just code. And not that much code, either. Have it broken down soon we will, no sweat. Sure.
I see that PZ Myers has come to the same conclusion. I don't see how anyone who's ever worked in molecular biology, physiology, cell biology, or medicinal chemistry could fail to, honestly. . .
Category: Biological News | The Central Nervous System
August 9, 2010
David Baker's lab at the University of Washington has been working on several approaches to protein structure problems. I mentioned Rosetta@home here, and now the team has published an interesting paper on another one of their efforts, Foldit.
That one, instead of being a large-scale passive computation effort, is more of an active process - in fact, it's active enough that it's designed as a game:
We hypothesized that human spatial reasoning could improve both the sampling of conformational space and the determination of when to pursue suboptimal conformations if the stochastic elements of the search were replaced with human decision making while retaining the deterministic Rosetta algorithms as user tools. We developed a multiplayer online game, Foldit, with the goal of producing accurate protein structure models through gameplay. Improperly folded protein conformations are posted online as puzzles for a fixed amount of time, during which players interactively reshape them in the direction they believe will lead to the highest score (the negative of the Rosetta energy). The player’s current status is shown, along with a leader board of other players, and groups of players working together, competing in the same puzzle.
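In other words, the game is a human-friendly front end on the Rosetta energy function. Here's a minimal sketch of that scoring convention, with completely made-up numbers:

```python
# Foldit-style scoring: the game score is the negative of the Rosetta
# energy, so lower-energy conformations rank higher. Energies here are
# invented for illustration.
submissions = {
    "player_a": [-312.4, -355.1],
    "player_b": [-340.8],
    "player_c": [-298.0, -351.7],
}

best = {p: max(-e for e in energies)        # score = -energy
        for p, energies in submissions.items()}
for player, score in sorted(best.items(), key=lambda kv: -kv[1]):
    print(f"{player}: {score:.1f}")
# -> player_a tops the board at 355.1 (i.e., energy -355.1)
```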
So how's it working out? Pretty well, actually. It turns out that human players are willing to do more extensive rearrangements to the protein chains in the quest for lower energies than computational algorithms are. They're also better at evaluating which positions to start from. Both of these remind me of the differences between human chess play and machine play, as I understand them, and probably for quite similar reasons. Baker's team is trying to adapt the automated software to use some of the human-style approaches, when feasible.
There are several dozen participants who clearly seem to have done better in finding low-energy structures than the rest of the crowd. Interestingly, they're mostly not employed in the field, with "Business/Financial/Legal" making up the largest self-declared group in a wide range of fairly evenly distributed categories. Compared to the "everyone who's played" set, the biggest difference is that there are far fewer students in the high-end group, proportionally. That group of better problem solvers also tends to be slightly more female (although both groups are still mostly men), definitely older (that loss of students again), and less well-stocked with college graduates and PhDs. Make of that what you will.
Their conclusion is worth thinking about, too:
The solution of challenging structure prediction problems by Foldit players demonstrates the considerable potential of a hybrid human–computer optimization framework in the form of a massively multiplayer game. The approach should be readily extendable to related problems, such as protein design and other scientific domains where human three-dimensional structural problem solving can be used. Our results indicate that scientific advancement is possible if even a small fraction of the energy that goes into playing computer games can be channelled into scientific discovery.
That's crossed my mind, too. In my more pessimistic moments, I've imagined the human race gradually entertaining itself to death, or at least to stasis, as our options for doing so become more and more compelling. (Reading Infinite Jest a few years ago probably exacerbated such thinking). Perhaps this is one way out of that problem. I'm not sure that it's possible to make a game compelling enough when it's hooked up to some sort of useful gear train, but it's worth a try.
Category: Biological News | In Silico | Who Discovers and Why
July 29, 2010
Craig Venter has never been a person to keep a lot of things bottled up inside him. But check out this interview with Der Spiegel for even more candor than usual. For instance:
SPIEGEL: Some scientists don't rule out a belief in God. Francis Collins, for example …
Venter: … That's his issue to reconcile, not mine. For me, it's either faith or science - you can't have both.
SPIEGEL: So you don't consider Collins to be a true scientist?
Venter: Let's just say he's a government administrator.
There's more where that came from. The title is "We Have Learned Nothing From the Genome", and it just goes right on from there. Well worth a look.
Category: Biological News
July 7, 2010
Time to revisit the chronic fatigue/XMRV controversy, because it's become even crazier. To catch up, there was a 2009 report in Science that this little-known virus correlated strongly with the clinical syndrome in patients. Criticism was immediate, with several technical comments and rebuttals coming out in the journal. Then researchers from the UK and Holland strongly challenged the original paper's data and said that they could not reproduce anything like it.
Recently I (and a lot of other people who write about science) received an e-mail claiming that a paper was about to come out from a group at the NIH that confirmed the first report. I let that one go by, since I thought I'd wait for, you know, the actual paper (for one thing, that would let me be sure that there really was one). Now Science reports that yes, there is such a manuscript. But. . .
Science has learned that a paper describing the new findings, already accepted by the Proceedings of the National Academy of Sciences (PNAS), has been put on hold because it directly contradicts another as-yet-unpublished study by a third government agency, the U.S. Centers for Disease Control and Prevention (CDC). That paper, a retrovirus scientist says, has been submitted to Retrovirology and is also on hold; it fails to find a link between the xenotropic murine leukemia virus-related virus (XMRV) and CFS. The contradiction has caused "nervousness" both at PNAS and among senior officials within the Department of Health and Human Services, of which all three agencies are part, says one scientist with inside knowledge.
I'll bet it has! It looks like the positive findings are from Harvey Alter at NIH, and the negative ones are from William Switzer at the CDC. Having two separate government labs blatantly contradict each other - simultaneously, yet - is what everyone seems to be trying to avoid. Sounds to me like each lab is going to have to try the other's protocols before this one gets ironed out.
I wouldn't be expecting either paper to appear any time soon, if that's the case.
Update: Well, as it turns out, the Retrovirology paper has been published - so what's holding up PNAS? Might as well get them both out so everyone can compare. . .
Category: Biological News | Infectious Diseases
June 29, 2010
Now, this could get quite interesting. A recent paper in PNAS talks about "downsizing" biologically active proteins to much shorter mimics of the alpha-helical parts of their structures. These show a good deal more stability than the parent proteins, along with a sometimes startling amount of biological activity.
The building block for all this is the smallest helical peptide yet reported, a cyclic pentapeptide (KAAAD) cyclized as a lactam between residues 1 and 5. Joining two or more of these up gives you more turns, and replacing the alanines gives you plenty of possible mimics of endogenous proteins. An analog of nociceptin turned out to be the most potent agonist at ORL-1 ever described (40 picomolar), and an analog of RSV fusion protein is, in its turn, the most potent inhibitor of that viral fusion process ever found as well.
Meanwhile, the paper states that these constrained peptides were stable in human serum for over 24 hours, in sharp contrast to their uncyclized counterparts, which are degraded rapidly. (Exocyclic amino acids, when present, do get trimmed off over a span of hours, though).
I'm quite amazed by all this, and I'm still processing it myself. I'll let the authors have the last word for now:
"This work is a blueprint for design and utility of constrained α-helices that can substitute for α-helical protein sequences as short as five amino acids. . .The promising conformational and chemical stability suggests many diverse applications in biology as molecular probes, drugs, diagnostics, and possibly even vaccines. The constrained peptides herein offer similar binding affinity and/or function as the proteins from which they were derived, with the same amino acid sequences that confer specificity, while retaining stability and solubility akin to small molecule therapeutics. . ."
Category: Biological News
May 20, 2010
As had been widely expected, Craig Venter's team has announced the production of an organism with a synthetic genome. All the DNA in these new mycoplasma cells was first made on synthesizer machines (in roughly 6-kilobase stretches), then assembled enzymatically and finally stitched together in yeast into working chromosomes.
And we know that they work, because they then transplanted them into mycoplasma and ended up with a new species. The cells grow normally, with the same morphology as wild-type, and sequencing them shows only the synthetic genome - which, interestingly, has several "watermark" sequences embedded in it, a practice that this team strongly recommends future researchers in this area follow. In this case, there's a coded version of the names of the team members, a URL, and an e-mail address if you manage to decipher things.
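The team is leaving their actual cipher as a puzzle, but the general trick - map text onto nucleotide triplets and splice the result into a noncoding stretch - is simple enough to sketch. Everything below (the character set, the three-bases-per-character code, the example string) is my own invention for illustration, not the JCVI scheme:

```python
# Toy DNA watermark: write each character as a base-4 numeral in A/C/G/T,
# three "digits" per character (4^3 = 64 possible symbols - plenty for
# letters, digits, and some punctuation). Invented cipher, invented text.
BASES = "ACGT"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 .@:/-"

def encode(text):
    out = []
    for ch in text.upper():
        n = ALPHABET.index(ch)
        out.append(BASES[n // 16] + BASES[(n // 4) % 4] + BASES[n % 4])
    return "".join(out)

def decode(dna):
    chars = []
    for i in range(0, len(dna), 3):
        a, b, c = (BASES.index(x) for x in dna[i:i + 3])
        chars.append(ALPHABET[16 * a + 4 * b + c])
    return "".join(chars)

watermark = encode("WWW.EXAMPLE.ORG")
print(watermark)  # a DNA stretch ready to drop into a noncoding site
assert decode(watermark) == "WWW.EXAMPLE.ORG"
```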
Nothing about this process was trivial - the team apparently worked for months on just the last genomic transplantation step until things finally lined up right. But a lot has been learned from this effort, and the next ones will be easier. I'm not sure if I call this a synthetic organism or not, since the cytoplasm (and all its machinery) was already there. But whatever it is, it sure has a synthetic genome, designed on a screen and built by machine. And it works, and more will surely follow. Will 2010, looking back, be the year that things changed?
Category: Biological News
April 30, 2010
Many readers will have heard of Rosetta@Home. It's a distributed-computing approach to protein folding problems, which is certainly an area that can absorb all the floating-point operations you can throw at it. It's run from David Baker's lab at the University of Washington, and has users all over the world contributing.
A reader sends along news that the project recently seems to have come across a good hit in one of its areas: proteins designed to bind to the surface of influenza viruses. It looks like they have one with tight binding to an area of the virus associated with cell entry, so the next step will be to see if this actually prevents viral infection in a cell assay.
At that point, though, I have to step in as a medicinal chemist and ask what the next step after that could be. It won't be easy to turn that into any sort of therapy, as Prof. Baker makes clear himself:
Being able to rapidly design proteins which bind to and neutralize viruses and other pathogens would definitely be a significant step towards being able to control future epidemics. However, in itself it is not a complete solution because there is a problem in making enough of the designed proteins to give to people--each person would need a lot of protein and there are lots of people!
We are also working on designing new vaccines, but the flu virus binder is not a vaccine, it is a virus blocker. Vaccines work by mimicking the virus so your body makes antibodies in advance that can then neutralize the virus if you get infected later. the designed protein, if you had enough of it, should block the flu virus from getting into your cells after you had been exposed; a vaccine cannot do this.
One additional problem is that the designed protein may elicit an antibody response from people who are treated with it. in this case, it could be a one time treatment but not used chronically.
The immune response is definitely a concern, but that phrase "If you had enough of it" is probably the big sticking point. Most proteins don't fare so well when dosed systemically, and infectious disease therapies are notorious for needing whopping blood levels to be effective. At the same time, there's Fuzeon (enfuvirtide), a good-sized peptide drug (26 amino acids) against HIV cell entry. It was no picnic to develop, and its manufacturing was such an undertaking that it may have changed the whole industry, but it is out there.
My guess is that Rosetta@Home is more likely to make a contribution to our knowledge of protein folding, which could be broadly useful. I'd also think that vaccine design would be a likelier place for the project to come up with something of clinical interest. Virus-blocking proteins like this one, though, probably have the lowest probability of success. The best I can see coming out of them is more insight into protein-protein interfaces - which is not trivial, for sure, but it's not the next thing to an active drug, either.
Category: Biological News | Drug Development | Infectious Diseases
April 29, 2010
In keeping with the problem discussed here ("sticky containers"), there's a report that a lot of common spectrometric DNA assays may have been affected by leaching of various absorbing contaminants from plastic labware. If the published work is shown relative to control tubes, things should be (roughly) OK, but if not, well. . .who knows? Especially if the experiments were done using the less expensive tubes, which seem to be more prone to emitting gunk.
We take containers for granted in most lab situations, but we really shouldn't. Everything - all the plastics, all the types of glass, all the metals - is capable of causing trouble under some conditions. And it tends to sneak up on us when it happens. (Of course, there are more, well, noticeable problems with plastics in the organic chemistry lab, but that's another story. Watch out for the flying cork rings!)
Category: Biological News | Life in the Drug Labs
Here's something I never knew: odors can regulate lifespan. Well, in fruit flies, anyway - a group at Baylor published results in 2007 showing that exposure to food-derived odors (yeast smells, in the case of Drosophila) partially cancels out the longevity-inducing effects of caloric restriction. Normally fed flies showed no effect.
That 2007 paper identified a specific sensory receptor (Or83b) as modulating the effect of odor on lifespan. Now comes a report that another receptor has been tracked down in this case, the G-protein coupled Gr63a. Flies missing this particular olfactory GPCR no longer show the lifespan sensitivity to yeast odors. This narrows things down. Or83b mutations seem to broadly affect sensory response in flies, but this is a much more specific receptor, just one of a great many similar ones:
"Unlike previous reports involving more general olfactory manipulations, extended longevity via loss of Gr63a occurs through a mechanism that is likely independent of dietary restriction. We do, however, find that Gr63a is required for odorants from live yeast to affect longevity, suggesting that with respect to lifespan, CO2 is an active component of this complex odor. Because Gr63a is expressed in a highly specific population of CO2-sensing neurons (the ab1C neurons) that innervate a single glomerulus in the antennal lobe (the V glomerulus), these data implicate a specific sensory cue and its associated neurosensory circuit as having the ability to modulate fly lifespan and alter organismal stress response and physiology. Our results set the stage for the dissection of more complex neurosensory and neuroendocrine circuits that modulate aging in Drosophila. . ."
It's going to be very interesting to follow that neuronal pathway - I've no idea where it will lead, but we're bound to learn something worthwhile. To make a wild generalization straight up to humans, this makes me wonder about people who are practicing caloric restriction on themselves - they're still exposed to food odors all the time, right? Does the same reversal apply? For me, I think that the scent of barbecue and fried catfish might be enough to do it right there, but keep in mind that I'm from Arkansas. Your mileage may vary.
Category: Aging and Lifespan | Biological News
April 28, 2010
I wrote here some time ago about human cells actually making their own morphine - real morphine, the kind that everyone thought was only produced in poppy plants. Now there's a paper in PNAS where various deuterium-labeled precursors of morphine were dosed in rats, and in each case the animals converted the compound to the next intermediate in the known biosynthesis. The yields were small, since each compound was metabolically degraded as well, but it appears that rats are capable of carrying out every step of a morphine synthesis from at least the isoquinoline compound tetrahydropapaveroline (THP).
And that's pretty interesting, because it's also been established that rats have small amounts of THP in their brains and other tissues - as do humans. And humans, it appears, almost always have trace amounts of morphine in the urine - which leads one to think that our bodies may well, in fact, be making it themselves.
Why that's happening is quite another question, and where the THP comes from is another one. Working under the assumption that all this machinery is not just there for the heck of it, you also wonder if this system could be the source of one or more drug targets (I spoke about that possibility here). What you probably don't want to assume is that these targets would necessarily have to do with pain. We still don't know if there's room to work in here. But it's worth thinking about, if for no other reason than to remind ourselves that there are plenty of things going on inside the human body that we don't understand at all.
Category: Biological News | The Central Nervous System
April 27, 2010
I've said several times that I think that mass spectrometry is taking over the analytical world, and there's more evidence of that in Angewandte Chemie. A group at Justus Liebig University in Giessen has built what has to be the finest imaging mass spec I've ever seen. It's a MALDI-type machine, which means that a small laser beam does the work of zapping ions off the surface of the sample. But this one has better spatial resolution than anything reported so far, and they've hooked it up to a very nice mass spec system on the back end. The combination looks to me like something that could totally change the way people do histology.
For the non-specialist readers in the audience, mass spec is a tremendous workhorse of analytical chemistry. Basically, you use any of a whole range of techniques (lasers, beams of ions, electric charges, etc.) to blast individual molecules (or their broken parts!) down through a chamber and determine how heavy each one is. Because those masses can be measured so precisely, this lets you identify a lot of molecules by both their whole weights - their "molecular ions" - and by their various fragments. Imagine some sort of crazy disassembler machine that rips things - household electronic gear, for example - up into pieces and weighs every chunk, occasionally letting a whole untouched unit through. You'd see the readouts and say "Ah-hah! Big one! That was a plasma TV, nothing else is up in that weight range. . .let's see, that mix of parts coming off it means that it must have been a Phillips model so-and-so; they always break up like that, and this one has the heavier speakers on it." But mass spec isn't so wasteful, fortunately: it doesn't take much sample, since there are such gigantic numbers of molecules in anything large enough to see or weigh.
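In software terms, that "Ah-hah, plasma TV!" step is just a tolerance lookup against a table of known masses. A minimal sketch, with a made-up candidate table (the masses are approximate [M+H]+ values) and an assumed 5 ppm instrument accuracy:

```python
# Exact-mass identification: match a measured m/z against candidate ions
# within a ppm tolerance. The table and the cutoff are illustrative only.
CANDIDATES = {
    "caffeine [M+H]+": 195.0877,
    "glucose [M+H]+": 181.0707,
    "vasopressin [M+H]+": 1084.4455,
}

def identify(mz, tol_ppm=5.0):
    return [name for name, mass in CANDIDATES.items()
            if abs(mz - mass) / mass * 1e6 <= tol_ppm]

print(identify(195.0879))  # -> ['caffeine [M+H]+']
```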
Take a look at this image. That's a section of a mouse pituitary gland - on the right is a standard toluidine-blue stain, and on the left is the same tissue slice as imaged (before staining) by the mass spec. The green and blue colors are two different mass peaks (826.5723 and 848.5566, respectively), which correspond to different types of phospholipid from the cell membranes. (For more on such profiling, see here). The red corresponds to a mass peak for the hormone vasopressin. Note that the difference in phospholipid peaks completely shows the difference between the two lobes of the gland (and also shows an unnamed zone of tissue around the posterior lobe, which you can barely pick up in the stained preparation). The vasopressin is right where it's supposed to be, in the center of the posterior lobe.
One of the most interesting things about this technique is that you don't have to know any biomarkers up front. The mass spec blasts away at each pixel's worth of data in the tissue sample and collects whatever pile of varied molecular-weight fragments comes off. Then the operator is free to choose ions that show useful contrasts and patterns (I can imagine software algorithms that would do the job for you - pick two parts of an image and have the machine search for whatever differentiates them). For instance, it's not at all clear (yet) why those two different phospholipid ions do such a good job at differentiating out the pituitary lobes - what particular phospholipids they correspond to, why the different tissues have this different profile, and so on. But they do, clearly, and you can use that to your advantage.
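That contrast-picking software, for what it's worth, wouldn't be hard to rough out: score every mass channel by how well it separates two user-selected regions, and hand back the best discriminators. Here's a sketch of the idea - the t-statistic scoring and the data layout are my own assumptions, not anything from the paper:

```python
import numpy as np

def contrast_ions(spectra, region_a, region_b, top=5):
    """Rank mass channels by how well they separate two tissue regions.

    spectra            -- (n_pixels, n_channels) intensity array
    region_a, region_b -- indices of the pixels the user circled
    """
    a, b = spectra[region_a], spectra[region_b]
    # Welch-style t-statistic per channel; large |t| = useful contrast ion
    t = (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(
        a.var(axis=0) / len(a) + b.var(axis=0) / len(b) + 1e-12)
    return np.argsort(-np.abs(t))[:top]

# Fake imaging run: 200 pixels x 1000 channels, with channel 42 genuinely
# brighter in region A. The ranking should put 42 first.
rng = np.random.default_rng(0)
spectra = rng.poisson(5.0, size=(200, 1000)).astype(float)
spectra[:100, 42] += 20.0
print(contrast_ions(spectra, np.arange(100), np.arange(100, 200)))
```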
As this technique catches on, I expect to see large databases of mass-based "contrast settings" develop as histologists find particularly useful readouts. (Another nice feature is that one can go back to previously collected data and re-process for whatever interesting things are discovered later on). And each of these suggests a line of research all its own, to understand why the contrast exists in the first place.
The second image shows ductal carcinoma in situ. On the left is an optical image, and about all you can say is that the darker tissue is the carcinoma. The right-hand image is colored by green (mass of 529.3998) and red (mass of 896.6006), which correspond to healthy and cancerous tissue, respectively (and again, we don't know why, yet). But look closely and you can see that some of the dark tissue in the optical image doesn't actually appear to be cancer - and some of the dark spots in the lighter tissue are indeed small red cells of trouble. We may be able to use this technology to diagnose cancer subtypes more accurately than ever before - the next step will be to try this on a number of samples from different patients to see how much these markers vary. I also wonder if it's possible to go back to stored tissue samples and try to correlate mass-based markers with the known clinical outcomes and sensitivities to various therapies.
I'd also be interested in knowing if this technique is sensitive enough to find small-molecule drugs after dosing. Could we end up doing pharmacokinetic measurements on a histology-slide scale? Ex vivo, could we possibly see uptake of our compounds once they're applied to a layer of cells in tissue culture? Oh, mass spec imaging has always been a favorite of mine, and seeing this level of resolution just brings on dozens of potential ideas. I've always had a fondness for label-free detection techniques, and for methods that don't require you to know too much about the system before being able to collect useful data. We'll be hearing a lot more about this, for sure.
Update: I should note that drug imaging has certainly been accomplished through mass spec, although it's often been quite the pain in the rear. It's clearly a technology that's coming on, though.
Category: Analytical Chemistry | Biological News | Cancer | Drug Assays
April 8, 2010
A very weird news item: multicellular organisms that appear to be able to live without oxygen. They're part of the little-known (and only recently codified) phylum Loricifera, and these particular organisms were collected at the bottom of the Mediterranean, in a cold, anoxic, hypersaline environment.
They have no mitochondria - after all, they don't have any oxygen to work with. Instead, they have what look like hydrogenosome organelles, producing hydrogen gas and ATP from pyruvate. I'm not sure how large an organism you can run off that sort of power source, since it looks like you only get one ATP per pyruvate (as opposed to the dozen or so you can get by sending it through the Krebs cycle and oxidative phosphorylation), but the upper limit has just been pushed past a significant point.
Category: Biological News | General Scientific News | Life As We (Don't) Know It
For people who've done work on metabolic disease, this paper in PNAS may come as a surprise, although there was a similar warning in January of this year. Acetyl-CoA carboxylase 2 (ACC2) has been seen for some years as a target in that area. It produces malonyl-CoA, which is a very important intermediate and signaling molecule in fatty acid metabolism (and other places as well). A number of drug companies have taken a crack at getting good chemical matter (I'm no stranger to it myself, actually). A lot of the interest was sparked by reports of the gene knockout mice, which seem to have healthy appetites but put on no weight. The underlying reason was thought to be that fatty acid oxidation had been turned up in their muscle and adipose tissue - and a new way to burn off excess lipids sounded like something that a lot of people with excess weight and/or dyslipidemia might be able to use. What's more, the ACC2 knockout mice also seemed to be protected from developing insulin resistance, the key metabolic problem in type II diabetes. An ACC2 inhibitor sounds like just the thing.
Well, this latest paper sows confusion all over that hypothesis. The authors report having made some selective ACC2 knockout mouse strains of their own. If the gene is inactivated only in muscle tissue, the animals show no differences at all in body weight, composition, or food intake compared to control mice. What's more, when they went back and inactivated ACC2 in the whole animal, they found the same no-effect result, whether the animals were fed on standard chow or a high-fat diet. The muscle tissue in both cases showed no sign of elevated fatty acid oxidation. The authors state drily that "The limited impact of Acc2 deletion on energy balance raises the possibility that selective pharmacological inhibition of Acc2 for the treatment of obesity may be ineffective."
Yes, yes, it does. There's always the possibility that some sort of compensating mechanism kicked in as the knockout animals developed, something that might not be available if you just stepped into an adult animal with an inhibiting drug. That's always the nagging doubt when you see no effect in a knockout mouse. But considering that those numerous earlier reports of knockout mice showed all kinds of interesting effects, you have to wonder just what the heck is going on here.
Well, the authors of the present paper are wondering the same thing, as are, no doubt, the authors of that January Cell Metabolism work. They saw no differences in their knockout animals, either, which started the rethinking of this whole area. (To add to the confusion, those authors reported seeing real differences in fatty acid oxidation in the muscle tissue of their animals, even though the big phenotypic changes couldn't be replicated). Phrases like "In stark contrast to previously published data. . ." make their appearance in this latest paper.
The authors do suggest one possible graceful way out. The original ACC2 knockout mice were produced somewhat differently, using a method that could have left production of a mutated ACC2 protein intact (without its catalytic domain). They suggest that this could possibly have some sort of dominant-negative effect. If there's some important protein-protein interaction that was wiped out in the latest work, but left intact in the original report, that might explain things - and if that's the case, then there still might be room for a small molecule inhibitor to work. But it's a long shot.
The earlier results originated from the lab of Salih Wakil at Baylor (who filed a patent on the animals), and he's still very much active in the area. One co-author, Gerry Shulman at Yale, actually spans both reports of ACC2 knockout mice - he was in on one of the Wakil papers, and on this one, too. His lab is very well known in diabetes and metabolic research, and while I'd very much like to hear his take on this whole affair, I doubt if we're going to see that in public.
Category: Biological News | Diabetes and Obesity
April 5, 2010
Last summer a paper was published (PDF) showing that rapamycin dosing appeared to lengthen lifespan in mice. (In that second link, I went more into the background of rapamycin and TOR signaling, for those who are interested). Now comes word that it also seems to prevent cognitive deficits in a mouse model of Alzheimer's.
The PDAPP mice have a mutation in their amyloid precursor protein associated with early-onset familial Alzheimer's in humans, and it's a model that's been used for some years now in the field. It's not perfect, but it's not something you can ignore, either, and the effects of rapamycin treatment do seem to be significant. (The paper uses the same dose that was found to extend lifespan). The hypothesis is that rapamycin allowed increased autophagy (protein digestion) to take place in the brain, helping to clear out amyloid plaques.
What I also found interesting, though, was the rapamycin-fed non-transgenic control animals. Across the various memory tests, they seem to show a trend toward increased performance, although they don't quite reach significance. This makes me wonder what the effects in humans might be, Alzheimer's or not. After that lifespan report last year, it wouldn't surprise me to find out that some people are taking the stuff anyway, but it's not going to be anywhere near enough of a controlled setting for us to learn anything.
This report is definitely going to start a lot of people thinking about experimenting with rapamycin for Alzheimer's - there are a lot of desperate patients and relatives out there. But together with that lifespan paper, it might also start some people thinking about it whether they're worried about Alzheimer's or not.
Category: Aging and Lifespan | Alzheimer's Disease | Biological News
What to make of the case of Becky McClain? She's a former Pfizer scientist who sued the company, claiming that she had been injured by exposure to engineered biological materials at work. She's just won her case in court, although Pfizer may well appeal the verdict. It's important to note that her most damaging claim, that the company engaged in willful misconduct, was thrown out at the beginning. The jury found that Pfizer had violated whistleblower laws and wrongfully terminated McClain as an employee.
But what I'd most like to know is whether the claim at the core of her case is true, and I don't think anyone knows that yet. McClain says that she was exposed to embryonic stem cells and to various engineered lentiviruses (due to poor lab technique on the part of co-workers, if I'm following the story correctly), and that this gave her a chronic, debilitating condition that has led to intermittent paralysis. More specifically, the theory that I've seen her legal team floating is that the lentivirus caused her tissues to express a new potassium channel, and that she has improved after taking "massive doses" of potassium. (Query: how massive are we talking here?).
Now, that's a potentially alarming thing. But that should also be potentially subject to scientific proof. This trial didn't address any of these issues, and McClain has been unable to get any traction with the court system or with OSHA on these claims. Looking around the internet, you find that some people are convinced that this is a cover-up, but (having seen OSHA in action) I'm more likely to think that if you can't get them to bite, then you probably don't have much for them to get their teeth into. I also note that the symptoms that have been described in this case are similar to many that have been ascribed in the past to psychosomatic illness. I can't say that that's what's going on here, of course, but it does complicate the issue.
The other problem I have is that such human illness from a biotech viral vector is actually a very rare event, with every case that I can think of being a deliberate attempt at gene therapy. Industry scientists don't work with human-infectious viruses without good cause, but there's still an awful lot of work that goes on with agents that most certainly can infect people (hepatitis and so on). And although I'm sure that there have been cases (accidental needle sticks and the like), I don't know of any research infections with wild-type viruses, much less engineered ones.
Well, we may yet hear more about this, and I'll rethink the issue if more information becomes available. But for now, I have to say, whatever the other issues in the case, I'm inclined to doubt the engineered-viral-infection part of this story.
Category: Biological News
April 1, 2010
We're all going to be hearing a lot about nanoparticles in the next few years (some may feel as if they've already heard quite enough, but there's nothing to be done about that). The recent report of preliminary siRNA results using them as a delivery system will keep things moving along with even more interest. So it's worth checking out this new paper, which illustrates how we're going to have to think about these things.
The authors show that it's not necessarily the carefully applied coat proteins of these nanoparticles that are the first thing a cell notices. Rather, it's the second sphere of endogenous proteins that ends up associated with the particle, which apparently can be rather specific and persistent. The authors make their case with admirable understatement:
The idea that the cell sees the material surface itself must now be re-examined. In some specific cases the cell receptor may have a higher preference for the bare particle surface, but the time scale for corona unbinding illustrated here would still typically be expected to exceed that over which other processes (such as nonspecific uptake) have occurred. Thus, for most cases it is more likely that the biologically relevant unit is not the particle, but a nano-object of specified size, shape, and protein corona structure. The biological consequences of this may not be simple.
Update: fixed this post by finally adding the link to the paper!
Category: Biological News | Pharmacokinetics
March 30, 2010
Another promising Phase II oncology idea goes into the trench in Phase III: GenVec has been working on a gene-therapy approach ("TNFerade") to induce TNF-alpha expression in tumors. That's not a crazy idea, by any means, although (as with all attempts at gene therapy) getting it to work is extremely tricky.
And so it has proved in this case. It's been a long, hard process finding that out, too. Over the years, the company has looked at TNFerade for metastatic melanoma, soft tissue sarcoma, and other cancers. They announced positive data back in 2001, and had some more encouraging news on pancreatic cancer in 2006 (here's the ASCO abstract on that one). But last night, the company announced that an interim review of the Phase III trial data showed that the therapy was not going to make any endpoint, and the trial was discontinued. Reports are that TNFerade is being abandoned entirely.
This is bad news, of course. I'd very much like gene therapy to turn into a workable mode of treatment, and I'd very much like for people with advanced pancreatic cancer to have something to turn to. (It's truly one of the worst diagnoses in oncology, with a five-year survival rate of around 5%). A lot of new therapeutic ideas have come up short against this disease, and as of yesterday, we can add another one to the list. And we can add another Promising in Phase II / Nothing in Phase III drug to the list, too, the second one this week. . .
Category: Biological News | Cancer | Clinical Trials
March 25, 2010
In recent years, readers of the top-tier journals have been bombarded with papers on nanotechnology as a possible means of drug delivery. At the same time, there's been a tremendous amount of time and money put into RNA-derived therapies, trying to realize the promise of RNA interference for human therapies. Now we have what I believe is the first human data combining both approaches.
Nature has a paper from Caltech, UCLA, and several other groups with the first data on a human trial of siRNA delivered through targeted nanoparticles. This is only the second time siRNA has been tried systemically on humans at all. Most of the previous clinical work has involved direct injection of various RNA therapies into the eye (which is a much less hostile environment than the bloodstream), but in 2007, a single Gleevec-resistant leukemia patient was dosed in a nontargeted fashion.
In this study, metastatic melanoma patients, a population that is understandably often willing to put themselves out at the edge of clinical research, were injected with engineered nanoparticles from Calando Pharmaceuticals, containing siRNA against the ribonucleotide reductase M2 (RRM2) target, which is known to be involved in malignancy. The outside of the particles contained a protein ligand to target the transferrin receptor, an active transport system known to be upregulated in tumor cells. And this was to be the passport to deliver the RNA.
A highly engineered system like this addresses several problems at once: how do you keep the RNA you're dosing from being degraded in vivo? (Wrap it up in a polymer - actually, two different ones in spherical layers). How do you deliver it selectively to the tissue of interest? (Coat the outside with something that tumor cells are more likely to recognize). How do you get the RNA into the cells once it's arrived? (Make that recognition protein something that gets actively imported across the cell membrane, dragging everything else along with it). This system had been tried out in models all the way up to monkeys, and in each case the nanoparticles could be seen inside the targeted cells.
And that was the case here. The authors report biopsies from three patients, pre- and post-dosing, that show uptake into the tumor cells (and not into the surrounding tissue) in two of the three cases. What's more, they show that a tissue sample has decreased amounts of both the targeted messenger RNA and the subsequent RRM2 protein. Messenger RNA fragments showed that this reduction really does seem to be taking place through the desired siRNA pathway (there's been a lot of argument over this point in the eye therapy clinical trials).
It should be noted, though, that this was only shown for one of the patients, in which the pre- and post-dosing samples were collected ten days apart. In the other responding patient, the two samples were separated by many months (making comparison difficult), and the patient who showed no evidence of nanoparticle uptake also showed, as you'd figure, no differences in their RRM2. Why patient A didn't take up the nanoparticles is as yet unknown, and since we only have these three patients' biopsies, we don't know how widespread this problem is. In the end, the really solid evidence is again down to a single human.
But that brings up another big question: is this therapy doing the patients any good? Unfortunately, the trial results themselves are not out yet, so we don't know. That two-out-of-three uptake rate, although a pretty small sample, could well be a concern. The only between-the-lines inference I can get is this: the best data in this paper is from patient C, who was the only one to do two cycles of nanoparticle therapy. Patient A (who did not show uptake) and patient B (who did) had only one cycle of treatment, and there's probably a very good reason why. These people are, of course, very sick indeed, so any improvement will be an advance. But I very much look forward to seeing the numbers.
Category: Biological News | Cancer | Clinical Trials | Pharmacokinetics
March 19, 2010
Here's the sort of thing we'll be seeing more and more of - on the whole, I think it's a good development, but it's certainly possible that one's mileage could vary:
Ginkgo’s BioBrick Assembly Kit includes the reagents for constructing BioBrick parts, which are nucleic acid sequences that encode a specific biological function and adhere to the BioBrick assembly standard. The kit, which includes the instructions for putting those parts together, sells for $235 through the New England BioLabs, an Ipswich, MA-based supplier of reagents for the life sciences industry.
Shetty didn't release any specific sales figures for the kit, but said its users include students, researchers, and industrial companies. The kit was also intended to be used in the International Genetically Engineered Machine competition (iGEM), in Cambridge, MA. The undergraduate contest, co-launched by Knight, challenges student teams to use the biological parts to build systems and operate them in living cells.
Category: Biological News
March 17, 2010
A small company called BioTime has gotten a lot of attention in the last couple of days after a press release about cellular aging. To give you an idea of the company's language, here's a quote:
"Normal human cells were induced to reverse both the "clock" of differentiation (the process by which an embryonic stem cell becomes the many specialized differentiated cell types of the body), and the "clock" of cellular aging (telomere length)," BioTime reports. "As a result, aged differentiated cells became young stem cells capable of regeneration."
Hey, that sounds good to me. But when I read their paper in the journal Regenerative Medicine, it seems to be interesting work that's a long way from application. Briefly - and since I Am Not a Cell Biologist, it's going to be brief - what they're looking at is telomere length in various stem cell lines. Telomere length is famously correlated with cellular aging - below a certain length, senescence sets in and the cells don't divide any more.
What's become clear is that a number of "induced pluripotent" cell lines have rather short telomeres as compared to their embryonic stem cell counterparts. You can't just wave a wand and get back the whole embryonic phenotype; their odometers still show a lot of wear. The BioTime people induced in such cells a number of genes thought to help extend and maintain telomeres, in an attempt to roll things back. And they did have some success - but only by brute force.
The exact cocktail of genes you'd want to induce is still very much in doubt, for one thing. And in the cell line that they studied, five of their attempts quickly shed telomere length back to the starting levels. One of them, though, for reasons that are completely unclear, maintained a healthy telomere length over many cell divisions. So this, while a very interesting result, is still only that. It took place in one particular cell line, in ways that (so far) can't be controlled or predicted, and the practical differences between this one clone and other similar cell lines still aren't clear (although you'd certainly expect some). It's worthwhile early-stage research, absolutely - but not, to my mind, worth this.
Category: Aging and Lifespan | Biological News | Business and Markets
March 15, 2010
There have been complaints that something is going wrong in the publication of stem cell research. This isn't my field, so I don't have a lot of inside knowledge to share, but there appear to have been a number of researchers charging that journals (and their reviewers) are favoring some research teams over others:
The journal editor decides to publish the research paper usually when the majority of reviewers are satisfied. But professors Lovell-Badge and Smith believe that increasingly some reviewers are sending back negative comments or asking for unnecessary experiments to be carried out for spurious reasons.
In some cases they say it is being done simply to delay or stop the publication of the research so that the reviewers or their close colleagues can be the first to have their own research published.
"It's hard to believe except you know it's happened to you that papers have been held up for months and months by reviewers asking for experiments that are not fair or relevant," Professor Smith said.
You hear these sorts of complaints a lot - everyone who's had a paper turned down by a high-profile journal is a potential customer for the idea that there's some sort of backroom dealing going on for the others who've gotten in. But just because such accusations are thrown around frequently doesn't mean that they're never true. I hate to bring the topic up again, but the "Climategate" leaks illustrate just how this sort of thing can be done. Groups of researchers really can try to keep competing work from being published. I just don't know if it's happening in the stem cell field or not.
Category: Biological News | The Dark Side | The Scientific Literature
March 12, 2010
The discoverer of prostate-specific antigen, Richard Ablin, has a most interesting Op-Ed in the New York Times. He's pointing out what people should already know: that using PSA as a screen for prostate cancer is not only useless, but actually harmful.
The numbers just aren't there, and Ablin is right to call it a "hugely expensive public health disaster". Some readers will recall the discussion here of a potential Alzheimer's test, which illustrates some of the problems that diagnostic screens can have. But that was for a case where a test seemed as if it might be fairly accurate (just not accurate enough). In the case of PSA, the link between the test and the disease hardly exists at all, at least for the general population. The test appears to have very little use in detecting prostate cancer, and early detection itself is notoriously unreliable as a predictor of outcomes in this disease.
The last time I had blood work done, I made a point of telling the nurse that she could check the PSA box if she wanted to, but I would pay no attention to the results. (I'd already come across Donald Berry's views on the test, and he's someone whose word I trust on biostatistics). I'd urge other male readers to do the same.
Category: Biological News | Cancer
Freeman Dyson has written about his belief that molecular biology is becoming a field where even basement tinkerers can accomplish things. Whether we're ready for it or not, biohacking is on its way. The number of tools available (and the amount of surplus equipment that can be bought) has him imagining a "garage biotech" future, with all the potential, for good and for harm, that that entails.
Well, have a look at this garage, which is said to be somewhere in Silicon Valley. I don't have any reason to believe the photos are faked; you could certainly put your hands on this kind of equipment very easily in the Bay area. The rocky state of the biotech industry just makes things that much more available. From what I can see, that's a reasonably well-equipped lab. If they're doing cell culture, there needs to be some sort of incubator around, and presumably a -80 degree freezer, but we don't see the whole garage, do we? I have some questions about how they do their air handling and climate control (although that part's a bit easier in a California garage than it would be in a Boston one). There's also the issue of labware and disposables. An operation like this does tend to run through a goodly amount of plates, bottles, pipet tips and so on, but I suppose those are piled up on the surplus market as well.
But what are these folks doing? The blog author who visited the site says that they're "screening for anti-cancer compounds". And yes, it looks as if they could be doing that, but the limiting reagent here would be the compounds. Cells reproduce themselves - especially tumor lines - but finding compounds to screen, that must be hard when you're working where the Honda used to be parked. And the next question is, why? As anyone who's worked in oncology research knows, activity in a cultured cell line really doesn't mean all that much. It's a necessary first step, but only that. (And how many different cell lines could these people be running?)
The next question is, what do they do with an active compound when they find one? The next logical move is activity in an animal model, usually a xenograft. That's another necessary-but-nowhere-near-sufficient step, but I'm pretty sure that these folks don't have an animal facility in the basement, certainly not one capable of handling immunocompromised rodents. So put me down as impressed, but puzzled. The cancer-screening story doesn't make sense to me, but is it then a cover for something else? What?
If this post finds its way to the people involved, and they feel like expanding on what they're trying to accomplish, I'll do a follow-up. Until then, it's a mystery, and probably not the only one of its kind out there. For now, I'll let Dyson ask the questions that need to be asked, from that NYRB article linked above:
If domestication of biotechnology is the wave of the future, five important questions need to be answered. First, can it be stopped? Second, ought it to be stopped? Third, if stopping it is either impossible or undesirable, what are the appropriate limits that our society must impose on it? Fourth, how should the limits be decided? Fifth, how should the limits be enforced, nationally and internationally? I do not attempt to answer these questions here. I leave it to our children and grandchildren to supply the answers.
Category: Biological News | Drug Assays | General Scientific News | Regulatory Affairs | Who Discovers and Why
March 9, 2010
Nature Biotechnology weighs in on the GSK/Sirtris controversy. They have a lot of good information, and I'm not just saying that because someone there has clearly read over the comments that have shown up on my posts on the subject. The short form:
The controversy over Sirtris drugs reached a tipping point in January with a publication by Pfizer researchers led by Kay Ahn showing that resveratrol activates SIRT1 only when linked to a fluorophore. Although Ahn declined to be interviewed by Nature Biotechnology, a statement issued by Pfizer says the group's findings “call into question the mechanism of action of resveratrol and other reported activators of the SIRT1 enzyme.”
Most experts, however, say it's too soon to write off Sirtris' compounds altogether, assuming they're clinically useful by mechanisms that don't involve sirtuin binding. And for its part, GSK won't concede that Sirtris' small molecules don't bind the targets. In an e-mailed statement, Ad Rawcliffe, head of GSK's WorldWide Business Development group, says, “There is nothing that has happened to date, including the publication [by Pfizer,] that suggests otherwise.”
We'll see if GSK and Sirtris have some more publications ready to silence their detractors. But what will really do that, and what we'll all have to wait for, are clinical results.
Category: Aging and Lifespan | Biological News
March 5, 2010
There's a report in Nature on the bacteria found in the human gut that's getting a lot of press today (especially for a paper about, well, bacteria in the human gut). A team at the Beijing Genomics Institute, with many collaborators, has done a large shotgun sequencing effort on gut flora and identified perhaps one thousand different species.
I can well believe it. The book I recommended the other day on bacteria field marks has something to say about that, pointing out that if you're just counting cells, the cells of our body are far outnumbered by the bacteria we're carrying with us. Of course, the bacteria have an advantage, being a thousand times smaller (or more) than our eukaryotic cells, but there's no doubt that we're never alone. In case you're wondering, the average European subject of the study probably carries between 150 and 200 different types of bacteria, so there's quite a bit of person-to-person variability. Still, a few species (mostly Bacteroides varieties) were common to all 124 patients in the study, while the poster child for gut bacteria (E. coli) is only about halfway down the list of the 75 most common organisms. We have some Archaea, too, but they're outnumbered about 100 to 1.
What's getting all the press is the idea that particular mixtures of intestinal bacteria might be contributing to obesity, cancer, Crohn's disease and other conditions. This isn't a new idea, although the new study does provide more data to shore it up (which was its whole purpose, I should add). It's very plausible, too: we already know of an association between Helicobacter and stomach cancer, and it would be surprising indeed if gut bacteria weren't involved with conditions like irritable bowel syndrome or Crohn's. This paper confirms earlier work that such patients do indeed have distinctive microbiota, although it certainly doesn't solve the cause-or-effect tangle that such results always generate.
The connection with obesity is perhaps more of a stretch. You can't argue with thermodynamics. Clearly, people are obese because they're taking in a lot more calories than they're using up, and doing that over a long period. So what do bacteria have to do with that? The only thing I can think of is perhaps setting off inappropriate food cravings. We're going to have to be careful with that cause and effect question here, too.
One problem I have with this work, though, is the attitude of the lead author on the paper, Wang Jun. In an interview with Reuters, he makes a very common mistake for an academic: assuming that drug discovery and treatment are the easy part. After all, the tough work of discovery has been done, right?
"If you just tackle these bacteria, it is easier than treating the human body itself. If you find that a certain bug is responsible for a certain disease and you kill it, then you kill the disease," Wang said
For someone who's just helped sequence a thousand of them, Wang doesn't have much respect for bacteria. But those of us who've tried to discover drugs against them know better. Where are these antibiotics that kill single species of bacteria? No such thing exists, to my knowledge. To be sure, we mostly haven't looked, since the need is for various broader-spectrum agents, but it's hard to imagine finding a compound that would kill off one Clostridium species out of a bunch. And anyway, bacteria are tough. Even killing them off wholesale in a human patient can be very difficult.
Even if we magically could do such things, there's the other problem that we have no idea of which bacterial strains we'd want to adjust up or down. The Nature paper itself is pretty good on this topic, emphasizing that we really don't know what a lot of these bacteria are doing inside us and how they fit into what is clearly a very complex and variable ecosystem. A look at the genes present in the samples shows the usual common pathways, then a list that seem to be useful for survival in the gut (adhesion proteins, specific nutrient uptake), and then a massive long tail of genes that do we know not what nor why. Not only do we not know what's happening on other planets, or at the bottom of our own oceans, we don't even know what's going on in our own large intestines. It's humbling.
Dr. Wang surely realizes this; I just wish he'd sound as if he does.
Category: Biological News | Diabetes and Obesity | Infectious Diseases
March 2, 2010
From Nature comes word of a brainlessly restrictive new law that's about to pass in Turkey. The country started out trying to get in line with EU regulations on genetically-modified crops, and ended up with a bill that forbids anyone to modify the DNA of any organism at all - well, unless you submit the proper paperwork, that is:
. . .Every individual procedure would have to be approved by an inter-ministerial committee headed by the agriculture ministry, which is allowed 90 days to consider each application with the help of experts.
The committee would be responsible for approving applications to import tonnes of GM soya beans for food — but also for every experiment involving even the use of a standard plasmid to transfer genes into cells. Work with universally used model organisms, from mice and zebrafish to fruitflies and bacteria, would be rendered impossible. Even if scientists could afford to wait three months for approval of the simplest experiment, the committee would be overwhelmed by the number of applications. One Turkish scientist who has examined the law estimates that his lab alone would need to submit 50 or so separate applications in a year.
It's no doubt coming as a surprise to them that biologists modify the DNA of bacteria and cultured mammalian cells every single day of the week. Actually, it might come as a surprise to many members of the public, too - we'll see if this becomes a widespread political issue or not. . .
Category: Biological News | Regulatory Affairs
February 18, 2010
I've been meaning to write about this paper in PNAS for a while. The authors (from Caltech and the Weizmann Institute) are calling for a more quantitative take on biological questions, and they say that modern techniques are starting to yield meaningful numbers to work with - we're getting to the point where this perspective can be useful. They've set up a web site, BioNumbers, to provide ready access to data of this sort, and it's well worth some time just for sheer curiosity's sake.
But there's more than that at work here. To pick an example from the paper, let's say that you take a single E. coli bacterium and put it into a tube of culture medium, with only glucose as a carbon source. Now, think about what happens when this cell starts to grow and divide, but think like a chemist. What's the limiting reagent here? What's the rate-limiting step? Using the estimates for the size of a bacterium, its dry mass, a standard growth rate, and so on, you can arrive at a rough figure of about two billion sugar molecules needed per cell division.
Of course, bacteria aren't made up of glucose molecules. How much of this carbon gets used up in converting it to amino acids and thence to proteins (the biggest item on the ledger by far, it turns out), to lipids, nucleic acids, and so on? What, in other words, is the energetic cost of building a bacterium? The estimate is about four billion ATPs needed. Compare that to those two billion sugar molecules, consider that you can get up to 30 ATPs per sugar under aerobic conditions, and you can see that there's a ten- to twentyfold mismatch here.
Where's all the extra energy going? The best guess is that a lot of it is used up in keeping the cell membrane going (and keeping its various concentration potentials as unbalanced as they need to be). What's interesting is that a back-of-the-envelope calculation can quickly tell you that there's likely to be some other large energy requirement out there that you may not have considered. And here's another question that follows: if the cell is growing with only glucose as a carbon source, how many glucose transporters does it need? How much of the cell membrane has to be taken up by them?
Well, at the standard generation time in such media of about forty minutes, roughly 10 to the tenth carbon atoms need to be brought in. Glucose transporters work at a top speed of about 100 molecules per second. Compare the actual surface area of the bacterial cell with the estimated size of the transporter complex. (That's about 14 square nanometers, if you're wondering, and thinking of it in those terms gives you the real flavor of this whole approach). At six carbons per glucose, then, it turns out that roughly 4% of the cell surface must be taken up by glucose transporters.
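The same calculation in Python - note that the membrane area is my own assumption (about three square microns, reasonable for a small cell), chosen to show how the rough 4% figure falls out:

```python
# Fraction of the E. coli membrane occupied by glucose transporters.
generation_s = 40 * 60             # ~40 minute generation time
carbons_needed = 1e10              # carbon atoms per division (from above)
glucose_per_s = carbons_needed / 6.0 / generation_s       # ~7e5 molecules/s

transporter_rate = 100             # molecules/s per transporter, top speed
n_transporters = glucose_per_s / transporter_rate         # ~7,000

transporter_area_nm2 = 14.0        # footprint of one transporter complex
membrane_area_nm2 = 3e6            # ~3 square microns (my assumption)

coverage = n_transporters * transporter_area_nm2 / membrane_area_nm2
print(f"transporters needed: {n_transporters:.0f}")
print(f"membrane coverage:   {coverage:.1%}")             # roughly 3-4%
```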
That's quite a bit, actually. But is it the maximum? Could a bacterium run with a 10% load, or would another rate-limiting step (at the ribosome, perhaps?) make itself felt? I have to say, I find this manner of thinking oddly refreshing. The growing popularity of synthetic biology and systems biology would seem to be a natural fit for this kind of thing.
It's all quite reminiscent of the famous 2002 paper (PDF) "Can A Biologist Fix a Radio", which called (in a deliberately provocative manner) for just such thinking. (The description of a group of post-docs figuring out how a radio works in that paper is not to be missed - it's funny and painful/embarrassing in almost equal measure). As the author puts it, responding to some objections:
One of these arguments postulates that the cell is too complex to use engineering approaches. I disagree with this argument for two reasons. First, the radio analogy suggests that an approach that is inefficient in analyzing a simple system is unlikely to be more useful if the system is more complex. Second, the complexity is a term that is inversely related to the degree of understanding. Indeed, the insides of even my simple radio would overwhelm an average biologist (this notion has been proven experimentally), but would be an open book to an engineer. The engineers seem to be undeterred by the complexity of the problems they face and solve them by systematically applying formal approaches that take advantage of the ever-expanding computer power. As a result, such complex systems as an aircraft can be designed and tested completely in silico, and computer-simulated characters in movies and video games can be made so eerily life-like. Perhaps, if the effort spent on formalizing description of biological processes would be close to that spent on designing video games, the cells would appear less complex and more accessible to therapeutic intervention.
But I'll let the PNAS authors have the last word here:
"It is fair to wonder whether this emphasis on quantification really brings anything new and compelling to the analysis of biological phenomena. We are persuaded that the answer to this question is yes and that this numerical spin on biological analysis carries with it a number of interesting consequences. First, a quantitative emphasis makes it possible to decipher the dominant forces in play in a given biological process (e.g., demand for energy or demand for carbon skeletons). Second, order of magnitude BioEstimates merged with BioNumbers help reveal limits on biological processes (minimal generation time or human-appropriated global net primary productivity) or lack thereof (available solar energy impinging on Earth versus humanity’s demands). Finally, numbers can be enlightening by sharpening the questions we ask about a given biological problem. Many biological experiments report their data in quantitative form and in some cases, as long as the models are verbal rather than quantitative, the theor y will lag behind the experiments. For example, if considering the input–output relation in a gene-regulatory net work or a signal- transduction network, it is one thing to say that the output goes up or down, it is quite another to say by how much..
Category: Biological News | Who Discovers and Why
January 27, 2010
It hit me, one day during my graduate career, that I was spending my nights, days, weekends, and holidays trying to make a natural product, while the bacterium that produced the thing in the first place was sitting around in the dirt of a Texas golf course, making the molecule at ambient temperature in water and managing to perform all its other pressing business at the same time. This put me in my place. I've respected biosynthesis ever since.
But there are some areas where we humans can still outproduce the small-and-slimies, and one of those is in organofluorine compounds. Fluorine's a wonderful element to use in medicinal chemistry, since it alters the electronic properties of your molecule without changing its shape (or adding much weight), and the C-F bond is metabolically inert. But those very properties can make fluorination a tricky business. If you can displace a leaving group with fluoride ion to get your compound, then good for you. Too often, though, those charges are the wrong way around, and electrophilic fluorination is the only solution. There are heaps of different ways to do this in the literature, which is a sign to the experienced chemist that there are no general methods to be had. (That's one of my Laws of the Lab, actually). The reagents needed for these transformations start with a few in the Easily Dealt With category, wind entertainingly through the Rather Unusual, and rapidly pile up over at the Truly Alarming end.
But at least we can get some things to work. The natural products with fluorine in them can be counted on the fingers - 4-fluorothreonine is one of the few. A fluorinase enzyme has been isolated which does the biotransformation on S-adenosyl methionine (using fluoride ion, naturally - if an enzyme is ever discovered that uses electrophilic F-plus as an intermediate, I will stand at attention and salute it). And now comes word that this has been successfully engineered into another bacterial species, and used to produce a fluorine analog of that bacterium's usual organochlorine natural product.
It isn't pretty, but it does work. One big problem is that the fluoride ion the enzyme needs is toxic to the rest of the organism, so you can't push this system too hard. But the interest in this sort of transformation is too high (and the potential stakes too lucrative) for it to stay obscure forever. Bring on the fluorinating enzymes!
Category: Biological News
January 22, 2010
I've written here before about how I used to think that I understood G-protein coupled receptors (GPCRs), but that time and experience have proven to me that I didn't know much of anything. One of the factors that's complicated that field is the realization that these receptors can interact with each other, forming dimers (or perhaps even larger assemblies) which presumably are there for some good reason, and can act differently from the classic monomeric form.
A neat paper has appeared in PNAS that puts some actual numbers on this phenomenon, and provides some great pictures as well. What you're looking at is a good ol' CHO cell, transfected with muscarinic M1 receptors. Twenty years ago (gulp) I was cranking out compounds to tickle cell membranes of this exact type, among others. The receptors are visualized by a fluorescent ligand (telenzepine), and the existence of dimers can be inferred from the "double-intensity" spots shown in the inset.
With this kind of resolution and time scale, the UK team that did this work could watch the receptors wandering over the cell surface in real time. It's a classic random walk, as far as they can tell. Watching the cohort of high-intensity spots, they can see changes as they switch to lower-intensity monomers and back again. Over a two-second period, it appeared that about 81% of the tracks were monomers, 9% were dimers, and 3% changed over during the tracking. (The remaining 7% were impossible to assign with confidence, which makes me wonder what's lurking down there).
They refined the technique by using two differently-fluorescent forms of labeled telenzepine, labeling the cells in a 50/50 ratio, and watching what happens to the red, green, (and combined yellow) spots over time. It looks as if the receptor population is a steady-state mix of monomers and dimers, exchanging on a time scale of seconds. Of course, the question comes up of how different ligands might affect this process, and you could begin to answer that with different fluorescent species. But since the technique depends on having a low-off-rate species bound to the receptor in order to see it, some of the most interesting dynamic questions will have to wait. It's still very nice to actually see these things, though; it gives a medicinal chemist something to picture. . .
Category: Biological News
This one's also from the Department of Placebo Effects - read on. An interesting paper out in Nature details a study where volunteers took small doses of testosterone or placebo, and then participated in a standard behavioral test, the "Ultimatum Game". That's the one where two people participate, with one of them given a sum of money (say, $10) that's to be divided between the two of them. The player with the money makes an offer to divide the pot, which the other player can only take or leave (no counteroffers). A number of interesting questions about altruism and competition have been examined through this game and its variants - basically, the first thing to ask is how much the "dictator" player will feel like offering at all. (If you like, here's the Freakonomics guys talking about the game, which features in a chapter of their latest, SuperFreakonomics).
What's been found in many studies is that the second players often reject offers that they feel are insultingly low, giving up a sure gain for the sake of pride and sending a message to the first player. I think of this as the "Let me tell you what you can do with your buck-fifty" option. So what does exposure to testosterone do for this behavior? As the authors of the new paper discuss, there are two (not necessarily exclusive) theories about some of the hormone's effects. Increases in aggression and competitiveness are widely thought to be one of these, but there's also a good amount of literature to suggest that status-seeking behavior is perhaps more important. If someone is going to be aggressive about the ultimatum game, they're going to make a lowball offer and damn the consequences, whereas if they're looking for status, they may well choose a course that avoids having their offer thrown back in their face.
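You can see the logic of those two strategies with a toy simulation. This is purely illustrative - the rejection thresholds below are invented, not taken from any study - but it shows why a proposer who wants to avoid rejection ends up offering close to the $4 mark:

```python
import numpy as np

# Toy ultimatum game: each responder rejects any offer below a personal
# threshold. The threshold distribution is invented, for illustration only.
rng = np.random.default_rng(2)
pot = 10.0
thresholds = rng.uniform(1.0, 4.5, 100_000)   # assumed rejection thresholds

for offer in [1.5, 2.5, 3.5, 4.0, 5.0]:
    accepted = (thresholds <= offer).mean()
    expected = accepted * (pot - offer)       # proposer's expected take
    print(f"offer ${offer:.2f}: accepted {accepted:4.0%}, "
          f"expected payoff ${expected:.2f}")
```

Under these assumed thresholds, the expected payoff peaks near a $4 offer: lowballing gets you pride-driven rejections, which is exactly the status-seeker's incentive to be generous.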
The subjects were dosed under double-blind conditions (sublingual testosterone in female subjects, four hours before the test), and the second behavior is what was observed. Update: keep in mind, women have endogenous testosterone, too. The subjects who got testosterone made more generous offers (from about $3.50 to closer to $4.00). The error bars on that measurement just miss overlapping, p = 0.031. But here's the part I found even more interesting: the subjects who believed that they got testosterone made significantly less fair/generous offers than the ones who believed that they got the placebo (p = 0.006). Because, after all, testosterone makes you all tough and nasty, as everyone knows. As the authors sum it up:
"The profound impact of testosterone on bargaining behaviour supports the view that biological factors have an important role in human social interaction. This does, of course, not mean that psychological factors are not important. In fact, our finding that subjects’ beliefs about testosterone are negatively associated with the fairness of bargaining offers points towards the importance of psychological and social factors. Whereas other animals may be predominantly under the influence of biological factors such as hormones, biology seems to exert less control over human behaviour. Our findings also teach an important methodological lesson for future studies: it is crucial to control for subjects’ beliefs because the pure substance effect may be otherwise under- or overestimated. . ."
Category: Biological News | General Scientific News | The Central Nervous System
January 21, 2010
I promise you that. Take a look at this abstract:
". . .an unappreciated physicochemical property of xenon has been that this gas also binds to the active site of a series of serine proteases. Because the active site of serine proteases is structurally conserved, we have hypothesized and investigated whether xenon may alter the catalytic efficiency of tissue-type plasminogen activator (tPA), a serine protease that is the only approved therapy for acute ischemic stroke today."
They go on to provide evidence that xenon is indeed a tPA inhibitor. And as it turns out, there's more evidence for xenon having a number of physiological effects, and enzyme inhibition has been proposed as one mechanism. Who knew?
Now, there's an SAR challenge. . .
Category: Biological News
January 18, 2010
Anyone looking over large data sets from human studies needs to be constantly on guard. Sinkholes are everywhere, many of them looking (at first glance) like perfectly solid ground on which to build some conclusions. This, to be honest, is one of the real problems with full release of clinical trial data sets: if you're not really up on your statistics, you can convince yourself of some pretty strange stuff.
Even people who are supposed to know what they're doing can bungle things. For instance, you may well have noticed a lot of papers coming out in the last few years correlating neuroimaging studies (such as fMRI) with human behaviors and personality traits. Neuroimaging is a wonderfully wide-open, complex, and important field, and I don't blame people for a minute for pushing it as far as it can go. But just how far is that?
A recent paper (PDF) suggests that the conclusions have run well ahead of the numbers. Recent papers have been reporting impressive correlations between the activation of particular brain regions and associated behaviors and traits. But when you look at the reproducibility of the behavioral measurements themselves, the correlation is 0.8 at best. And the reproducibility of the blood-oxygen fMRI measurements is about 0.7. The highest possible correlation you could expect from those two is the square root of their product, or 0.74. Problem is. . .a number of papers, including ones that get the big press, show correlations much higher than that. Which is impossible.
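That ceiling is just the classical attenuation formula from measurement theory - an observed correlation can't exceed the geometric mean of the two reliabilities. The arithmetic:

```python
from math import sqrt

# Upper bound on an observable correlation, given the test-retest
# reliabilities of the two measures (classical attenuation formula).
reliability_behavior = 0.8   # behavioral measures, at best
reliability_fmri = 0.7       # blood-oxygen fMRI measurements
r_max = sqrt(reliability_behavior * reliability_fmri)
print(f"maximum expected correlation: {r_max:.3f}")   # ~0.748, i.e. the 0.74
```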
The Neurocritic blog has more details on this. What seems to have happened is that many researchers found signals in their patients that correlated with the behavior that they were studying, and then used that very same set of data to compute the correlations they reported. I find, by watching people go by in the street, that I can pick out a set of people who wear bright red jackets and have ugly haircuts. Herding them together and rating them on the redness of their attire and the heinousness of their hair, I find a notably strong correlation! Clearly, there is an underlying fashion deficiency that leads to both behaviors. Or people had their hair in their eyes when they bought their clothes. Further studies are indicated.
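That selection error is easy to reproduce. In the sketch below (my own illustration, not anyone's actual analysis), the "brain data" is pure noise, yet picking the best-correlated voxels and then scoring them against the same data yields a headline-worthy correlation:

```python
import numpy as np

# Circular-analysis demo: select voxels by their correlation with behavior,
# then "validate" on the same data. The data contain no real signal at all.
rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 10_000
behavior = rng.standard_normal(n_subjects)
voxels = rng.standard_normal((n_subjects, n_voxels))      # pure noise

# Correlate every voxel with behavior and keep the top 1%...
r = np.array([np.corrcoef(voxels[:, v], behavior)[0, 1]
              for v in range(n_voxels)])
signal = voxels[:, r > np.quantile(r, 0.99)].mean(axis=1)

# ...then report the correlation of that average with the same behavior.
print(np.corrcoef(signal, behavior)[0, 1])   # typically ~0.8, from noise
```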
No, you can't do it like that. A selection error of that sort could let you relate anything to anything. The authors of the paper (Edward Vul and Nancy Kanwisher of MIT) have done the field a great favor by pointing this out. You can read how the field is taking the advice here.
Category: Biological News | Clinical Trials | The Central Nervous System
January 11, 2010
I do enjoy some good chemical biology, and the latest Cell has another good example from the Cravatt group at Scripps (working with a team at Brigham and Women's Hospital over here on this coast). What they've done is profile various types of tumor cells using an activity-based probe to search for changes in serine hydrolase enzymes. Those are a large and diverse class (with quite a few known drug targets in them already), and there had already been reports that activity in this area was altered as cancer cell lines became more aggressive.
What they tracked down was an enzyme called MAGL (monoacylglyceride lipase). That's an interesting finding. Cancer cells have long been known to have different ideas about lipid handling, and several enzymes in that metabolic area have been proposed over the years as drug targets. (The first one I can think of is fatty acid synthase (FAS), whose elevated presence has been correlated with poor outcome in several tumor types). In general, aggressive tumor cells seem to run with higher levels of free fatty acids, for reasons that aren't quite clear. Some of the downstream products are signaling molecules, and some of these lipids may just be needed for elevated levels of cell membrane synthesis.
But it looks from this paper as if MAGL could be the real lipid-handling target that oncology people have been looking for. The teams inhibited the enzyme with a known small molecule (well, relatively small), and also via RNA knockdown, and in both cases they were able to disrupt growth of tumor cell lines. The fiercer the cells, the more they were affected, which tracked with the MAGL activity they had initially. On the other hand, inducing higher expression of MAGL in relatively tame tumor cells turned them aggressive and hardy. They have a number of lines of evidence in this paper, and they all point the same way.
One of those might be important for other reasons. The teams took the cell lines with impaired MAGL activity, and wondered if this could be rescued by providing them with the expected products that the enzyme would deliver. Stearic and palmitic acid are two of the fatty acids whose levels seem to be heavily regulated by MAGL, and sure enough, providing the MAGL-deficient cells with these restored their growth and mobility. As the paper points out specifically, this could have implications for a relationship between obesity and tumorigenesis. (I'd add a recommendation to look with suspicion at other conditions that lead to higher-than-usual levels of circulating free fatty acids, such as type II diabetes, or even fasting).
It may be that I particularly enjoyed this paper because I have a lipase-inhibiting past. As anyone who's run my name through SciFinder or Google Scholar has noticed, I helped lead a team some years ago that developed a series of inhibitors for hormone-sensitive lipase, a potential diabetes target. We were scuppered, though, by the fact that this enzyme does (at least) two different things in two totally different kinds of tissue. Out in fat and muscle, it helps hydrolyze glycerides (in fact, it's right in the same metabolic line as MAGL), and that's the activity we were targeting. But in steroidogenic tissues, it's known as neutral cholesteryl ester hydrolase, and it breaks those down to provide cholesterol for steroid biosynthesis. Unfortunately, when you inhibit HSL, you also do nasty things to the adrenals and a few other tissues. There's no market for a drug that gives you Addison's disease, I can tell you.
So I wondered when I saw this paper if MAGL has a dual life as well. If I'd ever worked in analgesia or cannabinoid receptor pharmacology, though, I'd have already known the answer. MAGL also regulates the levels of several compounds that signal through the endocannabinoid pathway, and has been looked at as a target in those areas. None of this seems to have an effect on the oncology side of things, though - this latest paper also looked at CB receptor effects on their cell lines that were deficient in MAGL, and found no connection there.
So, what we have from this paper is a very interesting cancer target (whose crystal structure was recently reported, to boot), a new appreciation of lipid handling in tumors, and a possible rationale for the connections seen between lipid levels and cancer in general. Not bad!
Special bonus: thanks to Cell's video abstracts, you can hear Ben Cravatt and his co-worker Dan Nomura explain their paper on YouTube. The journal has recently enhanced the way their papers are presented online, actually, and I plan to do a whole separate blog entry on that (and on video abstracts and the like).
Category: Biological News | Cancer | Diabetes and Obesity
January 7, 2010
Last fall it was reported that a large proportion of patients suffering from chronic fatigue syndrome also showed positive for a little-understood retrovirus (XMRV). This created a lot of understandable excitement for sufferers of a condition that (although often ill-defined) seems to have some puzzling biology buried in it somewhere.
Well, let the fighting begin: a new paper in PLoS One has challenged this correlation. Groups from Imperial College and King's College have failed to detect any XMRV in a similar patient population:
. . .Unlike the study of Lombardi et al., we have failed to detect XMRV or closely related MRV proviral DNA sequences in any sample from CFS cases. . .Based on our molecular data, we do not share the conviction that XMRV may be a contributory factor in the pathogenesis of CFS, at least in the U.K.
Interestingly, XMRV has also been reported in tissue from prostate cancer patients, but recent studies in Germany and Ireland failed to replicate these results. Could we be looking at a geographic coincidence, a retroviral infection that's found in North America but not in Europe, and one whose connection with these diseases is either complex or nonexistent?
Note: as per a comment on this post, the Whittemore Peterson Institute is firing back, claiming that their original work is valid and that the London study has many significant differences. PDF of their release here.
Category: Biological News | Cancer | Infectious Diseases
Now here's a strange tale, courtesy of Science magazine, about some retracted work from Peter Schultz's group at Scripps. Two papers from 2004 detailed how to incorporate glycosylated amino acids (glucosamine-serine and galactosamine-threonine) directly into proteins. These featured a lot of work from postdoc Zhiwen Zhang (who later was hired by the University of Texas for a faculty position).
But another postdoc, Eric Tippmann, was later having trouble reproducing the work, and in 2006 he made his case for why he thought it was incorrect. Following that:
Schultz says the concerns raised were serious enough that he asked a group of lab members to try to replicate the work in Zhang's Science paper in addition to several other important discoveries Zhang had made. That task, however, was complicated by the fact that Zhang's lab notebooks, describing his experiments in detail, were missing. Schultz says that in the early fall of 2006, the notebooks were in Schultz's office. But at some point after that they were taken without his knowledge and have never resurfaced.
After considerable effort, Schultz says his students were able to replicate most of the work. The biggest exception was the work that served as the basis for the 2004 Science and JACS papers. "It was clear the glycosylated amino acid work could not be reproduced as reported. So we tried to figure out what was going on," Schultz says.
So far, so not-so-good. But here's where things get odd. Around this time (early 2007), Zhang started to get e-mails at Texas saying that unless he sent $4000 to an address in San Diego, the writer would expose his "fraud" and cause him to get fired. The messages were signed "Michael Pemulis" - Science doesn't pick up on that pen name, but fans of the late David Foster Wallace will recognize the name of the vengeful practical joker from Infinite Jest.
That brings up another point: the e-mails quoted in the Science article are in somewhat broken English: "you lose job. ... Texas will fire you before you tenure. . ." and that sort of thing. But my belief is that no one who drops the second person possessive while writing would make it far enough into Infinite Jest to meet Michael Pemulis and use him as an appropriate alias for an extortion plot.
At any rate, after the San Diego police got involved, they told Zhang that they had a suspect, but Zhang decided not to press charges. That fall, though, "Pemulis" dropped the bomb, with a hostile anonymous letter to everyone involved - officials at Scripps and UT-Austin, the editors at Science, etc. In 2009, Zhang was denied tenure. Eric Tippmann (now at Cardiff) has published a paper in JBC detailing the problems with the original work. (He denies having anything to do with the missing lab notebooks or the threats made to Zhang). And everyone involved is still wondering just what is going on. . .
I certainly have no idea. But I can say this: although I've spent a lot more time in industry than in academia, a disproportionate number of the people I've worked with over the years that I consider to have had serious mental problems are from my academic years. Whoever "Pemulis" is, I'd put him or her into that category. Grad students and post-docs are under a lot of pressure, and some of them are at a point in their lives when their internal problems are starting to seriously affect them.
Category: Biological News | The Dark Side | The Scientific Literature
January 6, 2010
Xconomy has a piece on biotechnologies that look to be headed for obsolescence. I think the list is mostly correct - it includes the raw proteomic approach to understanding disease states and a lot of the biomarker work being done currently. I won't spoil the rest of the list; take a look and see what you think. Note: RNA interference is not on it, in case you're wondering. Nor are stem cells.
Category: Biological News
January 5, 2010
I missed this paper when it came out back in October: "Reactome Array: Forging a Link Between Metabolome and Genome". I'd like to imagine that it was the ome-heavy title itself that drove me away, but I have to admit that I would have looked it over had I noticed it.
And I probably should have, because the paper has been under steady fire since it came out. It describes a method to metabolically profile a variety of cells through the use of a novel nanoparticle assay. The authors claim to have immobilized 1675 different biomolecules (representing common metabolites and intermediates) in such a way that enzymes recognizing any of them will set off a fluorescent dye signal. It's an ingenious and tricky method - in fact, so tricky that doubts set in quickly about the feasibility of doing it on 1675 widely varying molecular species.
And the chemistry shown in the paper's main scheme looks wonky, too, which is what I wish I'd noticed. Take a look - does it make sense to describe a positively charged nitrogen as a "weakly amine region", whatever that is? Have you ever seen a quaternary aminal quite like that one before? Does that cleavage look as if it would work? What happens to the indane component, anyway? Says the Science magazine blog:
In private chats and online postings, chemists began expressing skepticism about the reactome array as soon as the article describing it was published, noting several significant errors in the initial figure depicting its creation. Some also questioned how a relatively unknown group could have synthesized so many complex compounds. The dismay grew when supplementary online material providing further information on the synthesized compounds wasn’t available as soon as promised. “We failed to put it in on time. The data is quite voluminous,” says co-corresponding author Peter Golyshin of Bangor University in Wales, a microbiologist whose team provided bacterial samples analyzed by Ferrer’s lab.
Science is also coming under fire. “It was stunning no reviewer caught [the errors],” says Kiessling. Ferrer says the paper’s peer reviewers did not raise major questions about the chemical synthesis methods described; the journal’s executive editor, Monica Bradford, acknowledged that none of the paper’s primary reviewers was a synthetic organic chemist. “We do not have evidence of fraud or fabrication. We do have concerns about the inconsistencies and have asked the authors' institutions to try to sort all of this out by examining the original data and lab notes,” she says.
The magazine published an "expression of concern" before the Christmas break, saying that in response to questions the authors had provided synthetic details that "differ substantially" from the ones in the original manuscript. An investigation is underway, and I'll be very interested to see what comes of it.
Category: Analytical Chemistry | Biological News | Drug Assays | The Scientific Literature
December 9, 2009
Back in September, talking about the insides of cells, I said:
"There's not a lot of bulk water sloshing around in there. It's all stuck to and sliding around with enzymes, structural proteins, carbohydrates, and the like. . ."
But is that right? I was reading this new paper in JACS, where a group at UNC is looking at the NMR of fluorine-labeled proteins inside E. coli bacteria. (It's pretty interesting, not least because they found that they can't reproduce some earlier work in the field, for reasons that seem to have them throwing their hands up in the air). But one reference caught my eye - this paper from PNAS last year, from researchers in Sweden.
That wasn't one that I'd read when it came out - the title may have caught my eye, but the text rapidly gets too physics-laden for me to follow very well. The UNC folks appear to have waded through it, though, and picked up some key insights which otherwise I'd have missed. The PNAS paper is a painstaking NMR analysis of the states of water molecules inside bacterial cells. They looked at both good ol' E. coli and at an extreme halophile species, figuring that that one might handle its water differently.
But in both cases, they found that about 85% of the water molecules had rotational states similar to bulk water. That surprises me (as you'd figure, given the views I expressed above). I guess my question is "how similar?", but the answer seems to be "as similar as we can detect, and that's pretty good". It looks like all the water molecules past the first layer on the proteins are more or less indistinguishable from plain water by their method. (No difference between the two types of bacteria, by the way). And given that the concentration of proteins, carbohydrates, salts, etc. inside a cell is rather different than bulk water, I have to say I'm at a loss. I wonder how different the rotational states of water are (as measured by NMR relaxation times) for samples that are, say, 1M in sodium chloride, guanidine, or phosphate?
The other thing that struck me was the Swedish group's estimate of protein dynamics. They found that roughly half of the proteins in these cells were rotationally immobile, presumably bound up in membranes or in multi-protein assemblies. It's been clear for a long time that there has to be a lot of structural order in the way proteins are arranged inside a living cell, but things might be even more ordered than I'd been picturing. At any rate, I may have to adjust my thinking about what those environments look like. . .
Category: Analytical Chemistry | Biological News
November 5, 2009
Resveratrol's a mighty interesting compound. It seems to extend lifespan in yeast and various lower organisms, and has a wide range of effects in mice. Famously, GlaxoSmithKline has expensively bought out Sirtris, a company whose entire research program started with resveratrol and similar compounds that modulate the SIRT1 pathway.
But does it really do that? The picture just got even more complicated. A group at Amgen has published a paper saying that when you look closely, resveratrol doesn't directly affect SIRT1 at all. Interestingly, this conclusion has been reached before (by a group at the University of Washington), and both teams conclude that the problem is the fluorescent peptide substrate commonly used in sirtuin assays. With the fluorescent group attached, everything looks fine - but when you go to the extra trouble of reading things out without the fluorescent tag, you find that resveratrol doesn't seem to make SIRT1 do anything to what are supposed to be its natural substrates.
"The claim of resvertraol being a SIRT1 activator is likely to be an experimental artifact of the SIRT1 assay that employs the Fluor de Lys-SIRT1 peptide as a substrate. However, the beneficial metabolic effects of resveratrol have been clearly demonstrated in diabetic animal models. Our data do not support the notion that these metabolic effects are mediated by direct SIRT1 activation. Rather, they could be mediated by other mechanisms. . ."
They suggest activation of AMPK (an important regulatory kinase that's tied in with SIRT1) as one such mechanism, but admit that they have no idea how resveratrol might activate it. Does that process still require SIRT1 at all? Who knows? One thing I think I do know is that this has something to do with this Amgen paper from 2008 on new high-throughput assays for sirtuin enzymes.
One wonders what assay formats Sirtris has been using to evaluate their new compounds, and one also wonders what they make of all this now at GSK. Does one not? We can be sure, though, that there are plenty of important things that we don't know yet about sirtuins and the compounds that affect them. It's going to be quite a ride as we find them out, too.
Category: Aging and Lifespan | Biological News | Drug Assays
October 28, 2009
Now here's a completely weird idea: a group in Korea has encapsulated individual living yeast cells in silica. They start out by coating the cells with some charged polymers that are known to serve as a good substrate for silication, and then expose the yeast to silicic acid solution. They end up with hard-shell yeast, halfway to being a bizarre sort of diatom.
The encapsulated cells behave rather differently, as no doubt would we all under such conditions. After thirty days in the cold with no nutrients, the silica-coated yeast is at least three times more viable than wild-type cells (as determined by fluorescent staining). On the other hand, when exposed to a warm nutrient broth, the silica-coated yeast does not divide, as opposed to wild-type yeast, which of course takes off like a rocket under such conditions. They're still alive, but just sitting around - which makes you wonder what signals, exactly, are interrupting mitosis.
The authors tried the same trick on E. coli bacteria, but found that the initial polymer coating step killed them off. That's disappointing, but not surprising, given that disruption of the bacterial membrane with charged species is the mode of action of several broad-spectrum antibiotics.
"Hmmm. . .so what?" might be one reaction to this work. But stop and think about it for a minute. This provides a new means to an biological/inorganic interface, a way to stich cell biology and chemical nanotechnology together. If you can layer yeast cells with silica and they survive (and are, in fact, fairly robust), you can imagine gaining more control over the process and extending it to other substances. A layer that could at least partially conduct electricity would be very interesting, as would layers with various-sized pores built into them. The surfaces could be further functionalized with all sorts of other molecules as well for more elaborate experiments. No, this could keep a lot of people busy for a long time, and I suspect it will.
Category: Biological News
October 16, 2009
There have been several reports over the years of people engineering receptor proteins to make them do defined tasks. They've generally been using the bacterial periplasmic binding proteins (PBPs) as a starting point, attaching some sort of fluorescent group onto one end, so that when a desired ligand binds, the protein folds in on itself in a way that sets off fluorescence resonance energy transfer (FRET). That's a commonly used technique to see if two proteins are in close proximity to each other; it's robust enough to be used in many high-throughput screening assays.
So the readout isn't the problem. But something else certainly is. In a new PNAS paper, a group at the Max Planck Institute in Tübingen has gone back and taken a look at these receptors, which are reported to bind a number of interesting ligands such as serotonin, lactate, and even TNT and a model for nerve gas agents. You can see the forensic applications for those latter two if the technique worked well, and the press releases were rather breathless, as they tend to be. Not only did the original workers claim a very interesting sensor system, they also went out of their way to emphasize that they arrived at these results computationally:
Computational design offers enormous generality for engineering protein structure and function. Here we present a structure-based computational method that can drastically redesign protein ligand-binding specificities. This method was used to construct soluble receptors that bind trinitrotoluene, l-lactate or serotonin with high selectivity and affinity. These engineered receptors can function as biosensors for their new ligands; we also incorporated them into synthetic bacterial signal transduction pathways, regulating gene expression in response to extracellular trinitrotoluene or l-lactate. The use of various ligands and proteins shows that a high degree of control over biomolecular recognition has been established computationally.
The Max Planck group would like to disagree with that. Their PNAS paper is entitled "Computational Design of Ligand Binding is Not a Solved Problem". They were able to get crystals of the serotonin-binding protein, but could not get any X-ray structures that showed any serotonin binding in the putative ligand pocket. They then turned to a well-known suite of techniques to characterize ligand binding. One of these is thermal stability: when a protein is binding a high-affinity ligand, it tends to show a higher melting point, since its structure is often more settled-down than the open form. None of the reported receptors showed any such behavior, and all of them were substantially less thermally stable than the wild-type proteins. Strike one.
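For the record, here's what that kind of thermal-shift readout looks like in practice - a minimal sketch with synthetic data (the curve shape, Tm values, and noise level are all invented; this shows the general technique, not the Max Planck group's actual analysis). A genuine binder typically shifts the melting midpoint upward by a few degrees:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a Boltzmann sigmoid to a protein melt curve to extract Tm.
# All data here are synthetic and purely illustrative.
def boltzmann(T, bottom, top, Tm, slope):
    return bottom + (top - bottom) / (1.0 + np.exp((Tm - T) / slope))

T = np.linspace(30, 80, 51)                   # temperature ramp, deg C
rng = np.random.default_rng(1)
apo = boltzmann(T, 0.1, 1.0, 52.0, 1.5) + rng.normal(0, 0.02, T.size)
holo = boltzmann(T, 0.1, 1.0, 57.0, 1.5) + rng.normal(0, 0.02, T.size)

for name, y in [("apo protein", apo), ("plus ligand", holo)]:
    popt, _ = curve_fit(boltzmann, T, y, p0=[0.0, 1.0, 55.0, 2.0])
    print(f"{name}: Tm = {popt[2]:.1f} C")    # a real binder shifts Tm up
```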
They then tried ITC, a calorimetry measurement to look for heat of binding. A favorable binding event releases heat - it's a lower-energy state - but none of the engineered receptors showed any changes at all when their supposed ligands were introduced. Strike two. And finally, they turned to NMR experiments, which are widely used to determine protein structure and characterize binding of small molecules. Wild-type proteins of this sort showed exactly what they should have: big conformational changes when their ligands were present. But the engineered proteins showed almost no changes at all. Strike three, and as far as I'm concerned, these pieces of evidence absolutely close the case. These so-called receptors aren't binding anything.
So why do they show FRET signals? The authors suggest that this is some sort of artifact, not related to real receptor binding, and they note dryly that "Our analysis shows the importance of experimental and structural validation to improve computational design methodologies".
I should also note a very interesting sidelight: the same original research group also published a paper in Science on turning these computationally engineered PBPs into a functional enzyme. Unfortunately, this was retracted last year, when it turned out that the work could not be reproduced. Some wild-type enzyme was still present as an impurity, and when the engineered protein was rigorously purified, the activity went away. (Update: more on this retraction here, and there is indeed more to it). It appears that some other results from this work may be going away now, too. . .
Category: Biological News
October 7, 2009
This was another Biology-for-Chemistry year for the Nobel Committee. Venkatraman Ramakrishnan (Cambridge), Thomas Steitz (Yale) and Ada Yonath (Weizmann Inst.) have won for X-ray crystallographic studies of the ribosome.
Ribosomes are indeed significant, to put it lightly. For those outside the field, these are the complex machines that ratchet along a strand of messenger RNA, reading off its three-letter codons, matching these with the appropriate transfer RNA that's bringing in an amino acid, then attaching that amino acid to the growing protein chain that emerges from the other side. This is where the cell biology rubber hits the road, where the process moves from nucleic acids (DNA going to RNA) and into the world of proteins, the fundamental working units of a day-to-day living cell.
The ribosome has a lot of work to do, and it does it spectacularly quickly and well. It's been obvious for decades that there was a lot of finely balanced stuff going on there. Some of the three-letter codons (and some of the tRNAs) look very much like some of the others, so the accuracy of the whole process is very impressive. If more proof were needed, it turned out that several antibiotics worked by disrupting the process in bacteria, which showed that a relatively small molecule could throw a wrench into this much larger machinery.
Ribosomes are made out of smaller subunits. A huge amount of work in the earlier days of molecular biology showed that the smaller subunit (known as 30S for how it spun down in a centrifuge tube) seemed to be involved in reading the mRNA, and the larger subunit (50S) was where the protein synthesis was taking place. Most of this work was done on bacterial ribosomes, which are relatively easy to get ahold of. They work in the same fashion as those in higher organisms, but have enough key differences to make them of interest by themselves (see below).
During the 1980s and early 1990s, Yonath and her collaborators turned out the first X-ray structures of any of the ribosomal subunits. Fuzzy and primitive by today's standards, those first data sets got better year by year, thanks in part to techniques that her group worked out first. (The use of CCD detectors for X-ray crystallography, a technology that was behind part of Tuesday's Nobel in Physics, was another big help, as was the development of much brighter and more focused X-ray sources). Later in the 1990s, Steitz and Ramakrishnan both led teams that produced much higher-resolution structures of various ribosomal subunits, and solved what's known as the "phase problem" for these. That's a key to really reconstructing the structure of a complex molecule from X-ray data, and it is very much nontrivial as you start heading into territory like this. (If you want more on the phase problem, here's a thorough and comprehensive teaching site on X-ray crystallography from Cambridge itself).
By the early 2000s, all three groups were turning out ever-sharper X-ray structures of different ribosomal subunits from various organisms. The illustration above, courtesy of the Nobel folks, shows the 50S subunit at 9-angstrom (1998), 5-angstrom (1999) and 2.4-angstrom (2000) resolution, and shows you how quickly this field was advancing. Ramakrishnan's group teased out many of the fine details of codon recognition, and showed how some antibiotics known to cause the ribosome to start bungling the process were able to work. It turned out that the opening and closing behavior of the 30S piece was a key for this whole process, with error-inducing antibiotics causing it to go out of synch. And here's a place where the differences between bacterial ribosomes and eukaryotic ones really show up. The same antibiotics can't quite bind to mammalian ribosomes, fortunately. Having the protein synthesis machinery jerkily crank out garbled products is just what you'd wish for the bacteria that are infecting you, but isn't something that you'd want happening in your own cells.
At the same time, Steitz's group was turning out better and better structures of the 50S subunit, and helping to explain how it worked. One surprise was that there was a highly ordered set of water molecules and hydrogen bonds involved - in fact, protein synthesis seems to be driven (energetically) almost entirely by changes in entropy, rather than enthalpy. Both his group and Ramakrishnan's have been actively turning out structures of the ribosome subunits in complex with various proteins that are known to be key parts of the process, and those mechanisms of action are still being unraveled as we speak.
The Nobel citation makes reference to the implications of all this for drug design. I'm of two minds on that. It's certainly true that many important antibiotics work at the ribosomal level, and understanding how they do that has been a major advance. But we're not quite to the point where we can design new drugs to slide right in there and do what we want. I personally don't think we're really at that stage with most drug targets of any type, and trying to do it against structures with a lot of nucleic acid character is particularly hard. The computational methods for those are at an earlier stage than the ones we have for proteins.
One other note: every time a Nobel is awarded, the thoughts go to the people who worked in the same area, but missed out on the citation. The three-recipients-max stipulation makes this a perpetual problem. This is outside my area of specialization, but if I had to list some people that just missed out here, I'd have to cite Harry Noller of UC-Santa Cruz and Marina Rodnina of Göttingen. Update: add Peter Moore of Yale as well. All of them work in this exact same area, and have made many real contributions to it - and I'm sure that there are others who could go on this list as well.
One last note: five Chemistry awards out of the last seven, by my count, have gone to fundamental discoveries in cell or protein biology. That's probably a reasonable reflection of the real world, but it does rather cut down on the number of chemists who can expect to have their accomplishments recognized. The arguing about this issue is not expected to cease any time soon.
Category: Analytical Chemistry | Biological News | Current Events | Infectious Diseases
October 5, 2009
As many had expected, a Nobel Prize has been awarded to Elizabeth Blackburn (of UCSF), Carol Greider (of Johns Hopkins), and Jack Szostak (of Harvard Medical School/Howard Hughes Inst.) for their work on telomerase. Blackburn had been studying telomeres since her postdoc days in the late 1970s, and she and Szostak worked together in the field in the early 1980s, collaborating from two different angles. Greider (then a graduate student in Blackburn's lab) discovered the telomerase enzyme in 1984. She's continued to work in the area, as well she might, since it's been an extremely interesting and important one.
Telomeres, as many readers will know, are repeating DNA stretches found on the end of chromosomes. It was realized in the 1970s that something of this kind needed to be there, since otherwise replication of the chromosomes would inevitably clip off a bit from the end each time (the enzymes involved can't go all the way to the ends of the strands). Telomeres are the disposable buffer regions, which distinguish the natural end of a chromosome from a plain double-stranded DNA break.
What became apparent, though, was that the telomerase complex often didn't quite compensate for telomere shortening. This provides a mechanism for limiting the number of cell divisions - when the telomeres get below a certain length, further replication is shut down. Telomerase activity is higher in stem cells and a few other specialized lines. This means that the whole area must be a key part of both cellular aging and the biology of cancer. In a later post, I'll talk about telomerase as a drug target, a tricky endeavour that straddles both of those topics.
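That counting mechanism is simple enough to sketch as a toy model. The numbers below (starting length, loss per division, shutdown threshold) are illustrative round figures of my own, not from the prize-winning work:

```python
# Toy end-replication model: each division clips a bit off the telomere,
# and below a threshold the line stops dividing. Numbers are illustrative.
telomere_bp = 10_000       # starting telomere length (assumed)
loss_per_division = 100    # base pairs lost per division (assumed)
limit_bp = 4_000           # length below which replication shuts down

divisions = 0
while telomere_bp > limit_bp:
    telomere_bp -= loss_per_division
    divisions += 1

print(f"divisions before shutdown: {divisions}")   # 60, Hayflick-limit-ish
```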
It's no wonder that this work has attracted the amount of attention it has, and it's no wonder either that it's the subject of a well deserved Nobel. Congratulations to the recipients!
Category: Aging and Lifespan | Biological News | Cancer | Current Events
September 11, 2009
Readers may remember a study from earlier this year that suggested that taking antioxidants canceled out some of the benefits of exercise. It seems that the reactive oxygen species themselves, which everyone's been assuming have to be fought, are actually being used to signal the body's metabolic changes.
Now there's another disturbing paper on a possible unintended effect of antioxidant therapy. Joan Brugge and her group at Harvard published last month on what happens to cells when they're detached from their normal environment. What's supposed to happen, everyone thought, is apoptosis, programmed cell death. Apoptosis, in fact, is supposed to be triggered most of the time when a cell detects that something has gone seriously wrong with its normal processes, and being detached from its normal signaling environment (and its normal blood supply) definitely qualifies. But cancer cells manage to dodge that difficulty, and since it's known that they also get around other apoptosis signals, it made sense that this was happening here, too.
But there have been some recent reports that cast doubt on apoptosis being the only route for detached cell death. This latest study confirms that, but goes on to a surprise. When this team blocked apoptotic processes, detached cells died anyway. A closer look suggested that the reason was, basically, starvation. The cells were deprived of nutrients after being dislocated, ran out of glucose, and that was that. This process could be stopped, though, if a known oncogene involved in glucose uptake (ERBB2) was activated, which suggests that one way cancer cells survive their travels is by keeping their fuel supply going.
So far, so good - this all fits in well with what we already know about tumor cells. But this study found that there was another way to keep detached cells from dying: give them antioxidants. (They used either N-acetylcysteine or a water-soluble Vitamin E derivative). It appears that oxidative stress is one thing that's helping to kill off wandering cells. On top of this effect, reactive oxygen species also seem to be inhibiting another possible energy source, fatty acid oxidation. Take away the reactive oxygen species, and the cells are suddenly under less pressure and have access to a new food source. (Here's a commentary in Nature that goes over all this in more detail, and here's one from The Scientist).
They went on to use some good fluorescence microscopy techniques to show that these differences in reactive oxygen species are found in tumor cell cultures. There are notable metabolic differences between the outer cells of a cultured tumor growth and its inner cells (the ones that can't get so much glucose), but that difference can be smoothed out by. . .antioxidants. The normal process is for the central cells in such growths to eventually die off (luminal clearance), but antioxidant treatment kept this from happening. Even more alarmingly, they showed that tumor cells expressing various oncogenes colonized an in vitro cell growth matrix much more effectively in the presence of antioxidants as well.
This looks like a very strong paper to me; there's a lot of work in it and a lot of information. Taken together, these results suggest a number of immediate questions. Is there something that shuts down normal glucose uptake when a cell is detached, and is this another general cell-suicide mechanism? How exactly does oxidative stress keep these cells from using their fatty acid oxidation pathway? (And how does that relate to normally positioned cells, in which fatty acid oxidation is actually supposed to kick in when glucose supplies go down?)
The biggest questions, though, are the most immediate: first, does it make any sense at all to give antioxidants to cancer patients? Right now, I'd very much have to wonder. And second, could taking antioxidants actually have a long-term cancer-promoting effect under normal conditions? I'd very much like to know that one, and so would a lot of other people.
After this and that exercise study, I'm honestly starting to think that oxidative stress has been getting an undeserved bad press over the years. Have we had things totally turned around?
Category: Biological News | Cancer
September 8, 2009
Imagine a drug molecule, and imagine it's a really good one. That is, it's made it out of the gut just fine, out into the bloodstream, and it's even slipped in through the membrane of the targeted cells. Now what?
Well, "cells are gels", as Arthur Kornberg used to say, and he was right. There's not a lot of bulk water sloshing around in there. It's all stuck to and sliding around with enzymes, structural proteins, carbohydrates, and the like, and that's what any drug molecule has to be able to do as well. And there's no particular reason for most of them to go anywhere particular inside the cell, once they're inside. They just diffuse around until they hit their targets, to which they stick (which is something they'd better do).
What if things didn't work this way? What if you could micro-inject your drug right into a particular cell compartment, or have it target a particular cell structure, instead of having to mist it all over the place? We now have a good answer to that question, but how much good it's going to do us drug discoverers is another thing entirely.
I'm referring to this paper from JACS, from a group at the University of Tokyo. They're targeting the important signaling enzyme PI3K. That's downstream of a lot of things, and in this case they used the PDGFR receptor in the cells, and a phosphorylated peptide that's a known ligand. To make the peptide go where they wanted, though, they further engineered both the ligand and the cells. The cells got modified by expression of dihydrofolate reductase (DHFR) in their plasma membranes, and the peptide ligand was conjugated to trimethoprim (TMP). TMP has a very strong association with DHFR, so this system was being used as an artificial targeting method. (It's as if the cell had been built up with hook-bearing Velcro on the inside of its plasma membrane, and the PI3K ligand was attached to a strip of the fuzzy side). Then, to see what was going on, they attached a fluorescent label to the peptide as well.
Of course, this ligand-TMP-fluorescent fusion beast wasn't the best candidate for getting into a cell on its own, so the team microinjected it. And the results were dramatic. Normally, stimulating the PDGFR receptor in these cells led to downstream signaling in less than one minute. In cells that didn't have the DHFR engineered into their membranes, the fluorescent ligand could be seen diffusing through the whole cytosol, and giving a very weak PDGFR response. But in the cells with the targeting system built in, the ligand immediately seemed to stick to the inside of the plasma membrane, as planned, and a very robust, quick response was seen.
The paper details a number of control experiments that I'm not going into here, and I invite the curious to read the whole thing. I'm convinced, though, that the authors are seeing what they hoped to see. In other words, ligands which aren't worth much when they have to diffuse around on their own can be real tigers when they're dragged directly to their site of action. It makes sense that this would be true, but it's nice to see it demonstrated for real. I'll quote the last paragraph of the paper, though, because that's where I have some misgivings:
In summary, we have demonstrated that it is feasible to rapidly and efficiently activate an endogenous signaling pathway by placing a synthetic ligand at a specific location within a cell. The strategy should be applicable to other endogenous proteins and pathways through the choice of appropriate ligand molecules. More significantly, this proof-of-principle study highlights the importance of controlling the subcellular locales of molecules in the design of new synthetic modulators of intracellular biological events. There might be a number of compounds (not only activators but also inhibitors) that have been dismissed but may acquire potent biological activities when they are endowed with subcellular-targeting functions. Our next challenge is to develop cell-permeable carriers capable of delivering cargo ligands to specifically defined regions or organelles inside cells.
Where they lost me was in pointing out how important this is in designing new compounds. The problem is, these are very artificial, highly engineered cells. Everything's been set up to make them do just what you want them to do. If you don't cause them to express boatloads of DHFR in their membrane, nothing works. So what lessons does this have for a drug discovery guy like me? I'm not targeting cells that have been striped with convenient Velcro patches.
And even if I find something endogenous that I can use, I can't make molecules that have to be delivered through the cell membrane by microinjection. You can see from the last sentence, though, that the authors realize that part as well. But that "next challenge" they speak of is more than enough to keep them occupied for the rest of their working lives. These kinds of experiments are important - they teach us a lot about cell biology, and there's sure a lot more of that to be learned. But the cells won't give up their secrets without a fight.
Category: Biological News
August 20, 2009
It's hard to think of a more important class of drug targets than the G-protein coupled receptors (GPCRs). And back about fifteen years ago, I thought I had a reasonable understanding of how they worked. I was quite wrong, even given the standards of knowledge at the time, but since then the GPCR world has become gradually crazier and crazier.
The classic way of thinking about these receptors is that they live up on the cell surface, with part of the protein on the outside and part on the inside. The inside face is associated with various G-proteins, and the outside face has a binding site for some sort of signaling molecule. If the right molecule shows up and slots in the correct way into this binding cavity, the transmembrane helices of the protein rearrange, sliding around to change the shape and binding properties down there at the G-protein interface. This sets off some intracellular messaging - often by affecting levels of the messenger molecule cyclic-AMP. Thus is a signal from outside the cell relayed through the membrane to the inside.
Pretty nearly makes sense, doesn't it? Well, take a look at this new report from PLoS Biology. The authors rigged up living cells with a built-in fluorescent sensor system to monitor cAMP, and then studied the behavior of the thyroid-stimulating-hormone (TSH) receptor. That's a perfectly reasonable protein-ligand GPCR, but it turns out that it does things that are not (to us) perfectly reasonable.
This paper shows that when a TSH molecule binds, the receptor gets taken back down through the membrane into the cell. That's certainly a known process (internalization), and was thought to be a regulatory one, a standard method for taking a specific GPCR out of the signaling business. Some receptors seem to do this right after they're used, and of those, some of them later resurface and some are broken up. (Other types hang around for many cycles until they're somehow worn out). But the ones that internalize quickly still set off their intracellular message before they get pulled back down. That's their purpose in life.
The TSH receptor does that. But the weird part is that the authors saw the receptor internalize along with its G-protein partners, and then continue signaling from inside the cell. Not only that, this extra signaling behavior set off somewhat different responses as compared to the first "normal" burst, and seems to be a necessary part of the usual TSH signaling pathway. It's a very odd thought, if you're used to thinking about GPCRs - it's like finding out that your cell phone works when it's turned off.
Now this sort of behavior has been demonstrated for a different class of signaling proteins (the tyrosine kinase receptors). And even GPCRs have been found, over the last few years, to be capable of setting off a different signaling regime (the MAP kinase pathway) after they've been internalized. (That's one of the weird findings of recent years that I mentioned in the introductory paragraph, and we still don't know what to do with that one as far as drug discovery goes). But everyone agreed that at least the good ol' cyclic AMP pathway worked the way we thought it did, through signaling at the cell surface, and thank goodness there was something you could still count on in this world.
Hah. Now we're going to have to see how many other GPCRs show this kind of behavior, and under what circumstances, and why. It may well turn out to be different for different cells or for different signaling ligands, or only occur under certain conditions. And we'll have to see how this relates to the other strange things that are being unraveled about GPCR behavior - the way that they can dimerize, with themselves or even other receptors, out on the cell surface, and the way that some of them seem to work in an opposite-sign signaling regime (always on, until something turns them off). Do these things still signal from beneath the waves, too?
Oh, this will keep the receptor folks busy, as if they weren't already. And, as usual when something like this shows up, it should serve as a reminder to anyone who thinks that we understand even the well-worked-out parts of cell biology. Hah!
Category: Biological News
August 18, 2009
I see that there's a serious effort underway to standardize biochemical diagrams. About time! As a chemist, I don't mind admitting that I've been confused by many of these things over the years. As the current task force points out, one reason for that is that there are too many processes that all get drawn the same way: with a curved arrow. Enzymatic cleavage? Allosteric regulation? Product inhibition? Nucleic acid splicing? Enzyme activation? A curvy arrow should do nicely. And if the same scheme includes several of those phenomena at once, then we'll just use more arrows, making sure, of course, that they're all exactly the same size and style.
The new proposal seems to be based on the ideas behind electrical circuit diagrams and flow-chart conventions, and will attempt to convey information through several means (box shapes, arrow styles, etc.) I hope it, or something like it, actually catches on, although it'll take me a while to get used to translating it. Actually, what will take a while is getting used to the idea that biological diagrams are supposed to be imparting information at all. I've been trained in the other direction for too long.
Category: Biological News
August 11, 2009
I was looking over a paper in PNAS, where a group at Stanford describes finding several small molecules that inhibit Hedgehog signaling. That's a very interesting (and ferociously complex) area, and the more tools that are available to study it, the better.
But let me throw something out to those who have read (or will read) the paper. (Here's the PDF, which is open access). The researchers seem to have done a screen against about 125,000 compounds, and come up with four single-digit micromolar hits. Characterizing these against a list of downstream assays showed that each of these acts in a somewhat different manner on the Hedgehog pathway.
And that's fine - the original screen would have picked up a variety of mechanisms, and there certainly are a variety out there to be picked up. I can believe that a list of compounds would differentiate on closer inspection. What I keep looking for, though, is (first) a mention that these compounds were run through some sort of general screening panel for other enzyme and/or receptor activities. They did look for three different kinase activities that had been shown to interfere (and didn't see them), but I'd feel much better about using some new structures as probes if I'd run them through a big panel of secondary assays first.
Second, I've been looking for some indication that there might have been some structure-activity relationships observed. I assume that each of these compounds might well have been part of a series - so how did the related structures fare? Having a one-off compound doesn't negate the data, naturally, although it certainly does make it harder to build anything from the hit you've found. But SAR is another factor that I'd immediately look for after a screen, and it seems strange to me that I can't find any mention of it.
Have I missed these things, or are they just not there? If they aren't, is that a big deal, or not? Thoughts?
Category: Biological News | Drug Assays
July 7, 2009
While we're on the topic of hydrogen bonds and computations, there's a paper coming out in JACS that attempts to answer an old question. Why, exactly, does every living thing on earth use so much ribose? It's the absolute, unchanging carbohydrate backbone to all the RNA on Earth, and like the other things in this category (why L amino acids instead of D?), it's attracted a lot of speculation. If you subscribe to the RNA-first hypothesis of the origins of life, then the question becomes even more pressing.
A few years ago, it was found that ribose, all by itself, diffuses through membranes faster than the other pentose sugars. This result holds up for several kinds of lipid bilayers, suggesting that it's not some property of the membrane itself that's at work. So what about the ability of the sugar molecules to escape from water and into the lipid layers?
Well, they don't differ much in logP, that's for sure, as the original authors point out. This latest paper finds, though, using molecular dynamics simulations, that there is something odd about ribose. In nonpolar environments, its hydroxy groups form a chain of hydrogen-bond-like interactions, particularly notable when it's in the beta-pyranose form. These aren't a factor in aqueous solution, and the other pentoses don't seem to pick up as much stabilization under hydrophobic conditions, either.
So ribose is happier inside the lipid layer than the other sugars, and thus pays less of a price for leaving the aqueous environment, and (both in simulation and experimentally) diffuses across membranes ten times as quickly as its closely related carbohydrate kin. (Try saying that five times fast!) This, as both the original Salk paper and this latest one note, leads to an interesting speculation on why ribose was preferred in the origins of life: it got there firstest with the mostest. (That's a popular misquote of Nathan Bedford Forrest's doctrine of warfare, and if he's ever come up before in a discussion of ribose solvation, I'd like to hear about it).
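To put a rough number on that "pays less of a price" idea: in the standard solubility-diffusion picture, permeability scales with how favorably a solute partitions into the membrane, so the Boltzmann relation tells you how much extra stabilization a ten-fold rate difference requires. Here's a back-of-envelope sketch - my own illustration with made-up inputs, not anything from the paper:

```python
# Toy estimate: in a solubility-diffusion picture, the permeability ratio
# between two similar solutes goes roughly as exp(-ddG/RT), where ddG is
# the difference in their water-to-membrane transfer free energies.
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 310.0      # body temperature, K

def permeability_ratio(ddG_kcal_per_mol):
    """Rough permeability ratio for a given transfer free-energy difference."""
    return math.exp(-ddG_kcal_per_mol / (R * T))

# How much extra membrane stabilization would account for a ~10x rate?
ddG_for_10x = -R * T * math.log(10.0)
print(f"ddG for 10x: {ddG_for_10x:.1f} kcal/mol")        # about -1.4 kcal/mol
print(f"check: {permeability_ratio(ddG_for_10x):.0f}x")  # 10x
```

About 1.4 kcal/mol - on the order of a single decent hydrogen bond, which is just the kind of stabilization that simulated chain of hydroxy-group interactions would provide.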
Category: Biological News | In Silico | Life As We (Don't) Know It
June 22, 2009
We organic chemists have it easy compared to the cell culture people. After all, our reactions aren't alive. If we cool them down, they slow down, and if we heat them up, they'll often pick up where they left off. They don't grow, they don't get infected, and they don't have to be fed.
Cells, though, are a major pain. You can't turn your back on 'em. Part of the problem is that there are, as yet, no cells that have evolved to grow in a dish or a culture bottle. Everything we do to them is artificial, and a lot of what we ask cultured cells to do is clearly not playing to their strengths. Ask Genzyme: they use the workhorse CHO (Chinese Hamster Ovary) cells to produce their biologics, but they've been having variable yield problems over the past few months. Now it turns out that their production facilities are infected with Vesivirus 2117 - I'd never heard of that one, but it interferes with CHO growth, and that's bringing Genzyme's workflow to a halt. (No one's ever reported human infection with that one, just to make that clear).
I assume that the next step is a complete, painstaking cleanup and decontamination. That's going to affect supplies of Cerezyme (imiglucerase) and Fabrazyme (agalsidase beta) late in the summer and into the fall, although it's not clear yet how long the outage will be. Any cell culture lab that's had to toss things due to mycoplasma or other nasties will sympathize, and shudder at the thought of cleaning things up on this scale.
Category: Biological News | Drug Development
May 13, 2009
Now, this is an example of an idea being followed through to its logical conclusion. Here’s where we start: the good effects of exercise are well known, and seem to be beyond argument. Among these are marked improvements in insulin resistance (the hallmark of type II diabetes) and glucose uptake. In fact, exercise, combined with losing adipose weight, is absolutely the best therapy for mild cases of adult-onset diabetes, and can truly reverse the condition, an effect no other treatment can match.
So, what actually causes these exercise effects? There has to be a signal (or set of signals) down at the molecular level that tells your cells what’s happening, and initiates changes in their metabolism. One good candidate is the formation of reactive oxygen species (ROS) in the mitochondria. Exercise most certainly increases a person’s use of oxygen, and increases the work load on the mitochondria (since that’s where all the biochemical energy is coming from, anyway). Increased mitochondrial formation of ROS has been well documented, and they have a lot of physiological effects.
Of course, ROS are also implicated in many theories of aging and cellular damage, which is why cells have several systems to try to soak these things up. That’s exactly why people take antioxidants, vitamin C and vitamin E especially. So. . .what if you take those while you’re exercising?
A new paper in PNAS asks that exact question. About forty healthy young male volunteers took part in the study, which involved four weeks of identical exercise programs. Half of the volunteers were already in athletic training, and half weren’t. Both groups were then split again, and half of each cohort took 1000 mg/day of vitamin C and 400 IU/day vitamin E, while the other half took no antioxidants at all. So, we have the effects of exercise, plus and minus previous training, and plus and minus antioxidants.
And as it turns out, antioxidant supplements appear to cancel out many of the beneficial effects of exercise. Soaking up those transient bursts of reactive oxygen species keeps them from signaling. Looked at the other way, oxidative stress could be a key to preventing type II diabetes. Glucose uptake and insulin sensitivity aren't affected by exercise if you're taking supplementary amounts of vitamins C and E, and this effect is seen all the way down to molecular markers such as the PPAR coactivator proteins PGC1 alpha and beta. In fact, this paper seems to constitute strong evidence that ROS are the key mediators for the effects of exercise, and that this process is mediated through PGC1 and PPAR-gamma. (Note that PPAR-gamma is the target of the glitazone class of drugs for type II diabetes, although signaling in this area is notoriously complex).
Interestingly, exercise also increases the body's endogenous antioxidant systems - superoxide dismutase and so on. These are some of the gene targets of PPAR-gamma, suggesting that these are downstream effects. Taking antioxidant supplements kept these from going up, too. All these effects were slightly more pronounced in the group that hadn't been exercising before, but were still very strong across the board.
This confirms the suspicions raised by a paper from a group in Valencia last year, which showed that vitamin C supplementation seemed to decrease the development of endurance capacity during an exercise program. I think that there's enough evidence to go ahead and say it: exercise and antioxidants work against each other. The whole take-antioxidants-for-better-health idea, which has been taking some hits in recent years, has just taken another big one.
Category: Aging and Lifespan | Biological News | Cardiovascular Disease | Diabetes and Obesity
May 1, 2009
One of Merck’s less wonderful recent experiences was the rejection of Cordaptive, which was an attempt to make a niacin combination for the cardiovascular market. Niacin would actually be a pretty good drug to improve lipid profiles if people could stand to take the doses needed. But many people experience a burning, itchy skin flush that’s enough to make them give up on the stuff. And that’s too bad, because it’s the best HDL-raising therapy on the market. It also lowers LDL, VLDL, free fatty acids, and triglycerides, which is a pretty impressive spectrum. So it’s no wonder that Merck (and others) have tried to find some way to make it more tolerable.
A new paper suggests that everyone has perhaps been looking in the wrong place for that prize. A group at Duke has found that the lipid effects and the cutaneous flushing are mechanistically distinct, way back at the beginning of the process. There might be a new way to separate the two.
Niacin’s target seems to be the G-protein coupled receptor GPR109A – and, unfortunately, that seems to be involved in the flushing response, since both that and the lipid effects disappear if you knock out the receptor in a mouse model. The current model is that activation of the receptor produces the prostaglandin PGD2 (among other things), and that’s what does the skin flush, when it hits its own receptor later on. Merck’s approach to the side effect was to block the PGD2 receptor by adding an antagonist drug for it along with the niacin. But taking out the skin flush at that point means doing it at nearly the last possible step.
The Duke team has looked closely at the signaling of the GPR109A receptor and found that beta-arrestins are involved (they’ve specialized in this area over the last few years). The arrestins are proteins that modify receptor signaling through a variety of mechanisms, not all of which are well understood. We’ve known about signaling through the G-proteins for many years (witness the name of the whole class of receptors), but beta-arrestin-driven signaling is a sort of alternate universe. (GPCRs have been developing quite a few alternate universes – the field was never easy to understand, but it’s becoming absolutely baroque).
As it turns out, mice that are deficient in either beta-arrestin 1 or beta-arrestin 2 show the same lipid effects in response to niacin dosing as normal mice. But the mice lacking much of their beta-arrestin 1 protein show a really significant loss of the flushing response, suggesting that it’s mediated through that signaling pathway (as opposed to the “normal” G-protein one). And a known GPR109A ligand that doesn’t seem to cause so much skin flushing (MK-0354) fit the theory perfectly: it caused G-protein signaling, but didn’t bring in beta-arrestin 1.
So the evidence looks pretty good here. This all suggests that screening for compounds that hit the receptor but don’t activate the beta-arrestin pathway would take you right to the pharmacology you want. And I suspect that several labs are going to now put that idea to the test, since beta-arrestin assays are also being looked at in general. . .
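Spelled out as a truth table, the screening logic looks like this (just my encoding of the pharmacology described above, not anything from the paper):

```python
# Cartoon of biased agonism at GPR109A, as described: the lipid effects
# ride on the G-protein arm, while the flush rides on beta-arrestin 1.
def gpr109a_effects(g_protein_arm: bool, arrestin1_arm: bool) -> dict:
    return {"lipid effects": g_protein_arm, "skin flush": arrestin1_arm}

print("niacin: ", gpr109a_effects(g_protein_arm=True, arrestin1_arm=True))
print("MK-0354:", gpr109a_effects(g_protein_arm=True, arrestin1_arm=False))
# The screen you'd run: keep the first output True while driving the
# second one False.
```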
Category: Biological News | Cardiovascular Disease | Toxicology
April 29, 2009
What a mess! Science has a retraction of a 2005 paper, which is always a nasty enough business, but in this case, the authors can’t agree on whether it should be retracted or not. And no one seems to be able to agree on whether the original results were real, and (even if they weren’t) whether the technique the paper describes works anyway. Well.
The original paper (free full text), from two Korean research groups, described a drug target discovery technique with the acronym MAGIC (MAGnetism-based Interaction Capture). It’s a fairly straightforward idea in principle: coat a magnetic nanoparticle with a molecule whose target(s) you’re trying to identify. Now take cell lines whose proteins have had various fluorescent tags put on them, and get the nanoparticles into them. If you then apply a strong magnetic field to the cells, the magnetic particles will be pulled around, and they’ll drag along whichever proteins have associated with your bait molecule. Watch the process under a microscope, and see which fluorescent spots move in which cells.
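Just to make the readout concrete, here's a cartoon of the decision logic - entirely my own sketch with invented numbers, not the authors' software: a tagged protein counts as a candidate binder only if its fluorescent spot translocates while the field is on and stays put when it's off.

```python
# Toy classifier for a MAGIC-style experiment: compare each spot's net
# displacement with the magnetic field on vs. off.
import math

def net_displacement(track):
    """Net displacement (microns) between the first and last (x, y) points."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    return math.hypot(x1 - x0, y1 - y0)

def is_candidate_binder(track_on, track_off, threshold_um=2.0):
    """Hit = moves under the field, doesn't move without it."""
    return (net_displacement(track_on) > threshold_um
            and net_displacement(track_off) < threshold_um)

# Hypothetical tracks for one fluorescently tagged protein:
field_on  = [(10.0, 10.0), (11.5, 10.2), (13.8, 10.5)]  # drifts toward magnet
field_off = [(10.0, 10.0), (10.1, 9.9), (10.0, 10.1)]   # just jitters
print(is_candidate_binder(field_on, field_off))  # True -> possible target
```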
Papers were published (in both Science and Nature Chemical Biology), patent applications were filed (well, not in that order!), startup money was raised for a company to be called CGK. . .and then troubles began. Word was that the technique wasn’t reproducible. One of the authors (Yong-Weon Yi) asked that his name be removed from the publications, which was rather problematic of him, considering that he was also an inventor on the patent application. Early last year, investigations by the Korean Advanced Institute of Science and Technology came to the disturbing conclusion that the papers “do not contain any scientific truth”, and the journals flagged them.
The Nature Chemical Biology paper was retracted last July, but the Science paper has been a real rugby scrum, as the journal details here. The editorial staff seems to have been unable to reach one of the authors (Neoncheol Jung), and they still don’t know where he is. That’s disconcerting, since he’s still listed as the founding CEO of CGK. A complex legal struggle has erupted between the company and the KAIST about who has commercial rights to the technology, which surely isn’t being helped along by the fact that everyone is disagreeing about whether it works at all, or ever has. Science says that they’ve received parts of the KAIST report, which states that the authors couldn’t produce any notebooks or original data to support any of the experiments in the paper. This is Most Ungood, of course, and on top of that, two of the authors also appear to have stated that the key experiments (where they moved the fluorescent proteins around) were not carried out as the paper says. Meanwhile, everyone involved is now suing everyone else back in Korea for fraud, for defamation, and who knows what else. The target date for all this to be resolved is somewhere around the crack of doom.
Emerging from the fiery crater, CGK came up with another (very closely related) technique, which they published late last year in JACS. (If nothing else, everyone involved is certainly getting their work into an impressive list of journals. If only the papers wouldn’t keep sliding right back out. . .) That one has stood up so far, but it’s only April. I presume that the editorial staff at JACS asked for all kinds of data in support, but (as this whole affair shows) you can’t necessarily assume that everyone’s doing the job they’re supposed to do.
The new paper, most interestingly, does not reference the previous work at all, which I suppose makes sense on one level. But if you just came across it de novo, you wouldn't realize that people (at the same company!) had already been (supposedly) working on magnetic particle assays in living cells. Looking over this one and comparing it to the original Science paper, one of the biggest differences seems to be how the magnetic particles are made to expose themselves to the cytoplasm. The earlier work mentioned coating the particles with a fusogenic protein (TAT-HA2) that was claimed to help with this process; that step is nowhere to be found in the JACS work. Otherwise, the process looks pretty much identical to me.
Let’s come up for air, then, and ask how useful these ideas could be, stipulating (deep breath) that they work. Clearly, there’s some utility here. But I have to wonder how useful this protocol will be for general target fishing expeditions. Fluorescent labeling of proteins is indeed one of the wonders of the world (and was the subject of a recent, well-deserved Nobel Prize). But not all proteins can be labeled without disturbing their function – and if you don’t know what the protein’s up to in the first place, you’re never sure if you’ve done something to perturb it when you add the glowing parts. There are also a lot of proteins, of course, to put it mildly, and if you don’t have any idea of where to start looking for targets, you still have a major amount of work to do. The cleanest use I can think of for these experiments is verifying (or ruling out) hypotheses for individual proteins.
But that's if it works. And at this point, who knows? I'll be very interested to follow this story, and to see if anyone else picks up this technique and gets it to work. Who's brave enough?
Category: Biological News | Drug Assays | The Dark Side | The Scientific Literature
April 17, 2009
So I see that the headlines are that it’s proving difficult to relate gene sequences to specific diseases. (Here's the NEJM, free full-text). I can tell you that the reaction around the drug industry to this news is a weary roll of the eyes and a muttered “Ya don’t say. . .”
That’s because we put our money down early on the whole gene-to-disease paradigm, and in a big way. As I’ve written here before, there was a real frenzy in the industry back in the late 1990s as the genomics efforts started really revving up. Everyone had the fear that all the drug targets that ever were, or ever could be, were about to be discovered, annotated, patented – and licensed to the competition, who were out there fearless on the cutting edge, ready to leap into the future, while we (on the other hand) lounged around like dinosaurs looking sleepily at that big asteroidy thing up there in the sky.
No, that’s really how it felt. Every day brought another press release about another big genomics deal. The train (all the trains!) was loudly leaving the station. A lot of very expensive deals were cut, sometimes in great haste, but (as far as I can tell) they yielded next to nothing – at least in terms of drug candidates, or even real drug targets themselves.
So yeah, we’ve already had a very expensive lesson in how hard it is to associate specific gene sequences with specific diseases. The cases where you can draw a dark, clear line between the two increasingly look like exceptions. There are a lot of these (you can read about them in these texts), but they tend to affect small groups of people at a time. The biggest diseases (diabetes, cardiovascular in general, Alzheimer’s, most cancers) seem to be associated with a vast number of genetic factors, most of them fairly fuzzy, and hardly any of them strong enough on their own to make a big difference one way or another. Combine that with the nongenetic (or epigenetic) factors like nutrition, lifestyle, immune response, and so on, and you have a real brew.
On that point, I like E. O. Wilson’s metaphor for nature versus nurture. He likened a person’s genetic inheritance to a photographic negative. Depending on how it’s developed and printed, the resulting picture can turn out a lot of different ways – but there’s never going to be more than was in there to start with. (These days, I suppose that we’re going to have to hunt for another simile – Photoshop is perhaps a bit too powerful to let loose inside that one).
But I've been talking mostly about variations in proteins as set by their corresponding DNA sequences. The real headscratcher has been this:
One observation that has taken many observers by surprise is that most loci that have been discovered through genomewide association analysis do not map to amino acid changes in proteins. Indeed, many of the loci do not even map to recognizable protein open reading frames but rather may act in the RNA world by altering either transcriptional or translational efficiency. They are thus predicted to affect gene expression. Effects on expression may be quite varied and include temporal and spatial effects on gene expression that may be broadly characterized as those that alter transcript levels in a constitutive manner, those that modulate transcript expression in response to stimuli, and those that affect splicing.
That's really going to be a major effort to understand, because we clearly don't understand it very well now. RNA effects have emerged over the last ten or fifteen years as a major factor in living systems - one we really weren't aware of - and it would be foolish to think that the last fireworks have gone off.
Category: Biological News | Drug Industry History
March 26, 2009
So, people like me spend their time trying to make small molecules that will bind to some target protein. So what happens, anyway, when a small molecule binds to a target protein? Right, right, it interacts with some site on the thing, hydrogen bonds, hydrophobic interactions, all that – but what really happens?
That’s surprisingly hard to work out. The tools we have to look at such things are powerful, but they have limitations. X-ray crystal structures are great, but can lead you astray if you’re not careful. The biggest problem with them, though (in my opinion) is that you see this beautiful frozen picture of your drug candidate in the protein, and you start to think of the binding as. . .well, as this beautiful frozen picture. Which is the last thing it really is.
Proteins are dynamic, to a degree that many medicinal chemists have trouble keeping in mind. Looking at binding events in solution is more realistic than looking at them in the crystal, but it’s harder to do. There are various NMR methods (here's a recent review), some of which require specially labeled protein to work well, but they have to be interpreted in the context of NMR’s time scale limitations. “Normal” NMR experiments give you time-averaged spectra – if you want to see things happening quickly, or if you want to catch snapshots of the intermediate states along the way, you have a lot more work to do.
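To put numbers on that time scale problem: for the textbook case of two equally populated exchanging states, the peaks coalesce when the exchange rate reaches about πΔν/√2, with Δν the chemical shift difference in Hz. A quick sketch (mine, just the standard formula):

```python
# Two-site, equal-population exchange: peaks merge when k ~ pi*dv/sqrt(2).
# Anything much faster than k_c shows up only as a time-averaged spectrum.
import math

def coalescence_rate(delta_nu_hz):
    return math.pi * delta_nu_hz / math.sqrt(2.0)

for dv in (10.0, 100.0, 1000.0):
    k = coalescence_rate(dv)
    print(f"dv = {dv:6.0f} Hz -> k_c ~ {k:7.0f} /s (~{1e3 / k:.2f} ms)")
```

So for typical shift differences, the crossover sits in the millisecond range, and processes in the microsecond-to-millisecond window are exactly the awkward middle ground that the specialized experiments in that review were designed to probe.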
Here’s a recent paper that’s done some of that work. They’re looking at a well-known enzyme, dihydrofolate reductase (DHFR). It’s the target of methotrexate, a classic chemotherapy drug, and of the antibiotic trimethoprim. (As a side note, that points out the connections that sometimes exist between oncology and anti-infectives. DHFR produces tetrahydrofolate, which is necessary for a host of key biosynthetic pathways. Inhibiting it is especially hard on cells that are spending a lot of their metabolic energy on dividing – such as tumor cells and invasive bacteria).
What they found was that both inhibitors do something similar, and it affects the whole conformational ensemble of the protein:
". . .residues lining the drugs retain their μs-ms switching, whereas distal loops stop switching altogether. Thus, as a whole, the inhibited protein is dynamically dysfunctional. Drug-bound DHFR appears to be on the brink of a global transition, but its restricted loops prevent the transition from occurring, leaving a “half-switching” enzyme. Changes in pico- to nanosecond (ps-ns) backbone amide and side-chain methyl dynamics indicate drug binding is “felt” throughout the protein."
There are implications, though, for apparently similar compounds having rather different effects out in the other loops:
". . .motion across a wide range of timescales can be regulated by the specific nature of ligands bound. Occupation of the active site by small ligands of different shapes and physical characteristics places differential stresses on the enzyme, resulting in differential thermal fluctuations that propagate through the structure. In this view, enzymes, through evolution, develop sensitivities to ligand properties from which mechanisms for organizing and building such fluctuations into useful work can arise. . .Because the affected loop structures are primarily not in contact with drug, it is reasonable to envision inhibitory small-molecule drugs that act by allosterically modulating dynamic motions."
There are plenty of references in the paper to other investigations of this kind, so if this is your sort of thing, you'll find plenty of material there. One thing to take home, though, is to remember that not only are proteins mobile beasts (with and without ligand bound to them), but that this mobility is quite different in each state. And keep in mind that the ligand-bound state can be quite odd compared to anything else the protein experiences otherwise. . .
Category: Biological News | Cancer | Chemical News | In Silico
March 24, 2009
I’ve written here before about the "click" triazole chemistry that Barry Sharpless’s group has pioneered out at Scripps. This reaction has been finding a lot of uses over the last few years (try this category for a few, and look for the word "click"). One of the facets I find most interesting is the way that they’ve been able to use this Huisgen acetylene/azide cycloaddition reaction to form inhibitors of several enzymes in situ, just by combining suitable coupling partners in the presence of the protein. Normally you have to heat that reaction up quite a bit to get it to go, but when the two reactants are forced into proximity inside the protein, the rate speeds up enough to detect a product.
Note that I said “inside the protein”. My mental picture of these things has involved binding-site cavities where the compounds are pretty well tied down. But a new paper from Jim Heath’s group at Caltech, collaborating with Sharpless and his team, demonstrates something new. They’re now getting this reaction to work out on protein surfaces, and in the process making what are basically artificial antibody-type binding agents.
To start with, they prepared a large library of hexapeptides out of the unnatural D-amino acids, in a one-bead-one-compound format. (Heath’s group has been working in this area for a while, and has experience dealing with these - see this PDF presentation for an overview of their research). Each peptide had an acetylene-containing amino acid at one end, for later use. They exposed these to a protein target: carbonic anhydrase II, the friend of every chemist who’s trying to make proteins do unusual things. The oligopeptide that showed the best binding to the protein’s surface was then incubated with the target CA II protein and another library of diverse hexapeptides. These had azide-containing amino acids at both ends, and the hope was that some of these would come close enough, in the presence of the protein, to react with the anchor acetylene peptide.
Startlingly, this actually worked. A few of the azide oligopeptides did do the click triazole-forming reaction. And the ones that worked all had related sequences, strongly suggesting that this was no fluke. What impresses me here is that (1) these things were lying on top of the protein, picking up what interactions they could, not buried inside a more restrictive binding site, and (2) the click reaction worked even though the binding constants of the two partners must not have been all that impressive. The original acetylene hexapeptide, in fact, bound at only 500 micromolar, and the azide-containing hexapeptides that reacted with it were surely in the same ballpark.
The combined beast, though, (hexapeptide-triazole-hexapeptide) was a 3 micromolar compound. And then they took the thing through another round of the same process, decorating the end with a reactive acetylene and exposing it to the same azide oligopeptide library in the presence of the carbonic anhydrase target. The process worked again, generating a new three-oligopeptide structure which now showed 50 nanomolar binding. This increase in affinity over the whole process is impressive, but it’s just what you’d expect as you start combining pieces that have some affinity on their own. Importantly, when they made a library on beads by coupling the whole list of azide-containing hexapeptides with the biligand (through the now-standard copper-catalyzed reaction), the target CA II protein picked out the same sequences that were generated by the in situ experiment.
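The energetics of that progression are worth a quick look. Converting those dissociation constants into binding free energies (standard thermodynamics; my arithmetic, not the paper's analysis) shows each linking round adding roughly 3 kcal/mol:

```python
# dG = RT * ln(Kd): more negative means tighter binding.
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.0      # K

def binding_dG(kd_molar):
    return R * T * math.log(kd_molar)

for name, kd in [("anchor hexapeptide", 500e-6),
                 ("biligand",           3e-6),
                 ("triligand",          50e-9)]:
    print(f"{name:18s} Kd = {kd:7.1e} M -> dG ~ {binding_dG(kd):5.1f} kcal/mol")
# Roughly -4.5, -7.5, and -10 kcal/mol: each added arm kicks in about
# what you'd expect from a weakly binding fragment of its own.
```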
So what you have, in the end, is a short protein-like thing (actually three small peptides held together by triazole linkers) that has been specifically raised to bind a protein target – thus the comparison to antibodies above. What we don't know yet, of course, is just how this beast is binding to the carbonic anhydrase protein. It would appear to be stretched across some non-functional surface, though, because the triligand didn't seem to interfere with the enzyme's activity once it was bound. I'd be very interested in seeing if an X-ray structure could be generated for the triligand complex or any of the others. Heath's group is now apparently trying to generate such agents for other proteins and to develop assays based on them. I look forward to seeing how general the technique is.
This result makes a person wonder if the whole in situ triazole reaction could be used to generate inhibitors of protein-protein interactions. Doing that with small molecules is quite a bit different than doing it with hexapeptide chains, of course, but there may well be some hope. And there's another paper I need to talk about that bears on the topic; I'll bring that one up shortly. . .
Category: Biological News | Chemical News
March 4, 2009
Well, here’s another crack at open-source science. Stephen Friend, the previous head of Rosetta (before and after being bought by Merck), is heading out on his own to form a venture in Seattle called Sage. The idea is to bring together genomic studies from all sorts of laboratories into a common format and database, with the expectation that interesting results will emerge that couldn’t be found from just one lab’s data.
I’ll be interested to see if this does yield something worthwhile – in fact, I’ll be interested to see if it gets off the ground at all. As I’ve discussed before, the analogy with open-source software doesn’t hold up so well with most scientific research these days, since the entry barriers (facilities, equipment, and money) are significantly higher than they are in coding. Look at genomics – the cost of sequencing has been dropping, for sure, but it’s still very expensive to get into the game. That lowered cost is measured per base sequenced – today’s technology means that you sequence more bases, which means that the absolute cost hasn’t come down as much as you might think. I’m sure you can get ten-year-old equipment cheap, but it won’t let you do the kind of experiments you might want to do, at least not in the time you’ll be expected to do them in.
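A cartoon of that per-base arithmetic, with invented prices just to show the shape of the problem:

```python
# Entirely made-up numbers: cost per megabase falls steadily, but the
# experiment you're expected to run grows even faster.
runs = [
    # (year, dollars per megabase, megabases per study)
    (1999, 5000.0,      1.0),
    (2004,  500.0,    100.0),
    (2009,   50.0,  10000.0),
]
for year, per_mb, mb in runs:
    print(f"{year}: ${per_mb:7,.0f}/Mb x {mb:8,.0f} Mb = ${per_mb * mb:10,.0f}")
# The per-base price drops 100-fold while the total bill rises 100-fold.
```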
But even past that issue, once you get down to the many labs that can do high-level genomics (or to the even larger number that can do less extensive sequencing), the problems will be many. Sage is also going to look at gene expression levels, something that's easier to do (although we're still not in weekend-garage territory yet). Some people would say that it's a bit too easy to do: there are a lot of different techniques in this field, not all of which always yield comparable data, to put it mildly. There have been several attempts to standardize things, along with calls for more control experiments, but getting all these numbers together into a useful form will still not be trivial.
Then you've got the really hard issues: intellectual property, for one. If you do discover something by comparing all these tissues from different disease states, who gets to profit from it? Someone will want to, that's for sure, and if Sage itself isn't getting a cut, how will they keep their operation going? Once past that question (which is a whopper), and past all the operational questions, there's an even bigger one: is this approach going to tell us anything we can use at all?
At first thought, you'd figure that it has to. Gene sequences and gene expression are indeed linked to disease states, and if we're ever going to have a complete understanding of human biology, we're going to have to know how. But. . .we're an awful long way from that. Look at the money that's been poured into biomarker development by the drug industry. A reasonable amount of that has gone into gene expression studies, trying to find clear signs and correlations with disease, and it's been rough sledding.
So you can look at this two ways: you can say fine, that means that the correlations may well be there, but they're going to be hard to find, so we're going to have to pool as much data as possible to do it. Thus Sage, and good luck to them. Or the systems may be so complex that useful correlations may not even be apparent at all, at least at our current level of understanding. I'm not sure which camp I fall into, but we'll have to keep making the effort in order to find out who's right.
Category: Biological News | Drug Development
November 11, 2008
I wrote a while back about the problem of compounds sticking to labware. That sort of thing happens more often than you’d think, and it can really hose up your assay data in ways that will send you running around in circles. Now there’s a report in Science of something that’s arguably even worse. (Here's a good report on it from Bloomberg, one of the few to appear in the popular press).
The authors were getting odd results in an assay with monoamine oxidase B enzyme, and tracked it down to two compounds leaching out of the disposable plasticware (pipette tips, assay plates, Eppendorf vials, and so on). Oleamide is used as a “slip agent” to keep the plastic units from sticking to each other, but it’s also a MAO-B inhibitor. Another problem was an ammonium salt called DiHEMDA, which is put in as a general biocide – and it appears to be another MAO-B inhibitor.
Neither of them is incredibly potent, but if you’re doing careful kinetic experiments or the like, it’s certainly enough to throw things off. The authors found that just rinsing water through various plastic vessels was enough to turn the solution into an enzyme inhibitor. Adding organic solvents (10% DMSO, methanol) made the problem much worse; presumably these extract more contaminants.
And it’s not just this one enzyme. They also saw effects on a radioligand binding assay to the GABA-A receptor, and they point out that the biocides used are known to show substantial protein and DNA binding. These things could be throwing assay data around all over the place – and as we work in smaller and smaller volumes, with more complex protocols, the chances of running into trouble increase.
What to do about all this? Well, at a minimum, people should be sure to run blank controls for all their assays. That’s good practice, but sometimes it gets skipped over. This effect has probably been noted many times before as some sort of background noise in such controls, and many times you should be able to just subtract it out. But there are still many experiments where you can’t get away from the problem so easily, and it’s going to make your error bars wider no matter what you do about it. There are glass inserts for 96-well plates, and there are different plastics from different manufacturers. But working your way through all that is no fun at all.
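Here's a minimal sketch of that subtract-it-out step, with invented numbers: buffer rinsed through the plasticware defines the background inhibition, which you then fold into the compound reading.

```python
# Wells with plastic-rinsed buffer only (no compound) define the background.
blank_activity = [0.91, 0.93, 0.90]  # fraction of enzyme activity remaining
fresh_control  = 1.00                # activity in untreated buffer, defined as 1
with_compound  = 0.45                # activity with the test compound

background_inhibition = fresh_control - sum(blank_activity) / len(blank_activity)
corrected_activity = with_compound + background_inhibition

print(f"plasticware background: {background_inhibition:.0%} inhibition")
print(f"apparent inhibition {1 - with_compound:.0%} -> "
      f"corrected {1 - corrected_activity:.0%}")
# Workable when the background is small and additive - and useless when
# the leachate interacts with your compound or your protein.
```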
As an aside, this sort of thing might still make it into the newspapers, since there have been a lot of concerns about bisphenol A and other plastic contaminants. In this case, I think the problem is far greater for lab assays than it is for human exposures. I’m not so worried about things like oleamide, since these are found in the body anyway, and can easily be metabolized. The biocides might be a different case, but I assume that we’re loaded with all kinds of substances, almost all of them endogenous, that are better inhibitors of enzymes like MAO-B. And at any rate, we’re exposed to all kinds of wild stuff at low levels, just from the natural components of our diet. Our livers are there to deal with just that sort of thing, but that said, it’s always worth checking to make sure that they’re up to the job.
Category: Biological News | Drug Assays
November 7, 2008
Systems biology – depending on your orientation, this may be a term that you haven’t heard yet, or one from the cutting edge of research, or something that’s already making you roll your eyes at its unfulfilled promise. There’s a good spread of possible reactions.
Broadly, I’d say that the field is concerned with trying to model the interactions of whole biological systems, in an attempt to come up with some explanatory power. It’s the sort of thing that you could only imagine trying to do with modern biological and computational techniques, but whether these are up to the job is still an open question. This gets back to a common theme that I stress around here, that biochemical networks are hideously, inhumanly complex. There’s really no everyday analogy that works to describe what they’re like, and if you think you really understand them, then you’re in the same position as all those financial people who thought they understood their exposure to mortgage-backed security risks.
You’ll have this enzyme, you see, that phosphorylates another enzyme, which increases its activity. But that product of that second enzyme inhibits another enzyme that acts to activate the first one, and each of them also interacts with fourteen (or forty-three) others, some of which are only expressed under certain conditions that we don’t quite understand, or are localized in the cell in patterns that aren’t yet clear, and then someone discovers a completely new enzyme in the middle of the pathway that makes hash out of what we thought we knew about. . .
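If you want to feel this in your bones, here's a toy of just the first loop in that run-on sentence - my own cartoon with made-up rate constants, nothing measured: enzyme A activates B, and B's product feeds back to repress A's activation.

```python
# Two-node negative feedback, integrated crudely with Euler steps.
def step(a, b, dt=0.01):
    da = 1.0 / (1.0 + 5.0 * b) - 0.5 * a  # B's product represses A's activation
    db = 2.0 * a - 0.8 * b                # active A drives B
    return a + da * dt, b + db * dt

a, b = 0.1, 0.0
for _ in range(5000):
    a, b = step(a, b)
print(f"settles near A = {a:.2f}, B = {b:.2f}")
# Change any one constant and the balance point moves; now add the other
# fourteen (or forty-three) interactions and good luck.
```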
So my first test for listening to systems biology people is whether they approach things with the proper humility. There’s a good article in Nature on the state of the field, which does point out that some of the early big-deal-big-noise articles in the field alienated many potential supporters through just this effect. But work continues, and a lot of drug companies are putting money into it, under the inarguable “we need all the help we can get” heading.
One of the biggest investors has been Merck, a big part of that being their purchase a few years ago of Rosetta Inpharmatics. That group published an interesting paper earlier this year (also in Nature) on some of the genetic underpinnings of metabolic disease. A phrase from the article's abstract emphasizes the difficulties of doing this work: "Our analysis provides direct experimental support that complex traits such as obesity are emergent properties of molecular networks that are modulated by complex genetic loci and environmental factors." Yes, indeed.
But here’s a worrisome thing that didn’t make the article: Merck recently closed the Seattle base of the Rosetta team, in its latest round of restructuring and layoffs. One assumes that many of them are being transitioned to the Merck mothership, and that the company is still putting money into this approach, but there is room to wonder. (Update: here's an article on this very subject). There is this quote from the recent overview:
Stephen Friend, Merck's vice-president for oncology, thinks that any hesitancy will be overcome when the modelling becomes so predictive that the toxicity and efficacy of a potential drug can be forecast very accurately even before an experimental animal is brought out of its cage. "The next three to five years will provide a couple such landmark predictions and wake everyone up," he says.
Well, we’ll see if he’s right about that timeframe, and I hope he is. I fear that the problem is one of those that appears large, and as you get closer to it, does nothing but get even larger. My opinion, for what it’s worth, is that it’s very likely too early to be able to come up with any big insights from the systems approach. But I can’t estimate the chances that I’m wrong about that, and the potential payoffs are large. For now, I think the best odds are in the smaller studies, narrowing down on single targets or signaling networks. That cuts down on the possibility that you’re going to find something revolutionary, but it increases the chance that anything you find is actually real. Talk of “virtual cells” and “virtual genomes” is, to my mind, way premature, and anyone who sells the technology in those terms should, I think, be regarded with caution.
But that said, any improvement is a big one. Our failure rates due to tox and efficacy problems are so horrendous that just taking some of these things down 10% (in real terms) would be a startling breakthrough. And we’re definitely not going to get this approach to work if we don’t plow money and effort into it; it’s not going to discover itself. So press on, systems people, and good luck. You’re going to need it; we all do.
Category: Biological News
October 31, 2008
Let’s talk sugar, and how you know if you’ve eaten enough of it. Just in time for Halloween! This is a field I’ve done drug discovery work in, and it’s a tricky business. But some of the signals are being worked out.
Blood glucose, as the usual circulating energy source in the body, is a good measure of whether you’ve eaten recently. If you skip a meal (or two), your body will start mobilizing fatty acids from your stored supplies, and circulate them for food. But there’s one organ that runs almost entirely on sugar, no matter what the conditions: the brain. Even if you’re fasting, your liver will make sugar from scratch for your brain to use.
And as you’d expect, brain glucose levels are one mechanism the body uses to decide whether to keep eating or not. A cascade of enzyme signals has been worked out over the years, and the current consensus seems to be that high glucose in the brain inactivates AMP kinase (AMPK). (That’s a key enzyme for monitoring the energy balance in the brain – it senses differences in concentration between ATP, the energy currency inside every cell, and its product and precursor, AMP). Losing that AMPK enzyme activity then removes the brakes on the activity of another enzyme, acetyl-CoA carboxylase (ACC). (That one’s a key regulator of fatty acid synthesis – all this stuff is hooked together wonderfully). ACC produces malonyl-CoA, and that seems to be a signal to the hypothalamus of the brain that you’re full (several signaling proteins are released at that point to spread the news).
You can observe this sort of thing in lab rats – if you infuse extra glucose into their brains, they stop eating, even under conditions when they otherwise would keep going. A few years ago, an odd result was found when this experiment was tried with fructose: instead of lowering food intake, infusing fructose into the central nervous system made the animals actually eat more. That’s not what you’d expect, since in the end, fructose ends up metabolized to the same thing as glucose does (pyruvate), and used to make ATP. So why the difference in feeding signals?
A paper in PNAS (open access PDF) from a team at Johns Hopkins and Ibaraki University in Japan now has a possible explanation. Glucose metabolism is very tightly regulated, as you’d expect for the main fuel source of virtually every living cell. But fructose is a different matter. It bypasses the rate-limiting step of the glucose pathway, and is metabolized much more quickly than glucose is. It appears that this fast (and comparatively unregulated) process actually uses up ATP in the hypothalamus – you’re basically revving up the enzyme machinery early in the pathway (ketohexokinase in particular) so much that you’re burning off the local ATP supply to run it.
Glucose, on the other hand, causes ATP levels in the brain to rise – which turns down AMPK, which turns up ACC, which allows malonyl-CoA to rise, and turns off appetite. But when ATP levels fall, AMPK is getting the message that energy supplies are low: eat, eat! Both the glucose and fructose effects on brain ATP can be seen at the ten-minute mark and are quite pronounced at twenty minutes. The paper went on to look at the activities of AMPK and ACC, the resulting levels of malonyl CoA, and everything was reversed for fructose (as opposed to glucose) right down the line. Even expression of the signaling peptides at the end of the process looks different.
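That inverted logic is easier to keep straight written out explicitly. This is just my encoding of the cascade as described, not a model from the paper:

```python
# Brain ATP rising -> AMPK off -> ACC on -> malonyl-CoA up -> stop eating.
# Fructose runs the same chain in reverse by burning local ATP.
def appetite_signal(brain_atp_rising: bool) -> str:
    ampk_active = not brain_atp_rising  # AMPK senses an energy deficit
    acc_active = not ampk_active        # active AMPK puts the brakes on ACC
    malonyl_coa_high = acc_active       # ACC output is the satiety signal
    return "stop eating" if malonyl_coa_high else "keep eating"

print("glucose infusion: ", appetite_signal(brain_atp_rising=True))   # stop eating
print("fructose infusion:", appetite_signal(brain_atp_rising=False))  # keep eating
```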
The implications for human metabolism are clear: many have suspected that fructose could in fact be doing us some harm. (This New York Times piece from 2006 is a good look at the field: it's important to remember that this is very much an open question). But metabolic signaling could well be altered when fructose, rather than glucose, is the energy source. The large amount of high-fructose corn syrup produced and used in the US and other industrialized countries makes this an issue with very large political, economic, and public health implications.
This paper tells a compelling story – so, what are its weak points? Well, for one thing, you’d want to make sure that those fructose-metabolizing enzymes are indeed present in the key cells in the hypothalamus. And an even more important point is that fructose has to get into the brain. These studies were dropping it in directly through the skull, but that’s not how most people drink sodas. For this whole appetite-signaling hypothesis to work in the real world, fructose taken in orally would have to find its way to the hypothalamus. There’s some evidence that this is the case, but that fructose would have to find its way past the liver first.
On the other hand, it could be that this ATP-lowering effect could also be taking place in liver cells, and causing some sort of metabolic disruption there. AMPK and ACC are tremendously important enzymes, with a wide range of effects on metabolism, so there's a lot of room for things to happen. I should note, though, that activation of AMPK out in the peripheral tissues is thought to be beneficial for diabetics and others - this may be one route by which Glucophage (metformin) works. (Now some people are saying that there may be more than one ACC isoform out there, bypassing the AMPK signaling entirely, so this clearly is a tangled question).
I’m sure that a great deal of effort is now going into working out these things, so stay tuned. It's going to take a while to make sure, but if things continue along this path, there could be reasons for a large change in the industrialized human diet. There are a lot of downstream issues - how much fructose people actually consume, for one, and the problem of portion size and total caloric intake, no matter what form it's in, for another. So I'm not prepared to offer odds on a big change, but the implications are large enough to warrant a thorough check.
Update: so far, no one has been able to demonstrate endocrine or satiety differences in humans consuming high-fructose corn syrup vs. the equivalent amount of sucrose. See here, here, and here.
Category: Biological News | Diabetes and Obesity | The Central Nervous System
October 9, 2008
I’ve spoken before about the acetylene-azide “click” reaction popularized by Barry Sharpless and his co-workers out at Scripps. This has been taken up by the chemical biology field in a big way, and all sorts of ingenious applications are starting to emerge. The tight, specific ligation reaction that forms the triazole lets you modify biomolecules with minimal disruption (by hanging an azide or acetylene from them, both rather small groups), and tag them later on in a very controlled way.
Adrian Salic and co-worker Cindy Jao have just reported an impressive example. They’ve been looking at 5-ethynyluridine (EU), an acetylene-modified form of uridine, the ubiquitous nucleoside found in RNA. If you feed this to living organisms, they take it up just as if it were the natural compound, and incorporate it into their RNA. (It’s uridine-like enough to not be taken up into DNA, as they’ve shown by control experiments). Exposing cells or tissue samples later on to a fluorescent-tagged azide (and the copper catalyst needed for quick triazole formation) lets you light up all the RNA in sight. You can choose the timing, the tissue, and your other parameters as you wish.
For example, Salic and Jao have exposed cultured cells to EU for varying lengths of time, and watched the time course of transcription. Even ten minutes of EU exposure is enough to see the nuclei start to light up, and a half hour clearly shows plenty of incorporation into RNA, with the cytoplasm starting to show as well. (The signal increases strongly over the first three hours or so, and then more slowly).
Isolating the RNA and looking at it with LC/MS lets you calibrate your fluorescence assays, and also check to see just how much EU is getting taken up. Overall, after a 24-hour exposure to the acetylene nucleoside, it looks like about one out of every 35 uridines in the total RNA content has been replaced with the label. There’s a bit less in the RNA species produced by RNA polymerase I as compared to the others, interestingly.
There are some other tricks you can run with this system. If you expose the cells for 3 hours, then wash the EU out of the medium and let them continue growing under normal conditions, you can watch the labeled RNA disappear as it turns over. As it turns out, most of it drops out of the nucleus during the first hour, while the cytoplasmic RNA seems to have a longer lifetime. If you expose the cells to EU for 24 hours, though, the nuclear fluorescence is still visible – barely – after 24 hours of washout, but the cytoplasmic RNA fluorescence never really goes away at all. There seems to be some stable RNA species out there – what exactly that is, we don’t know yet.
Finally, the authors tried this out on whole animals. Injecting a mouse with EU and harvesting organs five hours later gave some very interesting results. It worked wonderfully - whole tissue slices could be examined, as well as individual cells. Every organ they checked showed nuclear staining, at the very least. Some of the really transcriptionally active populations (hepatocytes, kidney tubules, and the crypt cells in the small intestine) were lit up very brightly indeed. Oddly, the most intense staining was in the spleen. What appear to be lymphocytes glowed powerfully, but other areas next to them were almost completely dark. The reason for this is unknown, and that’s very good news indeed.
That’s because when you come up with a new technique, you want it to tell you things that you didn’t know before. If it just does a better or more convenient job of telling you what you could have found out, that’s still OK, but it’s definitely second best. (And, naturally, if it just tells you what you already knew with the same amount of work, you’ve wasted your time). Clearly, this click-RNA method is telling us a lot of things that we don’t understand yet, and the variety of experiments that can be done with it has barely been sampled.
Closely related to this work is what's going on in Carolyn Bertozzi's lab in Berkeley. She's gone a step further, getting rid of the copper catalyst for the triazole-forming reaction by ingeniously making strained, reactive acetylenes. They'll spontaneously react if they see a nearby azide, but they're still inert enough to be compatible with biomolecules. In a recent Science paper, her group reports feeding azide-substituted galactosamine to developing zebrafish. That amino sugar is well known to be used in the synthesis of glycoproteins, and the zebrafish embryos seemed to have no problem accepting the azide variant as a building block.
And they were able to run the same sorts of experiments – exposing the embryos to different concentrations of azido sugar, for different times, with different washout periods before labeling – all of which gave a wealth of information about the development of mucin-type glycans. Using differently labeled fluorescent acetylene reagents, they could stain different populations of glycan, and watch time courses and developmental trafficking – that's the source of the spectacular images shown.
Losing the copper step is convenient, and also opens up possibilities for doing these reactions inside living cells (which is definitely something that Bertozzi's lab is working on). The number of experiments you can imagine is staggering – here, I'll do one off the top of my head to give you the idea. Azide-containing amino acids can be incorporated at specific places in bacterial proteins – here's one where they replaced a phenylalanine in urate oxidase with para-azidophenylalanine. Can that be done in larger, more tractable cells? If so, why not try that on some proteins of interest – there are thousands of possibilities – then micro-inject one of the Bertozzi fluorescent acetylene reagents? Watching that diffuse through the cell, lighting things up as it found azide to react with, would surely be of interest – wouldn't it?
I'm writing about this the day after the green fluorescent protein Nobel for a reason, of course. This is a similar approach, but taken down to the size of individual molecules – you can't label uracil with GFP and expect it to be taken up into RNA, that's for sure. Advances in labeling and detection are one of the main things driving biology these days, and this will just accelerate things. (It's also killing off a lot of traditional radioactive isotope labeling work, not that anyone's going to miss it). For the foreseeable future, we're going to be bombarded with more information than we know what to do with. It'll be great – enjoy it!
Category: Analytical Chemistry | Biological News
October 8, 2008
So it was green fluorescent protein after all! We can argue about whether this was a pure chemistry prize or another quasi-biology one, but either way, the award is a strong one. So, what is the stuff and what’s it do?
Osamu Shimomura discovered the actual protein back in 1962, isolating it from the jellyfish Aequorea victoria. These were known to be luminescent creatures, but when the light-emitting protein was found (named aequorin), it turned out to give off blue light. That was strange, since the jellyfish were known for their green color. Shimomura then isolated another protein from the same jellyfish cells, which turned out to absorb the blue light from aequorin very efficiently and then fluoresce in the green: green fluorescent protein. The two proteins are a coupled system, an excellent example of a phenomenon known as FRET (fluorescence resonance energy transfer), which has been engineered into many other useful applications over the years.
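For those who like to see the machinery behind that coupling: the textbook Förster expression for the efficiency of the energy transfer (standard photophysics, not anything specific to this jellyfish system) is

$$E = \frac{1}{1 + (r/R_0)^6}$$

where $r$ is the donor-acceptor distance and $R_0$, the Förster radius, is typically a few nanometers for a well-matched pair. That sixth-power falloff is why the transfer only works when the two partners are practically touching – and why FRET makes such a good molecular ruler in all those engineered applications.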
Fluorescence is much more common in inorganic salts and small organic molecules, and at first it was a puzzle how a protein could emit light in the same way. As it turns out, there’s a three-amino-acid sequence right in the middle of its structure (serine-tyrosine-glycine) that condenses with itself when the protein is folded properly and makes a new fluorescent species. (The last step of the process is reaction with ambient oxygen). The protein has a very pronounced barrel shape to it, and lines up these key amino acids in just the orientation needed for the reaction to go at a reasonable rate (on a time scale of tens of minutes at room temperature). This is well worked out now, but it was definitely not obvious at the time.
In the late 1980s, for example, the gene for GFP was cloned by Doug Prasher, but he and his co-workers thought it quite possible that it would express as a non-fluorescent protein, one that would need activation by some other system. He had the idea that this could be used as a tag for other proteins, but was never able to get to the point of demonstrating it, and will join the list of people who were on the trail of a Nobel discovery but never quite got there. (Update: Here's what Prasher is doing now - this is a hard-luck story if I've ever heard one). Prasher furnished some of the clone to Martin Chalfie at Columbia, who got it to express in E. coli and found that the bacteria indeed glowed bright green. (Other groups were trying the same thing, but the expression was a bit tricky at the time). The next step was to express it in the roundworm C. elegans (naturally enough, since Chalfie had worked with Sydney Brenner). Splicing it in behind a specific promoter caused the GFP to express in definite patterns in the worms, just as expected. This all suggested that the protein was fluorescing on its own, and could do the same in all sorts of organisms under all sorts of conditions.
And so it's proved. GFP is wonderful stuff for marking proteins in living systems. Its sequence can be fused onto many other proteins without disturbing their function, it folds up to its active form just fine with no help, and it's bright and very photoefficient. Where Roger Tsien enters the picture is in extending this idea to a whole family of proteins. Tsien worked out the last details of the fluorescent structure, showing that oxygen is needed for the last step. He and his group then set out to make mutant forms of the protein, changing the color of its fluorescence and other properties. He's done the same thing with a red fluorescent protein from coral, and this work (which continues in labs all over the world) has led to a wide variety of in vivo fluorescent tags, which can be made to perform a huge number of useful tricks. They can sense calcium levels or the presence of various metabolites, fluoresce only when they come into contact with another specifically labeled protein, or be used in various time-resolved techniques to monitor the speed of protein trafficking – and who knows what else. A lot of what we've learned in the last fifteen years about the behavior of real proteins in living cells has come out of this work – the prize is well deserved.
I want to close with a bit of an interview with Martin Chalfie, which is an excellent insight into how things like this get discovered (or don't!)
Considering how significant GFP has been, why do you think no one else came up with it, while you were waiting for Doug Prasher to clone it?
"That’s a very important point. In hindsight, you wonder why 50 billion people weren’t working on this. But I think the field of bioluminescence or, in general, the research done on organisms and biological problems that have no immediate medical implications, was not viewed as being important science. People were working on this, but it was slow and tedious work, and getting enough protein from jellyfish required rather long hours at the lab. They had to devise ways of isolating the cells that were bioluminescent and then grinding them up and doing the extraction on them. It’s not like ordering a bunch of mice and getting livers out and doing an experiment. It was all rather arduous. It’s quite remarkable that it was done at all. It was mostly biochemists doing it, and they were not getting a lot of support. In fact, as I remember it, Doug Prasher had some funding initially from the American Cancer Society, and when that dried up he could not get grants to pursue the work. I never applied for a grant to do the original GFP research. Granting agencies would have wanted to see preliminary data and the work was outside my main research program. GFP is really an example of something very useful coming from a far-outside-the-mainstream source. And because this was coming from a non-model-organism system, these jellyfish found off the west coast of the U.S., people were not jumping at the chance to go out and isolate RNAs and make cDNAs from them. So we’re not talking about a field that was highly populated. It was not something that was widely talked about. At the time, there was a lot of excitement about molecular biology, but this was biochemistry. The discovery really was somewhat orthogonal to the mainstream of biological research."
Here's an entire site dedicated to the GFP story, full of illustrations and details. That interview with Chalfie is here, with some background on his part in the discovery. Science background from the Nobel Foundation is here (PDF), for those who want even more.
Category: Biological News | Current Events
September 9, 2008
As I’ve noted here, and many others have elsewhere, we have very little idea how many important central nervous system drugs actually work. Antidepressants, antipsychotics, antiseizure medications for epilepsy – the real workings of these drugs are quite obscure. The standard explanation for this state of things is that the human brain is extremely complicated and difficult to study, and that’s absolutely right.
But there’s an interesting paper on antipsychotics that’s just come out from a group at Duke, suggesting that there’s an important common mechanism that has been missed up until now. One thing that everyone can agree on is that dopamine receptors are important in this area. Which ones, and how they should be affected (agonist, antagonist, inverse partial what-have-you) – now that’s a subject for argument, but I don’t think you’ll find anyone who says that the dopaminergic system isn’t a big factor. Helping to keep the argument going is the fact that the existing drugs have a rather wide spectrum of activity against the main dopamine receptors.
But for some years now, the D2 subtype has been considered first among equals in this area. Binding affinity to D2 correlates as well as anything does to clinical efficacy, but when you look closer, the various drugs have different profiles as inverse agonists and antagonists of the receptor. What this latest study shows, though, is that a completely different signaling pathway – other than the classic GPCR signaling one – might well be involved. A protein called beta-arrestin has long been known to be important in receptor trafficking – movement of the receptor protein to and from the cell surface. A few years ago, it was shown that beta-arrestin isn’t just some sort of cellular tugboat in these systems, but can participate in another signaling pathway entirely.
Dopamine receptors were already complicated when I worked on them, but they’ve gotten a lot hairier since then. The beta-arrestin work makes things even trickier: who would have thought that these GPCRs, with all of their well-established and subtle signaling modes, also participated in a totally different signaling network at the same time? It’s like finding out that all your hammers can also drive screws, using some gizmo hidden in their handles that you didn’t even know was there.
When this latest team looked at the various clinical antipsychotics, what they found was that no matter what their profile in the traditional D2 signaling assays, they all are very good at disrupting the D2/beta-arrestin pathway. Since some of the downstream targets in that pathway (a protein called Akt and a kinase, GSK-3) have already been associated with schizophrenia, this may well be a big factor behind antipsychotic efficacy, and one that no one in the drug discovery business has paid much attention to. As soon as someone gets this formatted for a high-throughput assay, though, that will change – and it could lead to entirely new compound classes in this area.
Of course, there's still a lot that we don't know. What, for example, does beta-arrestin signaling actually do in schizophrenia? Akt and GSK-3 are powerful signaling players, involved in all sorts of pathways. Untangling their roles, or the roles of other yet-unknown beta-arrestin driven processes, will keep the biologists busy for a good long while. And the existing antipsychotics hit quite a few other receptors as well – what's the role of the beta-arrestin system in those interactions? The brain, and its signaling receptors, will keep us all occupied for years to come.
Category: Biological News | The Central Nervous System
August 26, 2008
As all organic chemists who follow the literature know, over the last few years there's been a strong swell of papers using Barry Sharpless's "click chemistry" triazole-forming reactions. These reactions let you form five-membered triazole rings from two not-very-reactive partners, an azide and an acetylene, and people have been putting them to all kinds of uses, from the trivial to the very interesting indeed.
In the former category are papers that boil down to “We made triazoles from some acetylenes and azides that no one else has gotten around to using yet, and here they are, for some reason”. There are fewer of those publications than there were a couple of years ago, but they’re still out there. For its part, the latter (interesting) category is really all over the place, from in vivo biological applications to nanotechnology and materials science.
One recent paper in Organic Letters that was called to my attention starts off looking as if it's going to be another bit of flotsam from the first category, but by the end it's a very different thing indeed. The authors (from the Isobe group at Tohoku University in Japan, with collaborators from Tokyo) have made an analog of thymine, the T in the genetic code, where the 2-deoxyribose part has both an azide and an acetylene built onto it.
So far, so good, and at one point you probably could have gotten a paper out of things right there – let ‘em rip to make a few poly-triazole things and send off the manuscript. But this is a more complete piece of work. For one thing, they’ve made sure that their acetylenes can have removable silyl groups on them. That lets you turn their click reactivity on and off, since the copper-catalyzed reaction needs a free alkyne out there. So starting from a resin-supported sugar, they did one triazole click reaction after another in a controlled fashion – it took some messing around with the conditions, but they worked it out pretty smoothly.
And since the acetylene was at the 5 position of the sugar, and the azide was at the 3, they built a sort of poly-T oligonucleotide – but one that's linked together by triazoles instead of the phosphate groups found in DNA. People have, of course, made all sorts of DNA analogs, with all sorts of replacements for the phosphates, but they vary in how well they mimic the real thing. Startlingly, when they took a 10-mer of their "TL-DNA" (triazole-linked) and exposed it to a complementary 10-residue strand of good ol' poly-A DNA, the two zipped right up. In fact, the resulting helix seems to be significantly stronger than native DNA, as measured by a large increase in melting point. (That's their molecular model of the complex below left).
Well, after reading this paper, my first thought was that it might eventually make me eat some of my other words. Because just last week I was saying things about the prospects for nucleic acid therapies (RNAi, antisense) - mean, horrible, nasty things, according to a few of the comments that piled up, about how these might be rather hard to implement. But when I saw the end of this paper, the first thing that popped into my head was "stable high-affinity antisense DNA backbone. Holy cow". I assume that this also crossed the minds of the authors, and of some of the paper's other readers. Given the potential of the field, I would also assume that eventually we'll see that idea put to a test. It's a long way from being something that works, but it sure looks like a good thing to take a look at, doesn't it?
Category: Biological News
July 16, 2008
At various points in my drug discovery career, I’ve worked on G-protein-coupled receptor (GPCR) targets. Most everyone in the drug industry has at some point – a significant fraction of the known drugs work through them, even though we have a heck of a time knowing what their structures are like.
For those outside the field, GPCRs are a ubiquitous mode of signaling between the interior of a cell and what’s going on outside it, which accounts for the hundreds of different types of the things. They’re all large proteins that sit in the cell membrane, looped around so that some of their surfaces are on the outside and some poke through to the inside. The outside folds have a defined binding site for some particular ligand - a small molecule or protein – and the inside surfaces interact with a variety of other signaling proteins, first among them being the G-proteins of the name. When a receptor’s ligand binds from the outside, that sets off some sort of big shape change. The protein’s coils slide and shift around in response, which changes its exposed surfaces and binding patterns on the inside face. Suddenly different proteins are bound and released there, which sets off the various chemical signaling cascades inside the cell.
The reason we like GPCRs is that many of them have binding sites for small molecules, like the neurotransmitters. Dopamine, serotonin, acetylcholine – these are molecules that medicinal chemists can really get their hands around. The receptors that bind whole other proteins as external ligands are definitely a tougher bunch to work with, but we’ve still found many small molecules that will interact with some of them.
Naturally, there are at least two modes of signaling a GPCR can engage in: on and off. A ligand that comes in and sets off the intracellular signaling is called an agonist, and one that binds but doesn't set off those signals is called an antagonist. Antagonist molecules will also gum up the works and block agonists from doing their thing. We have an easier time making those, naturally, since there are dozens of ways to mess up a process for every way of running it correctly!
Now, when I was first working in the GPCR field almost twenty years ago, it was reasonably straightforward. You had your agonists and you had your antagonists – well, OK, there were those irritating partial agonists, true. Those things set off the desired cellular signal, but never at the levels that a full agonist would, for some reason. And there were a lot of odd behaviors that no one quite knew how to explain, but we tried to not let those bother us.
These days, it's become clear that GPCRs are not so simple. There appear to be some, for example, whose default setting is "on", with no agonist needed. People are still arguing about how many receptors do this in the wild, but there seems little doubt that it does go on. These constitutively active receptors can be turned off, though, by the binding of some ligands, which are known as inverse agonists, and there are others, good old antagonists, that can block the action of the inverse agonists. Figuring out which receptors do this sort of thing - and which drugs - is a full-time job for a lot of people.
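One way to picture all this (my own gloss, using the standard two-state receptor model, not anything from a specific paper): imagine the receptor flickering between an inactive shape $R$ and an active one $R^*$, even with nothing bound at all. The constitutive signal is just the fraction of time spent in the active state:

$$f_{\mathrm{active}} = \frac{[R^*]}{[R] + [R^*]}$$

An agonist binds $R^*$ preferentially and pulls that equilibrium toward the active form; an inverse agonist prefers $R$ and pulls it the other way, shutting down the background signal; and a neutral antagonist binds both states about equally, doing nothing on its own but blocking the other two kinds of ligand from acting.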
It’s also been appreciated in recent years that GPCRs don’t just float around by themselves on the cell surface. Many of them interact with other nearby receptors, binding side-by-side with them, and their activities can vary depending on the environment they’re in. The search is on for compounds that will recognize receptor dimers over the good ol’ monomeric forms, and the search is also on for figuring out what those will do once we have them. To add to the fun, these various dimers can be with other receptors of their own kind (homodimers) or with totally different ones, some from different families entirely (heterodimers). This area of research is definitely heating up.
And recently, I came across a paper which looked at how a standard GPCR can respond differently to an agonist depending on where it's located in the membrane. We're starting to understand how heterogeneous the lipids in that membrane are, and that receptors can move from one domain to another depending on what's binding to them (either on their outside or inside faces). The techniques to study this kind of thing are not trivial, to put it mildly, and we're only just getting started on figuring out what's going on out there in the real world in real time. Doubtless many bizarre surprises await.
So, once again, the "nothing is simple" rule prevails. This kind of thing is why I can't completely succumb to the gloom that sometimes spreads over the industry. There's just so much that we don't know, and so much to work on, and so many people that need what we're trying to discover, that I can't believe that the whole enterprise is in as much trouble as (sometimes) it seems. . .
Category: Biological News | Drug Assays
May 22, 2008
Benjamin Cravatt at Scripps has another interesting paper out this week – by my standards, he hasn’t published very many dull ones. I spoke about some earlier work of his here, where his group tried to profile enzymes in living cells and found that the results they got were much different than the ones seen in their model systems.
This latest paper is in the same vein, but addresses some more general questions. One of his group members (Eranthie Weerapana, who certainly seems to have put in some lab time) started by synthesizing five simple test compounds. Each had a reactive head group on one end and an acetylene on the far end. The idea was to see what sorts of proteins combined with the reactive head group. After labeling, a click-type triazole reaction stuck a fluorescent tag on via the acetylene group, allowing the labeled proteins to be detected.
All this is similar to the previous paper I blogged about, but in this case they were interested in profiling these varying head groups: a benzenesulfonate, an alpha-chloroamide, a terminal enone, and two epoxides – one terminal on a linear chain, and the other spiro-fused to a cyclohexane. All of these have the potential to react with various nucleophilic groups on a protein – cysteines, lysines, histidines, and so on. Which reactive groups would react with which sorts of protein residues, and on which parts of the proteins, was unknown.
There have been only a few general studies of this sort. The most closely related work is from Daniel Liebler at Vanderbilt, who's looking at this issue from a toxicology perspective (try here, here, and here). And an earlier look at different reactive groups from the Sames lab at Columbia is here, but that was much less extensive.
Cravatt's study reacted these probes first with a soluble protein mix from mouse liver – containing who knows how many different proteins – and followed that up with similar experiments with protein brews from heart and kidney, along with the insoluble membrane fraction from the liver. A brutally efficient proteolysis/mass spectrometry technique, described by Cravatt in 2005, was used to simultaneously identify the labeled proteins and the sites at which they reacted. This is clearly the sort of experiment that would have been unthinkable not that many years ago, and it still gives me a turn to see only Cravatt, Weerapana, and a third co-author (Gabriel Simon) on this one instead of some lab-coated army.
Hundreds of proteins were found to react, as you might expect from such simple coupling partners. But this wasn’t just a blunderbuss scatter; some very interesting patterns showed up. For one thing, the two epoxides hardly reacted with anything, which is quite interesting considering that functional group’s reputation. I don’t think I’ve ever met a toxicologist who wouldn’t reject an epoxide-containing drug candidate outright, but these groups are clearly not as red-hot as they’re billed. The epoxide compounds were so unreactive, in fact, that they didn’t even make the cut after the initial mouse liver experiment. (Since Cravatt’s group has already shown that more elaborate and tighter-binding spiro-epoxides can react with an active-site lysine, I’m willing to bet that they were surprised by this result, too).
The next trend to emerge was that the chloroamide and the enone, while they labeled all sorts of proteins, almost invariably did so on their cysteine (SH) residues. Again, I think if you took a survey of organic chemists or enzymologists, you'd have found cysteines at the top of the expected list, but plenty of other things would have been predicted to react as well. The selectivity is quite striking. What's even more interesting, and as yet unexplained, is that over half the cysteine residues that were hit only reacted with one of the two reagents, not the other. (Liebler has seen similar effects in his work).
Meanwhile, the sulfonate went for several different sorts of amino acid residues – it liked glutamates especially, but also aspartate, cysteine, tyrosine, and some histidines. One of the things I found striking about these results is how few lysines got in on the act with any of the electrophiles. Cravatt's finely tuned epoxide/lysine interaction that I linked to above turns out, apparently, to be a rather rare bird. I've always had lysine in my mind as a potentially reactive group, but I can see that I'm going to have to adjust my thinking.
Another trend that I found thought-provoking was that the labeled residues were disproportionately taken from the list of important ones, amino acids that are involved in the various active sites or in regulatory domains. The former may be intrinsically more reactive, in an environment that has been selected to increase their nucleophilicity. And as for the latter, I’d think that’s because they’re well exposed on the surfaces of the proteins, for one thing, although they may also be juiced up in reactivity compared to their run-of-the-mill counterparts.
Finally, there’s another result that reminded me of the model-system problems in Cravatt’s last paper. When they took these probes and reacted them with mixtures of amino acid derivatives in solution, the results were very different than what they saw in real protein samples. The chloroamide looked roughly the same, attacking mostly cysteines. But the sulfonate, for some reason, looked just like it, completely losing its real-world preference for carboxylate side chains. Meanwhile, the enone went after cysteine, lysine, and histidine in the model system, but largely ignored the last two in the real world. The reasons for these differences are, to say the least, unclear – but what’s clear, from this paper and the previous ones, is that there is (once again!) no substitute for the real world in chemical biology. (In fact, in that last paper, even cell lysates weren’t real enough. This one has a bit of whole-cell data, which looks similar to the lysate stuff this time, but I’d be interested to know if more experiments were done on living systems, and how close they were to the other data sets).
So there are a lot of lessons here - at least, if you really get into this chemical biology stuff, and I obviously do. But even if you don't, remember that last one: run the real system if you're doing anything complicated. And if you're in drug discovery, brother, you're doing something complicated.
Category: Biological News | Toxicology
May 19, 2008
OK, drugs generally bind to some sort of cavity in a protein. So what’s in that cavity when the drug isn’t there? Well, sometimes it’s the substance that the drug is trying to mimic or block, the body’s own ligand doing what it’s supposed to be doing. But what about when that isn’t occupying the space – what is?
A moment’s thought, and most chemists and biologists will say “water”. That’s mostly true, although it can give a false impression. When you get X-ray crystal structures of enzymes, there’s always water hanging around the protein. But at this scale, any thoughts of bulk water as we know it are extremely misleading. Those are individual water molecules down there, a very different thing.
There seem to be several different sorts of them, for one thing. Some of those waters are essential to the structure of the protein itself – they form hydrogen bonds between key residues of its backbone, and you mess with them at your peril. Others are adventitious, showing up in your X-ray structure in the same way that pedestrians show up in a snapshot of a building’s lobby. (That’s a good metaphor, if I do say so myself, but to work that first set of water molecules into it, you’d have to imagine people stuck against the walls with their arms spread, helping to hold up the building).
And in between those two categories are waters that can interact with both the protein and your drug candidate. They can form bridges between them, or they can be kicked out so that your drug interacts directly. Which is better? Unfortunately, it’s hard to generalize. There are potent compounds that sit in a web of water molecules, and there are others that cozy right up to the protein at every turn.
But there's one oddity that just came out in the literature. This one's weird enough to deserve its own paper: the protein beta-lactoglobulin appears to have a large binding site that's completely empty of water molecules. It's a site for large lipids to bind, so it makes sense that it would be a greasy environment that wouldn't be friendly to a lot of water, but completely empty? That's a first, as far as I know. When you think about it, that's quite weird: inside that protein is a small zone that's a harder vacuum than anything ever seen in the lab: there's nothing there at all. It's a small bit of interstellar space, sitting inside a protein from cow blood. Nature abhors a vacuum, but apparently not this one.
Category: Biological News
May 16, 2008
A good rule to follow: hold onto your wallet when two exciting, complicated fields of research are combined. Nature reported earlier this spring on a good example of this, the announcement by a small biotech called PrimeGen that they'd used carbon nanotubes to reprogram stem cells. (Here's a good article from VentureBeat on the same announcement, and there's an excellent piece on the announcement and the company in Forbes).
Stem cells and nanostructures are two undeniably hot areas of research. And also undeniable is the fact that they're both in their very early days - the amount of important information we don't know about both of these topics must be really impressive, which is why so many people are beavering away at them. So what are the odds of getting them to work together? Not as good as the odds that someone thought the combination would make a good press release, I'm afraid.
The PrimeGen web site, though a bit better than that VentureBeat article describes it, still has some odd notes to it. I particularly like this phrase: "PrimeGen’s broad intellectual property portfolio is founded on groundbreaking platform technologies invented by our team of dedicated and visionary scientists." Yep, we talk that way all the time in this business. You also have to raise an eyebrow at this part: "Disease and injury applications of PrimeCell™ include Alzheimer’s Disease, Cardiac Disease, Diabetes, Lupus, Multiple Sclerosis, Leukemia, Muscular Dystrophy, Parkinson’s Disease, Rheumatoid Arthritis, Spinal Cord Injury, Autoimmune Disease, Stroke, Skin Regeneration and Wound Healing." It'll mow your yard, too, if you're willing to participate in the next funding round.
The next sentence is the key one: "The extent to which stem cells can be used to treat injury and illness has yet to be fully evaluated. . ." You can say that again! In fact, I wouldn't mind seeing that in 36-point bold across the top of every stem cell company web page and press release. But what are the chances of that? As good as the chance that nanotechnology will suddenly provide us with a way to make the stem cells do what we want, I'm afraid. . .
Category: Biological News | Press Coverage
March 28, 2008
It’s been a while since I talked about RNA interference here. It’s still one of those tremendously promising therapeutic ideas, and it’s still having a tremendously hard time proving itself. Small RNA molecules can do all sorts of interesting and surprising things inside cells, but the trick is getting them there. Living systems are not inclined to let a lot of little nucleic acid sequences run around unmolested through the bloodstream.
The RNA folks can at least build on the experience (long, difficult, expensive) of the antisense DNA people, who have been trying to dose their compounds for years now and have tried out all sorts of ingenious schemes. But even if all these micro-RNAs could be dosed, would we even know what they're going to do?
A report in the latest Nature suggests that the answer is "not at all". This large multi-university group was looking at macular degeneration, a natural target for this sort of technology. It's a serious disease, and it occurs in a privileged compartment of the body, the inside of the eye. You can inject your new therapy directly in there, for example (I know, it gives me the shivers, too, but it sure beats going blind). That bypasses the gut, the liver, and the bloodstream, and the vitreous humor of the eye is comparatively free of hostile enzymes. (It's no coincidence that the antisense and aptamer people have gone after this and other eye diseases as well).
Angiogenesis is a common molecular target for macular degeneration, since uncontrolled formation of new capillaries is a proximate cause of blindness in such conditions. (That target has the added benefit of giving your therapy a possible entry into the oncology world, should you figure out how to get it to work well here). VEGF is the prototype angiogenesis target, so you'd figure that RNA interference targeting VEGF production or signaling would work as well as anything could, as a first guess.
And so it does, as this team found out. But here comes the surprise: when the researchers checked their control group, using a similar RNA that should have been ineffective, they found that it was working just fine, too – just as well as the VEGF-targeted ones, actually. Baffled, they went on to try a host of other RNAs. Reading the paper, you can just see the disbelief mounting as they tried various sequences against other angiogenic targets (success!), nonangiogenic proteins (success!?), proangiogenic ones that should make the disease worse (success??), genes for proteins that aren't even expressed in the eye (success!), sequences against RNAs from plants and microbes that don't even exist in humans at all (oh God, success again), totally random RNAs (success, damnit), and RNAs that shouldn't be able to silence anything because they've got completely the wrong sort of sequence (oh the hell with it, success). Some of these even worked when injected i.p. (into the abdominal cavity) instead of into the eye, suggesting that this was a general mechanism that had nothing to do with the retina.
As it turns out, these things are acting by hitting a cell-surface receptor, TLR3. And all you need, apparently, is a stretch of RNA that's at least 21 units long. It doesn't seem to matter much what the sequence is – thus all that darn success with whatever they tried. Downstream of TLR3 comes induction of gamma-interferon and IL-12, and those are what are doing the job of shutting down angiogenesis. (Off-target effects involving these have been noted before with siRNA, but now I think we're finally figuring out why).
What does this all mean? Good news and bad news. The companies that are already dosing RNAi therapies for macular degeneration have just discovered that there's an awful lot that they don't know about what they're doing, for one thing. On the flip side, there are a lot of human cell types with TLR3 receptors on them, and a lot of angiogenic disorders that could potentially be treated, at least partially, by targeting them in this manner. That’s some good news. The bad news is that most of these receptors are present in more demanding environments than the inside of the eye, so the whole problem of turning siRNAs into drugs still looms large.
And the other bad news is that if you do figure out a way to dose these things, you may well set off TLR3 effects whether you want them or not. Immune system effects on the vasculature are not the answer to everything, but that may be one of the answers you always get. And this sort of thing makes you wonder what other surprising things systemic RNA therapies might set off. We will, in due course, no doubt find out. More here from John Timmer at Nobel Intent, who correctly tags this as a perfect example of why you want to run a lot of good control experiments. . .
Category: Biological News | Drug Development
February 14, 2008
I've been reading an interesting paper from JACS with the catchy title of "Optimization of Activity-Based Probes for Proteomic Profiling of Histone Deacetylase Complexes". This is work from Benjamin Cravatt's lab at Scripps, and it says something about me, I suppose, that I found that title of such interest that I immediately printed off a copy to study more closely. Now I'll see if I can interest anyone who wasn't already intrigued! First off, some discussion of protein tagging, so if you're into that stuff already, you may want to skip ahead.
So, let’s say you have a molecule that has some interesting biological effect, but you’re not sure how it works. You have suspicions that it’s binding to some protein and altering its effects (always a good guess), but which protein? Protein folks love fluorescent assays, so if you could hang some fluorescent molecule off one end of yours, perhaps you could start the hunt: expose your cells to the tagged molecule, break them open, look for the proteins that glow. There are complications, though. You’d have to staple the fluorescent part on in a way that didn’t totally mess up that biological activity you care about, which isn’t always easy (or even possible). The fact that most of the good fluorescent tags are rather large and ugly doesn’t help. But there’s more trouble: even if you manage to do that, what’s to keep your molecule from drifting right back off of the protein while you’re cleaning things up for a look at the system? Odds are it will, unless it has a really amazing binding constant, and that’s not the way to bet.
One way around that problem is sticking yet another appendage on to the molecule, a so-called photoaffinity label. These groups turn into highly reactive species on exposure to particular wavelengths of light, ready to form a bond with the first thing they see. If your molecule is carrying one when it’s bound to your mystery protein, shining light on the system will likely cause a permanent bond to form between the two. Then you can do all your purifications and separations, and look at your leisure for which proteins fluoresce.
This is “activity-based protein profiling”, and it’s a hot field. There are a lot of different photoaffinity labels, and a lot of ways to attach them, and likewise with the fluorescent groups. The big problem, as mentioned above, is that it’s very hard to get both of those on your molecule of interest and still keep its biological activity – that’s an awful lot of tinsel to carry around. One slick solution is to use a small placeholder for the big fluorescent part. This, ideally, would be some little group that will hide out innocently during the whole protein-binding and photoaffinity-labeling steps, then react with a suitably decorated fluorescent partner once everything’s in place. This assembles your glowing tag after the fact.
A favorite way to do that step is through an azide-acetylene cycloaddition, the best known of Barry Sharpless's "click" reactions. Acetylenes are small and relatively unreactive, and at the end of the process, after you've lysed the cells and released all their proteins, you can flood your system with azide-substituted fluorescent reagent. The two groups react irreversibly under mild catalytic conditions to make a triazole ring linker, which is a nearly ideal solution that's getting a lot of use these days (more on this another day).
So, now to this paper. What this group did was to take a known compound that targets histone deacetylase (HDAC) enzymes – SAHA, from Ron Breslow's group at Columbia, now on the market as Vorinostat – and label it. There are a lot of different subtypes of HDAC, and they do a lot of important but obscure things that haven't been worked out yet. It's a good field to discover protein function in.
When they modified SAHA in just the way described above, with an acetylene and a photoaffinity group, it maintained its activity on the known enzymes, so things looked good. They then exposed it to cell lysate, the whole protein soup, and found that while it did label HDAC enzymes, it seemed to label a lot of other things in the background. That kind of nonspecific activity can kill an assay, but they tried the label out on living cells anyway, just to see what would happen.
Very much to their surprise, that experiment led to much cleaner and more specific labeling of HDACs. The living system was much nicer than the surrogate, which (believe me) is not how things generally go. Some HDACs were labeled much more than others, though, and my first thought on reading that was “Well, yeah, sure, your molecule is a more potent binder to some of them”.
But that wasn’t the case, either. When they profiled their probe molecule’s activity versus a panel of HDAC enzymes, they did indeed find different levels of binding – but those didn’t match up with which ones were labeled more in the cells. (One explanation might be that the photoaffinity label found some of the proteins easier to react with than others, perhaps due to what was nearby in each case when the reactive species formed).
Their next step was to make a series of modified SAHA scaffolds and rig them up with the whole probe apparatus. Exposing these to cell lysate showed that many of them performed fine, labeling HDAC subtypes as they should, and with different selectivities than the original. But when they put these into cells, none of them worked as well as the plain SAHA probe – again, rather to their surprise. (A lot of work went into making and profiling those variations, so I suspect that this wasn’t exactly the result the team had hoped for - my sympathies to Cravatt and especially to his co-author Cleo Salisbury). The paper sums the situation up dryly: "These results demonstrate that in vitro labeling is not necessarily predictive of in situ labeling for activity-based protein profiling probes".
And that matches up perfectly with my own prejudices, so it must be right. I've come to think, over the years, that the way to go is to run your ideas against the most complex system you think that they can stand up to - in fact, maybe one step beyond that, because you may have underestimated them. A strict reductionist might have stopped after the cell lysate experiments in this case - clearly, this probe was too nonspecific, no need to waste time on the real system, eh? But the real system, the living cell, is real in complex ways that we don't understand well at all, and that makes this inference invalid.
The same goes for medicinal chemistry and drug development. If you say "in vitro", I say "whole cells". If you've got it working in cells, I'll call for mice. Then I'll see your mice and raise you some dogs. Get your compounds as close to reality as you can before you pass judgment on them.
Category: Biological News | Drug Assays | Drug Development
January 8, 2008
I came across a neat article in Nature from a group working on a new technique in neuroscience imaging. They expressed an array of four differently colored fluorescent proteins in developing neurons in vivo, and placed them so that recombination events would scramble the relative expression of the multiple transgenes as the cell population expands. That leads to what they’re calling a “brainbow”: a striking array of about a hundred different shades of fluorescent neurons, tangled into what looks like a close-up of a Seurat painting.
The good part is that the entire neuron fluoresces, not just a particular structure inside it. Being able to see all those axons opens up the possibility of tracking how the cells interact in the developing brain – where synapses form and when. That should keep everyone in this research group occupied for a good long while.
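If you want a feel for where a hundred-odd colors can come from, here's a quick back-of-the-envelope sketch in Python. The copy number and the particular fluorophore names are my own illustrative guesses, not figures from the paper – the point is just that a few fluorescent proteins, expressed in different ratios across multiple transgene copies, multiply out to a lot of distinguishable hues:

```python
from itertools import combinations_with_replacement

# Illustrative assumption: four fluorescent proteins and roughly eight
# tandem copies of the transgene per neuron (my guess, not the paper's
# figure). Recombination independently locks each copy onto one color,
# so a neuron's hue is set by the ratio of colors across its copies.
fluorophores = ["RFP", "YFP", "CFP", "GFP"]
copies = 8

# Each distinct multiset of colors across the copies is a distinct hue.
hues = list(combinations_with_replacement(fluorophores, copies))
print(f"{len(hues)} possible color ratios")  # 165 -- the same order as the ~100 observed
```

Fewer effective copies, or hues too close together to tell apart under the microscope, would bring that number down toward the hundred or so shades they actually report.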
What I particularly enjoyed, though, was the attitude of the lab head, Jeff Lichtman of Harvard. He states that he doesn't really know exactly what they're looking for, but that this technique will allow them to just sit back and see what there is to see. That's a scientific mode with a long history, basically good old Francis Bacon-style induction, but we don't actually get a chance to do it as much as you'd think.
That varies with the area under investigation. In general, the more complex and poorly understood the object of study, the more appropriate it is to sit back and take notes, rather than go in trying to prove some particular hypothesis. (Neuroscience, then, is a natural!) In a chemistry setting, though, I wouldn't recommend setting up five thousand sulfonamide formations just to see what happens, because we already have a pretty good idea of what'll happen. But if you're working on new metal-catalyzed reactions, a big screen of every variety of metal complex you can find might not be such a bad idea, if you've got the time and material. There's a lot that we don't know about those things, and you could come across an interesting lead.
Some people get uncomfortable with “fishing expedition” work like this, though. In the med-chem labs, I’ve seen some fishy glances directed at people who just made a bunch of compounds in a series because no one else had made them and they just wanted to see what would happen. While I agree that you don’t want to run a whole project like that, I think that the suspicion is often misplaced, considering how many projects start from high-throughput screening. We don’t, a priori, usually have any good idea of what molecules should bind to a new drug target. Going in with an advanced hypothesis-driven approach often isn’t as productive as just saying “OK, let’s run everything we’ve got past the thing, see what sticks, and take it from there”.
But the feeling seems to be that a drug project (and its team members) should somehow outgrow the random approach as more knowledge comes in. Ideally, that would be the case. I’m not convinced, though, that enough med-chem projects generate enough detailed knowledge about what will work and what won’t to be able to do that. (There’s no percentage in beating against structural trends that you have evidence for, but trying out things that no one’s tried yet is another story). It’s true that a project has to narrow down in order to deliver a lead compound to the clinic, but getting to the narrowing-down stage doesn’t have to be (and usually isn’t) a very orderly process.
Category: Biological News | Drug Development | The Central Nervous System | Who Discovers and Why
December 5, 2007
How many hits can a drug – or a whole class of drugs – take? Avandia (rosiglitazone) has been the subject of much wrangling about cardiovascular risk in its patient population of Type II diabetics. But there have also been scattered reports of increases in fractures among people taking it or Actos (pioglitazone), the other drug with the same mechanism of action.
Now Ron Evans and his co-workers at Salk, who know about as much PPAR-gamma biology as there is to know, have completed a difficult series of experiments that provides some worrying data about what might be going on. Studying PPAR-gamma’s function in mice is tricky, since you can’t just step in and knock it out (that’s embryonic lethal), and its function varies depending on the tissue where it’s expressed. (That latter effect is seen across many other nuclear receptors, which is just one of the things that make their biology so nightmarishly complex).
So tissue-specific knockouts are the way to go, and bone is an interesting organ to do it in. The body is constantly laying down new bone tissue and resorbing the old. Evans and his team managed to knock out the system in osteoclasts (the bone-destroying cells), but not osteoblasts (the bone-forming ones). It's been known for years that PPAR-gamma has effects on the development of the latter cells, which makes sense, because it also affects adipocytes (fat cells), and those two come from the same lineage. But no one's been able to get a handle on what it does in osteoclasts, until now.
It turns out that without PPAR-gamma, the bones of the mice came out larger and much more dense than in wild-type mice. (That's called osteopetrosis, a word that you don't hear very much compared to its opposite). Examining the tissue confirmed that there seemed to be normal numbers of osteoblasts, but far fewer osteoclasts to resorb the bone that was being produced. Does PPAR stimulation do the opposite? Unfortunately, yes – there had already been concern about possible effects on bone formation because of the known effects on osteoblasts, but it turned out that dosing rosiglitazone in mice actually stimulates their osteoclasts. This double mode of action, which was unexpected, speeds up the destruction of bone while at the same time slowing down its formation. Not a good combination.
So there’s a real possibility that long-term PPAR-gamma agonist use might lead to osteoporosis in humans. If this is confirmed by studies of human osteoclast activity, that may be it for the glitazones. They seem to have real benefit in the treatment of diabetes, but not with these consequences. Suspicion of cardiovascular trouble, evidence of osteoporosis – diabetic patients have enough problems already.
As I've mentioned here before, I think that PPAR biology is a clear example of something that has turned out to be (thus far) too complex for us to deal with. (Want a taste? Try this on for size, and let me assure you that this is a painfully oversimplified diagram). We don't understand enough of the biology to know what to target, how to target it, and what else might happen when we do. And we've just proven that again. I spent several years working in this field, and I have to say, I feel safer watching it from a distance.
Category: Biological News | Diabetes and Obesity | Toxicology
November 11, 2007
As you root through genomic sequences - and there are more and more of them to root through these days - you come across some stretches of DNA that hardly seem to vary at all. The hard-core "ultraconserved" parts, first identified in 2004, are absolutely identical between mice, rats, and humans. Our last common ancestor was rather a long time ago (I know, I know - everyone works with some people who seem to be exceptions, but bear with me), so these things are rather well-preserved.
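The operational definition is simple enough to sketch in code. Here's a toy version in Python (my own illustration, assuming you already have the three genomes aligned; the 200-base-pair, perfect-identity cutoff is the one the original 2004 work used):

```python
def ultraconserved_runs(seqs, min_len=200):
    """Given equal-length aligned sequences (one per species), yield
    (start, end) spans where every species has the identical base at
    every position, for at least min_len positions in a row."""
    length = len(seqs[0])
    start = None
    for i in range(length + 1):
        # A position counts only if it's not a gap and all species agree.
        same = (i < length
                and seqs[0][i] != "-"
                and len({s[i] for s in seqs}) == 1)
        if same and start is None:
            start = i                     # a conserved run begins
        elif not same and start is not None:
            if i - start >= min_len:      # long enough to report?
                yield (start, i)
            start = None

# Toy demo with a low cutoff: the first 8 positions match in all three.
human, mouse, rat = "ACGTACGTAA", "ACGTACGTTA", "ACGTACGTAA"
print(list(ultraconserved_runs([human, mouse, rat], min_len=8)))  # [(0, 8)]
```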
Even important enzyme sequences vary a bit among the three species, so what could these pristine stretches (some of which are hundreds of base pairs long) be used for? The assumption, naturally, has been that whatever it is, it must be mighty important, but if we're going to be scientists, we can't just go around assuming that what we think must be right. A team at Lawrence Berkeley and the DOE put things to the test recently by identifying four of the ultraconserved elements that all seem to be located next to critical genes - and deleting them.
The knockout mice turned out to do something very surprising indeed. They were born normally, but then they grew up normally. When they reached adulthood, though, they were completely normal. Exhaustive biochemical and behavioral tests finally uncovered the truth: they're basically indistinguishable from the wild type. Hey, I told you it was surprising. This must have been the last thing that the researchers expected.
Reaction to these results has been a series of raised eyebrows and furrowed foreheads. Deleting any of the known genes near the ultraconserved sequences confirms that they, anyway, are as important as they're billed to be. And these genes show the usual level of difference that you see among the three species. So what's this unchanged, untouchable, but apparently disposable stuff in there with them?
No one knows. And it's a real puzzle, the answer to which is going to be tangled up with a lot of our basic ideas about genes and evolution. To a good first approximation, it's hard to see how (or why) something like this should be going on. So what, exactly, are we missing? Something important? And if so, what else have we missed, too?
Category: Biological News
October 29, 2007
There was an intriguing paper published earlier this month from Manfred Reetz and co-workers at the Max Planck Institute. It's not only an interesting finding, but a good example of making lemonade from lemons.
They were looking at an enzyme called tHisF, a thermostable beast from a marine microorganism that's normally involved in histidine biosynthesis. It has an acid/base catalytic site, so Reetz's group, which has long been involved in pushing enzymes to do more than they usually do, was interested in seeing if this one would act as an esterase/hydrolase.
And so it did - not as efficiently as a real esterase, but not too shabby when given some generic nitrophenyl esters to chew on. There was some structure-activity trend at work: the larger the alkyl portion of the ester, the less the enzyme liked it. Given a racemic starting material, it did a good job of resolution, spitting out the R alcohol in strong preference to the S isomer. All just the sort of thing you'd expect from a normal enzyme.
Next, they used the crystal structure of the protein and previous work on the active site to see which amino acids were important for the esterase activity. And here's where the wheels came off. They did a series of amputations to all the active side chains, hacking aspartic acids and cysteines down to plain old alanine. And none of it did a thing. To what was no doubt a room full of shocked expressions, the enzyme kept rolling along exactly as before, even with what were supposed to be its key parts missing.
Further experiments confirmed that the active site actually seems to have nothing at all to do with the hydrolase activity. So what's doing it? They're not sure, but there must be some other non-obvious site that's capable of acting like a completely different enzyme. I'm sure that they're actively searching for it now, probably by working through a list of likely point mutations until they finally hit something that stops the thing.
So how often does this sort of thing happen? Are there other enzymes with "active sites" that no one's ever recognized? If so, do these have any physiological relevance? No one knows yet, but a whole new area of enzymology may have been opened up. I look forward to seeing more publications on this, and I'll enjoy them all the more knowing that they came from a series of frustrating, head-scratching "failed" experiments. Instead of pouring things into the waste can, Reetz and his co-workers stayed the course, and my hat's off to them.
+ TrackBacks (0) | Category: Biological News
October 15, 2007
The news of a possible diagnostic test for Alzheimer’s disease is very interesting, although there’s always room to wonder about the utility of a diagnosis of a disease for which there is little effective therapy. The sample size for this study is smaller than I’d like to see, but the protein markers that they’re finding seem pretty plausible, and I’m sure that many of them will turn out to have some association with the disease.
But let’s run some numbers. The test was 91% accurate when run on stored blood samples of people who were later checked for development of Alzheimer’s, which compared to the existing techniques is pretty good. Is it good enough for a diagnostic test, though? We’ll concentrate on the younger elderly, who would be most in the market for this test. The NIH estimates that about 5% of people from 65 to 74 have AD. According to the Census Bureau (pdf), we had 17.3 million people between those ages in 2000, and that’s expected to grow to almost 38 million in 2030. Let’s call it 20 million as a nice round number.
What if all 20 million had been tested with this new method? We’ll break that down into the two groups – the 1 million who are really going to get the disease and the 19 million who aren’t. When that latter group gets their results back, 17,290,000 people are going to be told, correctly, that they don’t seem to be on track to get Alzheimer’s. Unfortunately, because of that 91% accuracy rate, 1,710,000 people are going to be told, incorrectly, that they are. You can guess what this will do for their peace of mind. Note, also, that almost twice as many people have just been wrongly told that they’re getting Alzheimer’s than the total number of people who really will.
Meanwhile, the million people who really are in trouble are opening their envelopes, and 910,000 of them are getting the bad news. But 90,000 of them are being told, incorrectly, that they’re in good shape, and are in for a cruel time of it in the coming years.
The people who got the hard news are likely to want to know if that’s real or not, and many of them will take the test again just to be sure. But that’s not going to help; in fact, it’ll confuse things even more. If that whole cohort of 1.7 million people who were wrongly diagnosed as being at risk get re-tested, about 1.556 million of them will get a clean test this time. Now they have a dilemma – they’ve got one up and one down, and which one do you believe? Meanwhile, nearly 154,000 of them will get a second wrong diagnosis, and will be more sure than ever that they’re on the list for Alzheimer’s.
Meanwhile, if that list of 910,000 people who were correctly diagnosed as being at risk get re-tested, 828 thousand of them will hear the bad news again and will (correctly) assume that they’re in trouble. But we’ve just added to the mixed-diagnosis crowd, because almost 82,000 people will be incorrectly given a clean result and won’t know what to believe.
I’ll assume that the people who got the clean test the first time will not be motivated to check again. So after two rounds of testing, we have 17.3 million people who’ve been correctly given a clean ticket, and 828,000 who’ve correctly been given the red flag. But we also have 154,000 people who aren’t going to get the disease but have been told twice that they will, 90,000 people who are going to get it but have been told that they aren’t, and over 1.6 million people who have been through a blender and don’t know anything more than when they started.
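For anyone who wants to check my arithmetic, here’s a quick sketch of the two rounds of testing in Python. I’m assuming, as I did above, that the single 91% figure covers both the false-positive and the false-negative rates, which (see the update below) may well not be true of the real test:

```python
population = 20_000_000   # rounded count of Americans aged 65 to 74
prevalence = 0.05         # NIH estimate: ~5% of this group develops AD
accuracy = 0.91           # reported accuracy, assumed symmetric for both error types

will_get_ad = population * prevalence      # 1,000,000 who really will get AD
wont_get_ad = population - will_get_ad     # 19,000,000 who won't

# Round 1: everyone gets tested
true_neg  = wont_get_ad * accuracy         # 17,290,000 correctly cleared
false_pos = wont_get_ad * (1 - accuracy)   #  1,710,000 wrongly flagged
true_pos  = will_get_ad * accuracy         #    910,000 correctly flagged
false_neg = will_get_ad * (1 - accuracy)   #     90,000 wrongly cleared

# Round 2: only the people flagged in round 1 go back for a retest
fp_cleared   = false_pos * accuracy        # ~1,556,100 now have one up, one down
fp_reflagged = false_pos * (1 - accuracy)  #   ~153,900 wrongly flagged twice
tp_reflagged = true_pos * accuracy         #   ~828,100 correctly flagged twice
tp_cleared   = true_pos * (1 - accuracy)   #    ~81,900 now have one up, one down

print(f"Wrongly flagged twice:   {fp_reflagged:>12,.0f}")
print(f"Wrongly cleared round 1: {false_neg:>12,.0f}")
print(f"Mixed, useless results:  {fp_cleared + tp_cleared:>12,.0f}")
```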
Sad but true: 91% is just not good enough for a diagnostic test. And getting back to that key point in the first paragraph, would 100% be enough for a disease that we can't do anything about? Wait for an effective therapy, is my advice, and for a better test.
Update: See the comments for more, because there's more to it than this. For one thing, are the false positive and false negative rates for this test the same? (That'll naturally make a big difference). And how about differential diagnosis, using other tests to rule out similar conditions? On the should-you-know question, what about the financial and estate planning implications of a positive test - shouldn't those be worth something? (And there's another topic that no one's brought up yet: suicide, which you'd have to think would be statistically noticeable. . .)
+ TrackBacks (0) | Category: Alzheimer's Disease | Biological News
October 11, 2007
Now we open the sedate, learned pages of Nature Methods, a fine journal that specializes in new techniques in molecular and chemical biology. In the August issue, the correspondence section features. . .well, a testy response to a paper that appeared last year in Nature Methods.
“Experimental challenge to a ‘rigorous’ BRET analysis of GPCR oligomerization” is the title. If you don’t know the acronyms, never mind – journals like this have acronyms like leopards have spots. The people doing the complaining, Ali Salahpour and Bernard Masri of Duke, are taking issue with a paper from Oxford by John James, Simon Davis, and co-workers. The original paper described a bioluminescence resonance energy transfer (BRET) method to see if G-protein coupled receptors (GPCRs) were associating with each other on cell surfaces. (GPCRs are hugely important signaling systems and drug targets – think serotonin, dopamine, opiates, adrenaline – and it’s become clear in recent years that they can possibly hook up in various unsuspected combinations on the surfaces of cells in vivo).
Salahpour and Masri take strong exception to the Oxford paper’s self-characterization:
“Although the development of new approaches for BRET analysis is commendable, part of the authors’ methodological approach falls short of being ‘rigorous’. . .Some of the pitfalls of their type-1 and type-2 experiments have already been discussed elsewhere (footnote to another complaint about the same work, which also appeared earlier this year in the same journal - DBL). Here we focus on the type-2 experiments and report experimental data to refute some of the results and conclusions presented by James et al.”
That’s about an 8 out of 10 on the scale of nasty scientific language, translating as “You mean well but are lamentably incompetent.” The only way to ratchet things up further is to accuse someone of bad faith or fraud. I won’t go into the technical details of Salahpour and Masri’s complaints; they have to do with the mechanism of BRET, the effect on it of how much GPCR protein is expressed in the cells being studied, and the way James et al. interpreted their results versus standards. The language of these complaints, though, is openly exasperated, full of wording like “unfortunately”, “It seems unlikely”, “we can assume, at best”, “(does) not permit rigorous conclusions to be drawn”, “might be erroneous”, “inappropriate and a misinterpretation”, “This could explain why”, “careful examination also (raises) some concerns”, and so on. After the banderilleros and picadors have done their work in the preceding paragraphs, the communication finishes up with another flash of the sword:
“In summary, we agree with James and colleagues that type-2 experiments are useful and informative. . .Unfortunately, the experimental design proposed in James et al. to perform type-2 experiments seems incorrect and cannot be interpreted. . .”
James and Davis don’t take this with a smile, naturally. The journal gave them a space to reply to the criticisms, as is standard practice, and as they did for the earlier criticism. (At least the editors know that people are reading the papers they accept. . .) They take on many of the Salahpour/Masri points, claiming that their refutations were done under completely inappropriate conditions, among other things. And they finish up with a flourish, too:
"As we have emphasized, we were not the first to attempt quantitative analysis of BRET data. Previously, however, resonance energy transfer theory was misinterpreted (for example, ref. 4) or applied incorrectly (for example, ref. 5). (Note - reference 4 is to a paper by the first people to question their paper earlier this year, and reference 5 is to the work of Salahpour himself, a nice touch - DBL). The only truly novel aspect of our experiments is that we verified our particular implementation of the theory by analyzing a set of very well-characterized. . .control proteins. (Note - "as opposed to you people" - DBL). . . .In this context, the technical concerns of Salahpour and Masri do not seem relevant."
It's probably safe to say that the air has not yet been cleared. I'm not enough of a BRET hand to say who's right here, but it looks like we're all going to have some more chances to make up our minds (and to appreciate the invective along the way).
+ TrackBacks (0) | Category: Biological News | Drug Assays | The Scientific Literature
September 6, 2007
It’s useful to be reminded every so often of how much you don’t know. There’s a new paper in PNAS that’ll do that for a number of its readers. The authors report a new protein, one of the iron-sulfur binding ones. There are quite a few of these known already, so this wouldn’t be big news by itself. But this one is the first of its kind to be found in the outer mitochondrial membrane, which makes it a bit more interesting.
It also has a very odd structure – well, odd to us humans anyway, for all we know things like this are all over the place and we haven’t stumbled across one until now. There’s a protein fold here which not only has never been seen in the 650 or so iron-sulfur proteins with solved structures, it’s never been seen in any protein at all. That’s worth a good publication, for sure.
The part that’ll really throw people, though, is that this protein (named mitoNEET, for the amino acids that make up its weird fold) binds a known drug whose target we all thought we already knew. Actos (pioglitazone) turns out to associate with it, which is a very interesting surprise. We already knew the glitazones as PPAR-gamma ligands. We didn’t understand how they worked as PPAR ligands (no one understands the PPARs very well, despite many years and many, many scores of millions of dollars), but that was generally accepted as their site of action.
And now there’s another one, which is going to make the pioglitazone story even more complex. Reading between the lines of the paper, I get the strong impression that the authors were fishing for another pioglitazone binding site, using modified versions of the drug to label proteins, and hit the jackpot with this one. (And good for them - that's a hard technique to get to work). There’s been some speculation that the compound might have effects on mitochondria that wouldn’t necessarily be PPAR-mediated, and this is strong circumstantial evidence for it.
What’s more, I can’t think of any other iron-sulfur proteins that are targets of small molecules. Just last week, I was talking about the diversity of binding sites and interactions that we haven’t explored in medicinal chemistry, and here’s an example for you.
This paper raises a pile of questions: what does mitoNEET do? Shuttle iron-sulfur complexes around? (If so, to where, and to what purpose?) Is it involved in diabetes, or other diseases of metabolism? Does pioglitazone modify its activity in vivo, whatever that activity is? How well does it bind the drug, anyway, and what does the structure of that complex look like? Does Avandia (rosiglitazone) bind, too, and if not, why not? Are there other proteins in this family, and do they also have drug interactions that we don’t know about? Ah, we’ll all be employed forever in this business, for as long as people can stand it.
+ TrackBacks (0) | Category: Biological News | Diabetes and Obesity
July 17, 2007
A commenter to my Proteomics 101 post the other day brought up an important point: that before you can have a chance to figure out what a protein is doing, you have to know that it exists. Finding the darn things is no small job, since you're digging through piles of chemically similar stuff to unearth them. What's more, we can't just ignore 'em: some of the low-concentration proteins are also correspondingly important and powerful.
Nasty arguments can erupt over whether a given protein and its proposed functions even exist. Crockery is flying over one of those right now, an insulin-like protein hormone dubbed "visfatin" by its discoverers in Osaka a couple of years ago. Well, in this case the protein probably exists, but does it do what it's advertised to do? An insulin mimic secreted by fat cells would be worth knowing about, but there doesn't seem to be enough of it present in the blood to do much of anything, given how well it binds to its putative targets. There are also reports that some of the data in the Osaka paper are hard to reproduce.
Complicating things even more is the (apparently well-founded) contention that visfatin is a re-discovery of a protein already known as PBEF, which is identical to another protein named Nampt. (Each "discovering" group assigned their own name, a situation that happens so often in biology that people don't even notice it any more).
The whipped topping on the whole thing is an accusation of misconduct by someone in Japan, which led to an investigation by Osaka University, which has now recommended that the original paper be retracted. Its lead author, Iichiro Shimomura, does not agree, as you might well imagine. The points of contention are many: whether the misconduct was real at all, or whether it describes real events that don't rise to the level of misconduct, or whether the conclusions of the paper are invalidated or not by them, and so on.
An early solution appears unlikely. And we still don't know what exactly visfatin/PBEF/Nampt is doing. Next time you wonder how things are going over in the proteome, consider this one.
+ TrackBacks (0) | Category: Biological News | Diabetes and Obesity | The Dark Side
November 6, 2006
One of the things I like most about science is that you really don't know what's going to happen next. That's especially true in the areas where things have just barely settled down. Before that, when a field is new, no one knows what to expect, so in a way there aren't really any surprising results: everything's a surprise. A much more settled area, by contrast, is far less likely to produce surprises, although when one shows up it really stands out. But a field where people are just starting to exhale and think that maybe they've finally figured out what's going on - that has the best combination of high contrast and a real likelihood for craziness.
Here's a perfect example, since I was just expressing some doubts about the immediate commercial potentials of RNA interference the other day. In a paper coming out in PNAS, a group at UCSF was investigating the use of some small double-stranded RNAs, just the sort of thing that can be used for RNAi experiments. But they found (to their great surprise) that their experiments were stimulating the transcription of their targeted genes, rather than shutting them down. Needless to say, this was not what anyone expected, and I'll bet the folks involved repeated these things many, many times before they could trust their own eyes. There are plenty of other people who won't believe it until they've seen it with theirs.
On a molecular biology level, it's hard to say just what's going on. The authors, according to this news item from Science (probably subscriber-only), say that they've found some rules about which genes will be susceptible to the technique and which won't, which will be released soon. (Translation: as soon as they can be reasonably sure that they won't make fools of themselves - this paper took enough nerve as it is).
The Science article includes a good deal of if-this-holds-up language, which is appropriate for such a weird discovery. (Are the editors there wondering why they didn't get a chance to publish the article themselves, or did they have the chance and turn it down?) At any rate, if-it-holds-up this effect will simultaneously complicate the RNAi field a great deal (it was gnarly enough already, thanks) and also open a door to some really unusual experiments. Upregulating genes isn't very easy, and there are no doubt many ideas that have been waiting on a way to do it. There are therapeutic possibilities, too, naturally - but they'll have to wait on the same difficulties as the other RNA therapies.
Anyway, I'm happy to see this. It opens up some completely new biology, and it opens a door to a potential Nobel for the discoverers should everything work out. And it always cheers me up when something totally unexpected flies down like this and lands on the lawn.
+ TrackBacks (0) | Category: Biological News
October 18, 2006
There's a curious paper (subscriber-only link) in the latest Nature that's getting some attention, titled "A linguistic model for the rational design of antimicrobial peptides". For non-subscribers, here's a synopsis of the work from the magazine's news site.
A group at MIT headed by Gregory Stephanopoulos has been studying various antimicrobial peptides, which are secreted by all kinds of organisms as antibiotics. Taking the amino acid sequences of several hundred of these and feeding them into a linguistic pattern-analyzing program suggested some common features, which they then used to synthesize 42 new unnatural candidates. The hit rate for these was about 50%, which is far, far more than you'd expect if you weren't tuning in to some sort of useful rules.
It's the concept of "peptide grammar" that seems to be the news hook here. But I'm quite puzzled by all the fuss, because looking for homology among protein sequences is one of the basic bioinformatics tools. I have to wonder what the MIT group found with their linguistics program that they wouldn't have found with biology software. What they're doing is good old structure-activity relationship work, the lifeblood of every medicinal chemist. Well, it's perhaps better described as sequence-activity relationships, but sequence is just a code for structure. There's nothing here that any drug company's bioinformatics people wouldn't be able to do for you, as far as I can see.
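To make that concrete, here's a toy version of the idea: a single regular expression standing in for a grammar motif, scored against a few made-up peptide sequences. The motif and the sequences are pure invention on my part - the paper's actual patterns were derived from hundreds of natural antimicrobial peptides and are considerably more elaborate - but it shows how ordinary this kind of motif-matching computation really is:

```python
import re

# Invented "grammar" motif: a cationic residue (K or R), any two residues,
# then two hydrophobics in a row. Not from the paper - just for illustration.
GRAMMAR = re.compile(r"[KR]..[LIVFW]{2}")

# Hypothetical peptide sequences, also invented for this example
candidates = ["KWKLFKKIEK", "GIGAVLKVLT", "AAAAAAAAAA"]

for seq in candidates:
    hits = GRAMMAR.findall(seq)
    print(f"{seq}: {len(hits)} motif hit(s) {hits}")
```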
So why haven't they? Well, despite the article's mention of a potential 50,000 further peptides of this type, the reason is probably because not many people care. After all, we're talking about small peptides here, of the sort that are typically just awful candidates for real-world drugs. And I'm not just babbling theory here - many people have actually tried for many years now to commercialize various antimicrobial peptides and landed flat on their faces.
You won't see a mention of that history in the Nature news story, unfortunately. They do, to their credit, mention (albeit in the fourth paragraph from the end) that peptides are troublesome development candidates. That's where it also says that there are reports that bacteria can become resistant even to these proteins, which prompts me to remind everyone that bacteria can become resistant to everything short of freshly extruded magma. It's in the very last paragraph of the story, though, that Robert Hancock of UBC in Vancouver says just what I was thinking when I started reading:
(Hancock) questions how different the linguistics technique is from other computational methods used to find similarities between protein sequences. "What's new is the catchy title," he says.
+ TrackBacks (0) | Category: Biological News | Drug Development | Infectious Diseases
March 14, 2006
I received (some time ago) an answer from Miguel Cizin and the folks at DoCoop, makers of Neowater. (If you haven't seen the first parts of this story, they're here and here). In that last post, I had a number of look-under-the-hood physical chemistry questions about the stuff, in an attempt to figure out if there's anything to it or not. Here they are in order, with the provided answers:
1. How much of Neowater's characteristics can be explained under the usual framework of colligative properties? That is, by how much is the boiling point of Neowater elevated, and by how much is its freezing point depressed?
The company provided some differential scanning and isothermal titration calorimetry data in response to this, which I appreciate. I'm no expert in this area, but to my eye the ITC plots look broadly similar, but with a noticeably longer half-life to thermal equilibrium in the Neowater runs. (It's not noted what substance was being injected in these experiments).
2. Similarly, what's its vapor pressure at STP? Does it show a negative deviation from Raoult's Law (as you'd expect from the descriptions in the patent of Neowater's structure), and is this deviation much greater than expected given the low levels of particulate matter contained? The literature on the DoCoop web site, I should note, mentions that Neowater evaporates more slowly than regular water.
DoCoop replies: "Neowater indeed evaporates more slowly than regular water, since the water molecules are less available as they are attracted to the charged nanoparticles, hence it takes more energy to dislodge them. The difference in the vapor pressure is one of the mechanisms of action that we use to alter the dynamics of reactions to benefit our customers. You are right, there is a difference in the vapor pressure indeed. We will not enter here into the metrics or actual values, since it is proprietary for use by customers, so we focused our answer on the claim itself only, rather than the detail, and hope you understand us." This isn't as complete an answer as I'd wish for - in fact, it doesn't add anything at all to what we've already been told, and I have a hard time believing that a deviation from Raoult's Law is proprietary information. But we'll let that go for now.
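Here's why I was asking. Under Raoult's Law, the vapor pressure depression scales with the solute's mole fraction, and at any plausible nanoparticle loading that mole fraction is vanishingly small. A quick back-of-the-envelope calculation (with an assumed particle concentration, since DoCoop won't provide a real one) shows what an ideal-solution treatment would predict:

```python
# Raoult's Law sketch: P = x_water * P0. The 1 mM particle concentration
# is my assumption, and almost certainly a generous overestimate.
P0 = 23.8                # vapor pressure of pure water at 25 C, in torr
water_molar = 55.5       # mol/L of water in liquid water
particle_molar = 1e-3    # assumed nanoparticle concentration, mol/L

x_water = water_molar / (water_molar + particle_molar)
P = x_water * P0
print(f"Predicted vapor pressure: {P:.4f} torr (pure water: {P0} torr)")
# ~23.7996 torr - a colligative effect far too small to notice. Any real,
# measurable slowdown in evaporation would be a large deviation indeed,
# which is exactly why I'd like to see the numbers.
```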
3. In the same vein, what's the surface tension of Neowater as compared to the water it's produced from? I could imagine it going either way - if large clusters of water are occupied around the nanoparticles, the surface layer of water may not form in as ordered a fashion, leading to lower surface tension. On the other hand, if Neowater is better thought of as a collection of larger polar "balls" of hydrated particles, perhaps the value could end up higher.
Answer: "Exactly as you stated above. This is another mechanism of action in Neowater that we use to the benefit of our customers for the enhancement of their reaction. In Neowater, the dynamic range of surface tension is +15% to -15% around 72 dyn."
Actually, I think that should be dyn/cm, and that value is smack on top of the normal values for water (between 72 and 73). Note, too, that a swing from +15% to -15% around 72 dyn/cm would span roughly 61 to 83 dyn/cm, which would be a dramatic range for something that's still mostly water. We're left to wonder what could cause it to vary higher and lower, though, and to wonder which of my explanations was correct. The DoCoop website has a picture of the stuff on a hydrophilic surface, showing a higher surface tension. I should note that if you want lower values, a drop of detergent will do the job nicely.
4. What's the conductivity of Neowater as compared to its untreated form? How does it change in the presence of small amounts of electrolytes as compared to regular water?
Answer: "Neowater's conductivity is like that in RO or distilled water. Neowater has no ions. It will change if (they're added). We are in the process of starting a research project with a NJ-based University on this application for batteries."
5. Have the rates of standard nucleophilic displacement reactions and/or cycloadditions been measured in Neowater? The presence or absence of a polar transition state and the resultant effect on reaction rate would make an interesting test of its properties. (Neowater is stated to be a "more hydrophobic" form of the liquid). Which reminds me: have Neowater's dipole moment and dielectric constant been determined?
Answer: " Neowater is an irregular media from the point of view of nucleophillic and cycloadditions. We did not find the right method to characterize this irregularity. We are open to suggestions because one of our business opportunities is in crystallization of proteins, where this issue is central. We do see irregularities of the nucleophilic behavior in Neowater with our university partners that are developing this application at the Weizman Institute in Israel. Regarding the dielectric constant measurement, there is a change in it in Neowater vs. regular water. We could not conclude yet the correlation b/w the shift in the structure of the "spinnor network" within Neowater if this is what you are trying to understand."
I would think that if you have a system that shows that Neowater is an "irregular" medium, then you'd have a method to begin characterizing it right there. But I'll wait to see if something comes out of the Weizmann work. For cycloadditions, I'd suggest looking at some of the aqueous Diels-Alder work from the 1980s.
And as for my question #6, about whether deuterated Neowater had been prepared, the company indicates that it hasn't done anything in that direction yet, although they are looking into the idea of using Neowater as an MRI contrast agent.
So, where does this leave us? While I appreciate the company taking time to answer my queries, I can't say that I'm all that much more informed compared to what I'd been able to find out from their press releases. That's clearly the way that they'd like to keep it, which is naturally their right from a business standpoint.
But from the scientific end, I have trouble buying into this "It's all proprietary for the use of our customers and the enhancement of shareholder value" explanation. Because if Neowater were really the sort of breakthrough that DoCoop's material makes it sound like, it would be worth a slew of research papers which would give it more scientific credibility. And since the company has already worked to secure its patent rights, such papers would certainly be feasible - desirable, even, considering the publicity that would follow.
And besides, if you want to know about the effects of nanoparticles in water, you can turn to the people who actually do publish their results. Perhaps any rate enhancement in PCR runs with Neowater is due to enhanced thermal conductivity - after all, temperature cycling is an essential part of the technique. How did I come to this conclusion? By reading this paper on the effects of aqueous nanoparticles on PCR reactions. It's a perfectly reasonable paper, and contains, as far as I can see, more data than DoCoop has ever released.
While we're on that subject, here's a site that will tell you so much about the effect of nanoparticles on thermal conductivity that you'll wish you'd never asked. Similarly, if you'd like to know more about the effect that nanoparticles have on water's surface tension, you could go here. If you wanted to learn more about the properties of water confined to nanoscale environments, you'd get a lot more out of this guy or this one than you would out of DoCoop's literature and patent filings, not that that would be very difficult.
So, all in all, I continue to be not very impressed. If Neowater were the kind of wild breakthrough that the company claims it to be, it would be worth more than its current use as a sort of STP-oil-treatment for PCR reactions. The company can, of course, have the last laugh on me over the next few years, and I wish them luck in doing so. But I'm betting that any breakthroughs in the aqueous nanoparticle area will find their way into the scientific literature in a more convincing fashion.
+ TrackBacks (0) | Category: Biological News
February 2, 2006
Genetic Engineering News is sort of an odd publication. Primarily a vehicle for big, glossy color ads, it publishes some articles of its own: guest editorials, roundups of news from conferences and trade shows, that sort of thing. And it also publishes plenty of things that are (that have to be) slightly rewritten press releases - the sort of articles that start off:
"InterCap Corp. and SynaDynaGen say that their research collaboration on biosecurity proteomics through RNA interference and four-dimensional mass spectrometry, now with the great taste of fish, is yielding results that will make customers roll over on their backs and pant. Speaking at the Weaseltech Investor's Conference, company spokescreatures vowed to. . ."
One of these in the December issue, though, is weird enough that you can hear the editorial staff wrestling with their better selves. Phrases like "The company claims. . ." and "Company spokesmen maintain. . ." keep running through the whole article. It's titled "Water-Based Nanotech for the Life Sciences", and profiles a small Israeli company called (oddly) DoCoop. What DoCoop is selling is water.
But not just any water. . .Neowater! (Trademarked, natch). This is "a stable system of highly hydrated, inert nanoparticles", which supposedly have thousands of ordered hydration shells around them. This, the company says, modifies the bulk properties of the water. And what does that buy you?
Well, according to the company (there, I'm doing it, too), it will do pretty much everything except change the cat's litter box for you. It makes reactions run faster, at lower concentrations. It improves all biochemical assays and molecular biology techniques - PCR, RNA interference, ELISAs, you name it. Brief mentions are made of delivering molecules directly into cells with the stuff. It has applications in diagnostic kits, in drug delivery, in protein purification, and Cthulhu only knows what else.
Some of these claims would seem to directly clash with each other. In the space of a few paragraphs, we hear that Neowater behaves "like a strong detergent", but somehow accelerates the growth of bacteria in culture. But at the same time it also prevents the formation of biofilms. And it increases the potency of antibiotics against bacteria, too. How it manages to do these things simultaneously is left, apparently, as an exercise for the reader.
The company claims that it has plenty of customers, and that it's working with several pharmaceutical companies to develop some of these applications. A search through the literature turned up one European molecular biology paper that mentioned using their PCR enhancing kit, so they've sold some Neowater for sure. But I'd like to turn this one over to the readers: have any of you seen this stuff? Know anyone who uses it?
And is everyone else's crank radar pinging as loudly as mine is? The thing is, unless a superior variety has up and evolved on us, cranks don't usually go out and form their own molecular biology reagent companies and place press releases in Genetic Engineering News. I'm profoundly sceptical of the claims this company makes, but I have the feeling that they're sincere in making them. Very odd, very odd indeed.
+ TrackBacks (0) | Category: Biological News
January 9, 2006
Update: Since the site was down most of Tuesday, I'm leaving this post up another day. Things have only worsened since I put it up, though. . .
I've been withholding my comments on the South Korean stem cell controversy, waiting to see how the story finally settled out. Well, it's good and settled now: the entire enterprise was a fraud. Here's a timeline of the whole sorry business, for people who need a recap. Start at the bottom of that page to experience it in the most painfully realistic way.
My first impulse, in the manner of anyone belonging to a group (biomedical researchers) whose reputations have been dented by such a case, is to point out that, yes, "the system worked". The fraudulent research was discovered and rooted out, papers were retracted, funding lost, brows slapped, all of it. And it hasn't taken that long, either. It's useful to point these things out to people who would like to throw mud on the whole enterprise of science.
See, for example, this blog review of a recent book on scientific fraud. Contrary to its repeated assertions, scientists do indeed realize that fraud happens, because every working scientist has seen it. For starters, most large academic departments have tales of grad students or post-docs whose work could never be trusted. And all of us in research have run into papers in the literature whose techniques won't reproduce, no matter what, and the suspicions naturally grow that they were the product of exaggeration or wishful thinking. The number of possible publication sins alone is huge: yields of chemical reactions get padded, claims of novelty and generality get inflated, invalidating research from other labs doesn't get cited.
It's painful for me to admit it, but this kind of thing goes on all the time. And as long as the labs are staffed with humans, we're not going to be able to get rid of it. The best we can do is discourage it and correct it when we can.
But that takes me to the second standard impulse that strikes in these situations, which is to ask what in the world these people were thinking. That's what's always puzzled me about major scientific fraud. The more interesting your work is, and the more fame you stand to gain from your results, the more certain you are to be found out if you fake it. There are obscure areas that you could forge and fake around in for years, and journals in which you could publish your phony results without anyone ever being the wiser. Of course, by definition those won't do you much good - heck, you might as well do real work by that point.
But faking the big ones, the worldwide-headline national-hero stuff - you can't get away with that for long, and Professor Hwang didn't. The closest parallels I can think of are the recent Jan Hendrik Schoen case and the thirty-year-old Summerlin mouse scandal. (These and several other infamous cases are summarized here and here.) I honestly find it hard to believe that there are others of that magnitude that anyone got away with.
I've never been able to imagine the state of mind of someone involved in this kind of thing. There you are, famous for something you've completely made up. In front of you are the cameras and reporters, while behind you, off in the distance, are hundreds of other scientists around the world busily trying to reproduce your amazing results. Every minute, they get closer to finding you out. How can anyone smile for the television crews under such conditions?
It's tempting to speculate about the state of the Korean scientific establishment and the role of Korean culture itself in this latest blowup. But such things have happened everywhere. The Korean factor certainly led to Hwang being an instant national figure with his face on every magazine and a dozen microphones trained on him wherever he turned. But it's not a Korean failing that did him in, it's a human one.
+ TrackBacks (1) | Category: Biological News
December 6, 2005
The medical-blog roundup known as Grand Rounds is up today at Dr. Charles, with a wide selection of good reading.
And this is a good time to announce that I'm going to be hosting the next installment a week from now. Please feel free to send along links to any good blog posts on medical topics - your own, or ones you've come across when you're supposed to be working.
+ TrackBacks (0) | Category: Biological News
November 17, 2004
A notable feature of 21st century molecular biology (so far!) is the emphasis on RNA. I've written before about RNA interference, a hugely popular (and hugely researched) way to silence the expression of proteins in living cells. Wide swaths of academia and industry are now devoted to figuring out all the details of these pathways, key parts of which are built into the cellular machinery. They turn out to regulate gene expression in ways that weren't even thought of before the late 1990s, and I've said for several years now that this field is the most obvious handful of tickets to Stockholm that I've ever seen. (Naturally, there are some worries that the whole field has perhaps been a bit over-promoted. . .)
Shutting off the production of targeted proteins is a wonderful thing, both from the basic research viewpoint and the clinical one. The more control you can have over the process, the better, and RNAi has been extremely promising. But as we're learning more about the system, complications are creeping in. Don't they always. . .
It turns out that the small interfering RNAs that are used, and are supposed to be the most efficacious and the most specific, aren't always what they seem. A disturbing recent study used one targeting luciferase, a firefly protein with no close relatives in the human genome. But applying it to the human-derived HeLa cell line showed effects on over 1800 genes - some of which only showed up at high concentrations, true, but none of these would have shown up at all in the ideal world we might have been living in for a while. There have also been experiments with RNAs that have been deliberately made with slight mismatches for their intended target, and some of them work rather too well.
Finally, as I mentioned about a year ago, there are reports that these small RNAs can set off an interferon response, suggesting that the technique can cause cells to respond as if they're under infectious attack. As you'd imagine, this can also complicate the interpretation of an experiment, especially if you're already targeting something that might interact with any of these pathways (and plenty of things do.)
None of these yellow flags are particularly large, but there are several of them now and probably more waiting to be noticed. (A good brief roundup of the situation can be found in the November issue of Trends in Genetics, for those with access.) Perhaps as we learn more we'll find ways to obviate these problems. If there's one thing for sure, it's that we haven't figured out all the tricks that RNA is capable of. But the companies that are racing to get RNAi therapies into the clinic are watching all this a bit nervously, hoping that they're not going to be those fools that you always hear about rushing in.
+ TrackBacks (0) | Category: Biological News
August 17, 2004
I'm going to take off from another comment, this one from Ron, who asks (in reference to the post two days ago): "would it not be fair to say that cellular biochemistry gets even more complicated the more we learn about it?"
It would indeed be fair. I think that as a scientific field matures it goes through several stages. Brute-force collection of facts and observations comes early on, as you'd figure. Then the theorizing starts, with better and better theories being honed by more targeted experiments. This phase can be mighty lengthy, depending on the depth of the field and the number of outstanding problems it contains. A zillion inconsistent semi-trivialities can take a long time to sort out (think of the mathematical proof of the Four-Color Theorem), as can a smaller number of profound headscratchers (like, say, a reconciliation of quantum mechanics with relativity as they deal with gravity.)
If the general principles discovered are powerful enough, things can get simpler to understand. Think of the host of problems that early 20th-century physics had, many of which resolved themselves as applications of quantum mechanics. Chemistry went through something similar earlier, on a smaller scale, with the adoption of the stereochemical principles of van't Hoff. Suddenly, what seemed to be several separate problems turned out to be facets of one explanation: that atoms had regular three-dimensional patterns of bonding to other atoms. (If that sounds too obvious for such emphasis, keep in mind that this notion was fiercely ridiculed and resisted at the time.)
Cell biology is up to its pith helmet in hypotheses, and is nowhere near out of the swamps of fact collection. As in all molecular biology, the sheer number of different systems is making for a real fiesta. Your average cell is a morass of interlocking positive and negative feedback loops, many of which only show up fleetingly, under certain conditions, and in very defined locations. Some general principles have been established, but the number of things that have to be dealt with is still increasing, and I'm not sure when it's going to level out.
For example, the other day a group at Sugen (now Pfizer) published a paper establishing just how many genes there are in mice that code for protein kinase enzymes. Through adding phosphoryl groups, these enzymes are extremely important actors in the activation, transport, and modulation of the activities of thousands upon thousands of other proteins, and it turns out that there are exactly 540 of them. (Doubtless there are some variations as they get turned into proteins, but that's how many genes there are.) And that's that.
Now, that earlier discovery of protein phosphorylation as a signaling mechanism was a huge advance, and it has been appropriately rewarded. And knowing just how many different kinase enzymes there are is a step forward, too. But figuring out all the proteins they interact with, and when, and where, and what happens when they do - well, that's first cousin to hard work.
+ TrackBacks (0) | Category: Biological News | In Silico
April 22, 2004
I mentioned the other day that not everything in that Stuart Schreiber interview sounded sane to me (although more of it does than I'd expected). The interviewer, Joanna Owens, asks him to expand on a statement he made about ten years ago: famously (in some circles, at any rate) Schreiber said that he wanted to - and thought that eventually he could - produce a small-molecule partner for every human gene.
A worthy goal, to be sure, but a honking big one, too. To his credit, though, Schreiber isn't making light of it:
". . .that challenge understates what we really want to do, which is to use small molecules to modulate the individual function(s) of multifunctional proteins, activating or inactivating individual functions as necessary. This is one of the differences between small molecules, for example, and the knockout of knowckdown technologies, where you inactivate everything to do with the protein of interest."
Note how things have appropriately expanded. There are a lot more proteins than there are genes (a lot more, given the surprisingly lowball figure for the total size of the human genome), and the number of protein activities is several times larger than that. He's absolutely right that this figure is the real bottom line. But here comes that Muhammad Ali side of his personality:
"Small molecules allow you to gain control rapidly, and can be delivered simply but, most importantly, we've shown that we can discover molecules that only modulate one of several functions of a single protein. . .(the scientific community has) identified 5000 out of the required 500,000 small molecules, which is similar to where the Human Genome Project was in year two of its 12-year journey. That might be a useful calibration - optimistically, we're ten years away."
Midway through that paragraph is where I start pulling back on a set of imaginary reins. Whoa up, there, Schreibster! Let's take the assumptions in order:
Small molecules allow you to gain control rapidly. . . Compared to transcription-level technology, this is largely correct. But the effects of small-molecule treatment often take a while to make themselves known, for a variety of reasons that we don't fully understand. The problem's particularly acute in larger systems - look at how long it takes for many CNS drugs to have any meaningful clinical effect. And these complex systems have other weird aspects, which make the phrase "gain control" seem a bit too confident. U-shaped dose-response relationships are common. Look at what you find in toxicology, where you see threshold effects and even hormesis, with large and small doses of the same substance showing opposite effects.
. . .and can be delivered simply. . . Well, when they can be delivered at all, I guess. But more of them come bouncing back at us than we'd like. In every drug research program I've been involved with, there are plenty of reasonable-looking compounds that hit the molecular target hard, but then don't perform in the cellular assay. You can come up with a lot of hand-waving rationales: perhaps the main series of compounds is riding in on some sort of active transport and these outliers can't, or they're getting actively pumped back out of the cell, or they hit some other sinkhole binding site that the others escape, and so on. Figuring out what's going on is an entire research project in itself, and rarely undertaken. Every time someone tells me that drug delivery is simple, I can feel my hair begin to frizz.
. . .we've shown that we can discover molecules that only modulate one of several functions of a single protein. . . True enough, and a very interesting accomplishment. But the generality of it is, to put the matter gently, unproven. It would not surprise me at all if there turn out to be many proteins whose functions can't be independently inhibited. The act of binding a small molecule to alter one of the functions would cause the other ones to change. And a bigger problem will be distinguishing these effects from the consequences of actually taking out that first function cleanly: how will you know when you've altered the system?
. . .which is similar to where the Human Genome Project was in year two. . . True, but that and forty dollars will get you an Aldrich Chemical can opener. The comparison isn't just optimistic - it's crazy. The problems that the genome sequencers faced were engineering problems - difficult, tricky, infuriating ones, but with solutions that were absolutely within the realm of possibility. Faster machines were made, with more computing power, and new techniques were applied to make use of them.
But as I've been saying, I'm not sure that the Maximum Inhibitor Library that Schreiber's talking about is even possible at all. Don't get me wrong - I hope that it is. We'll learn so much biochemistry that our heads will hurt. But its feasibility is very much open to question, to many questions, and we won't even begin to know the answers until we've put in a lot more work.
+ TrackBacks (0) | Category: Biological News | Drug Assays | Drug Development
April 13, 2004
You've probably heard of the hypothesis that a reasonable amount of dirt is good for you, especially in childhood. (My kids are certainly taking no chances.) The idea is that the immune system needs a certain amount of challenge to develop properly, so trying to live too antiseptic a life is a mistake. I think that this is very likely correct, and it turns out that it's especially correct if you're a zebrafish.
Not many of my readers are zebrafish, at least as far as I can tell from my referral logs, but they're an influential demographic. Danio rerio isn't as well known outside biology as say, the fruit fly, but it's a workhorse model organism for vertebrate development. Zebrafish are small, fast-growing, and the embryos are nearly transparent in their earlier stages. (Xenopus frogs share these characteristics, and have their partisans, too.)
The March 30th issue of the Proceedings of the National Academy of Sciences, with a Warholian zebrafish cover, features a study from Washington U. where the fish were raised under strictly aseptic (gnotobiotic) conditions. That's not easy to do, but if you make absolutely sure that no bacteria are present, it turns out that the embryos don't even develop properly. The defects are in the gut, which makes a lot of sense.
It turns out that colonization by normal intestinal flora is vital - zebrafish and their bacteria have become evolutionarily entangled. The bacteria actually induce some crucial gene expression by their presence, and the developmental program just doesn't have an aseptic default setting. There hasn't been an aseptic zebrafish since the beginning of biological time.
OK, these guys swim around in tropical pools, floating in a bacterial soup. But we're floating in one, too, just at a slightly lower density. Every part of a human body that can be easily (benignly) colonized by bacteria already is. Are there similar developmental effects in man? It wouldn't surprise me at all. No one's going to be running that exact embryo experiment, needless to say, but there are probably ways to sneak up on the answer using cell cultures. There's never been an aseptic human baby, either. . .although this is enough to make a person wonder about situations where a pregnant mother has had to take a long course of powerful antibiotics.
| Category: Biological News | Infectious Diseases
January 19, 2004
Signaling between cells is weirder than we used to think it was. There's a hardy perennial, all right - that sentence could have been written whenever you like for the past fifty years or so. But the surprises keep on coming. Some of the most intense communication needs are between neurons, as you'd expect, and it looks like nature has taken advantage of all kinds of things to achieve greater bandwidth.
Everyone now learns about nitric oxide and its effects when they study physiology. The thought of a toxic gas as a neurotransmitter was a tough one to deal with at first, but the evidence was overwhelming. Then in the 1990s, two more oddities were proposed in the same category, and they're even more poisonous: hydrogen sulfide and carbon monoxide. Comments of the "You have to be kidding" sort greeted the initial work in this field, but the nitric oxide work had opened the door. It now appears that these two are, indeed, important signaling molecules in the brain.
Hydrogen sulfide has a number of physiological effects, other than poisoning you (or, in lower concentrations, making you choke on its delightful aroma.) It seems to act on smooth muscle along with nitric oxide (now, there's a combination I would go out of my way to avoid breathing), and it also seems to have a role in laying down long-term memory. Carbon monoxide also seems to have a number of different functions - it's vasoactive, like its gaseous partners, but also seems to be involved in the immune response and in cellular protection and repair.
Taking their cue from the nitric oxide research, which has stimulated a huge amount of drug discovery work over the years, people are now trying similar tricks with these new gases. Look for more and more work on these in the drug industry as their mechanisms get fleshed out.
What's next? Well, it's not impossible that some other small-molecule gases have their own pathways, too. These things have properties that aren't shared by any other molecules, and perhaps they're being put to use. Ammonia, sulfur dioxide, and nitrous oxide have all been proposed as candidates. Thinking along those lines, I have to wonder about the small alkyl derivatives like methylamine and dimethyl sulfide, too. Why not? But if someone gets around to claiming chlorine or the other halogens, I'm going to start to wonder. And if there turns out to be a physiological role for the noble gases, I'd start to suspect that Einstein was wrong: maybe God's approach to scientific laws is malicious after all. It would explain a lot, now that I think about it. . .
+ TrackBacks (0) | Category: Biological News
January 18, 2004
Remember the genomics gold rush? Back about five or six years ago? Sure you do! People were lining up to throw money at companies that could deliver human gene sequences, as part of the never-ending search for new drug targets. (OK, it's not quite never-ending, but for the time horizon we have in the industry, we'll stick with that adjective.) Well, a good part of the reasoning behind all those sequencing deals may have just taken another hit.
Even while the genomic craze was at its peak, there were doubters. Gene sequences should, in principle, read out directly into protein sequences. But there are already some complications, since DNA and RNA sequences can, at various points, be spliced and recombined. We think we know the signs of that happening, but it's still another thing to worry about.
But even if there's no funny business, it's not easy getting useful information from a raw sequence. We still can't predict protein structures de novo, not to the degree that medicinal chemistry needs. You can learn a few things - homology statistics can place your unknown protein into a known family, or (failing that) at least tell you stuff like whether it's likely to be membrane-bound. But knowing that something's, say, a G-protein-coupled receptor sure isn't enough to tell you what it does in vivo, or if it's a valid drug target (and if so, for what disease.)
And there's always been a lot more to the cohort of proteins than just the corresponding genomic sequences. That's where the doubting voices got louder. Proteins get modified in all kinds of ways. They get phosphorylated and glycosylated around their outsides, for example, which can profoundly change their function. And they can get sliced up into smaller proteins, too. Happens all the time - plenty of bioactive proteins are produced from a larger precursor, carved off as needed like sandwich meat at the deli counter. (The enzymes that do the carving can be very good drug targets indeed.)
Enter the latest craziness, from J. C. Yang's lab at the National Cancer Institute. There's an exhilarating (or alarming, depending on your point of view) paper in the latest issue of Nature (427, 252), whose authors have seen something that no one had ever seen in higher organisms. They've shown that not only can proteins be chopped up in the cell, but that the various fragments can be spliced back together in new combinations. In their case, they showed that cells could produce a nine-amino-acid peptide from a 49-amino-acid precursor. The middle 40 got snipped out, and the two ends were spliced together to make the nine-mer. You're never going to be able to read off the sequence for that one, now, are you?
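Just to spell out the arithmetic of that trick, here's a sketch with an invented 49-residue sequence and an invented split point (the paper's actual peptide and its splice junction are different):

```python
# A made-up 49-residue precursor, for illustration only
precursor = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLS"
assert len(precursor) == 49

# Assume the splice keeps 4 N-terminal and 5 C-terminal residues;
# the middle 40 get snipped out and the ends are ligated together.
spliced = precursor[:4] + precursor[-5:]
print(spliced, len(spliced))  # a 9-mer that appears nowhere as a contiguous genomic read-out
```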
This sort of thing goes on all the time in single-celled creatures, and is known all the way up to, oh, bean plants. But it had sure never been seen in mammalian cells. How does this process happen, and how important is it? Who knows! It might turn out to be a rare curiosity, or it might turn out to be something really important that we've completely missed seeing all these years. To be sure, no one's reporting coming across a lot of important proteins whose sequences couldn't be matched in the genome somewhere. But there are an awful lot of proteins whose sequence we don't know, so the upper and lower bounds of this new phenomenon are fuzzy.
This paper, you can bet, has already set off a flurry of research. Perhaps there are some unexplained proteomic problems out there which this will turn out to answer. And here's a prediction: it wouldn't surprise me if protein splicing had already been seen by someone else, who looked at the data, thought about it, and said "Naaaah. That can't happen. Must have messed something up somewhere. . ." If this turns out to be physiologically important, it's Nobel material for sure. Listen closely, and you may hear the sound of someone kicking themselves.
+ TrackBacks (0) | Category: Biological News
January 11, 2004
In my industry, you hear a lot of talk about drug targets and their relative chances of success. Targets fall into several broad classes, and when you take a close look, there are clearly some that are easier to hit than others. The G-protein coupled receptors (GPCRs) are one of those (antihistamines and beta-blockers are classic examples), and various hydrolytic enzymes are another (ACE inhibitors, HIV protease inhibitors, PDE inhibitors like Viagra, etc.)
But there are some other categories that are severely under-represented. "Interaction" targets is what I'd call a broad group of these. The ligands for the easier enzymes and GPCRs fit into defined binding pockets, which have evolved for small molecules. It's the old lock-and-key picture. But trying to affect the binding between two proteins, or of a protein with a stretch of DNA/RNA - now, that's something else again. There's no single binding pocket there, at least not on the scale of a drug-sized molecule. Instead of fitting different-shaped keys into existing locks, we're faced with trying to wedge something in between a door and its frame.
It's hard to get in there, and our molecules are often too small to have much effect. But the number of drug targets in this class is huge; we're going to have to come to terms with them eventually. But for now, one of the best ways is to carefully study the various high-value targets and see if there are some that look more likely to work, given what we already know how to do. That's what a group at Roche has been up to recently, and they've reported their success in an online preprint in Science.
They're after a protein called MDM2, which acts as a brake on the activity of a more famous protein from the p53 tumor-suppressor gene. In many cancers, it would be good to block this interaction and get the p53 system back to being as revved-up as possible. (Of course, in many other cancers, this gene has already been taken out of action by one mutation or another, which is probably a key step in their formation. Those won't be candidates for MDM2-blocking therapies.)
In 1996, a group at Sloan-Kettering published an X-ray crystal structure of the two proteins, which showed that there was a fairly clear pocket that seemed responsible for a lot of the binding. It looked like a possible candidate for a small molecule, but this is the first report of real success in targeting it (although others are hard at work.) The Roche group found some polyaryl imidazoline structures through high-throughput screening that seem to do the job. One of them is even orally active in a rodent tumor model, which is quite an accomplishment. And as proof of the mechanism, the compounds are inactive against those cancer cell lines that have already lost their p53 gene.
This is good news, since we can always use another route to cancer therapy. But I'm not sure how broadly applicable this is going to be. I'm sure that there will be talk of new interest in protein-protein drug targets, but this one is (unfortunately) an anomaly. The sort of small, reasonably well-defined pocket that's at work here doesn't show up very often, and it's not like people haven't been looking. News that these things can succeed will stimulate more work in the area, true. But a lot of the effort was going into targets like this one already, precisely because the other protein-protein targets have seemed destined to fail.
My mental picture of those targets is of two oil tankers slowly coming together, brought closer as dozens of small grappling hooks whiz out and clang onto different parts of their decks. With a small molecule, we're trying to interfere with that by sticking a fishing boat in between them. Not easy, but we're going to have to figure it out eventually. Protein-protein interactions are a hot topic these days (go off and Google "proteomics", but stand well clear while you do it!) so we're bound to learn a lot more in the next few years.
For now, congratulations to Roche as they move forward toward the clinic. They'll be the first to find out what blocking MDM2 binding is going to do to animals - how well it'll treat those with cancer, and what side effects it might have on those without. I hope there's daylight in between those two groups!
December 9, 2002
I'll bet that this is the only hit for a Google search for that word! I typed it out as I was thinking about how some major classes of biomolecules - protein, carbohydrate, lipid, and nucleic acid - are perceived. If you look at the number of papers published, and the number of details worked out in each field, you'd think that proteins are the single most important constituent of a living cell, followed by DNA / RNA. Are they?
I think this is partly an artifact of how easy things are to work with. There are only five purine / pyrimidine bases used in all DNA and RNA, and only two sugars (ribose or deoxyribose.) That level of simplicity is what's allowed sequencing techniques to become so automated so quickly. I'm not saying that there isn't plenty of complexity in the area - you get all sorts of hard-to-sequence hairpins and the like - but having only a few building blocks has helped enormously.
Proteins are the next step up. There are twenty-odd amino acids that you have to worry about, which gets pretty combinatorially complicated. (If you wanted to make, say, 100 milligrams for your compound files of every 20-amino-acid protein combination there is, you'd run into a severe problem having to do with the amount of available carbon on earth.) Direct protein sequencing can be done, but it's nowhere near as easy as it is for DNA. Proteins have the advantage of being much easier to handle than nucleic acids, though, and many of them are robust enough to stand all kinds of mistreatment. That helped biochemists get a good start on enzymes before any other aspect of molecular biology got on its feet at all.
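Just for fun, here's the back-of-the-envelope version of that carbon problem (every figure below is my own rough assumption, not from any particular reference):

```python
# Back-of-the-envelope check on the compound-file thought experiment.
# All figures here are rough, order-of-magnitude assumptions.
n_peptides = 20 ** 20            # every 20-mer from the 20 standard amino acids
mass_each_kg = 100e-6            # 100 milligrams apiece
total_kg = n_peptides * mass_each_kg
print(f"{total_kg:.1e} kg of peptide")            # ~1.0e+22 kg

# Assume roughly half the mass of a typical peptide is carbon, and take
# the carbon in the Earth's crust as being on the order of 1e20 kg:
carbon_needed_kg = 0.5 * total_kg
crustal_carbon_kg = 1e20
print(f"Need ~{carbon_needed_kg / crustal_carbon_kg:.0f}x the crust's carbon")
```

With those (loose) numbers, the shopping list comes to dozens of times all the carbon in the crust. No purchasing department is going to sign off on that one.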
So, how about lipids? Here's where things start to get ugly. There are a *lot* more than 20 or 30 kinds of lipid molecules in a living system - all sorts of chain lengths, unsaturations, cis/trans isomers, mono-, di-, and triglycerides, and so on. (I won't even get into phosphorylation, since that's a big variable in the protein world, too.) And what about steroids, prostaglandins, and all the other lipid-derived stuff? All of these things are a real pain to work with, too, since they're often found transiently or at very low concentrations, and their solubility is almost always awful by definition. It takes some really good techniques to separate the various lipid constituents out of the greasy mess.
And carbohydrates? I worked a lot with smaller ones in my graduate school days, and people still look at me funny for it. Sugars are as bad as they come for complexity - there are plenty of them, and they can be connected any number of ways to make macromolecules. By contrast, proteins are basically linear front-to-back chains (curled up, twisted, fractal-dimension space-filling chains, but chains nonetheless.) Complex carbohydrates branch out all over the place, and they'll really make your life miserable. Despite years of work, there's not a general way (yet) to automatically sequence one, although the situation is getting better. But if we had to depend on carbohydrate sequencing to read the genetic code, we'd be up the creek for sure. Their physical properties can be quite squirrelly, too, making them very little fun to purify.
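To put a rough number on that connectivity problem, here's a crude count (a simplified model of my own, ignoring ring-size isomers and anything larger than two units) of the ways two identical building blocks can be joined:

```python
# Crude combinatorial comparison: peptide bond vs. glycosidic bond.
# Simplified model of my own; real sugar chemistry offers still more
# variety (furanose vs. pyranose rings, branching, and so on).

# Two identical amino acids can form exactly one dipeptide:
dipeptides = 1

# For two identical hexopyranoses, the anomeric carbon (C1) of one,
# in alpha or beta configuration, can bond to the hydroxyls at C2, C3,
# C4, or C6 of the other; the two anomeric carbons can also join
# directly (alpha,alpha / alpha,beta / beta,beta):
disaccharides = 2 * 4 + 3
print(dipeptides, "dipeptide vs.", disaccharides, "disaccharides")  # 1 vs. 11
```

Scale that up to a branched oligosaccharide and you can see why nobody's automated the sequencing yet.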
So there are at least two important classes of biomolecules that probably don't get their due, because they're a lot more hostile to work with. And that should tell you how well we can handle the mixes between them - glycosylated proteins, nucleic-acid protein complexes, lipid conjugates. Pretty poorly, is how. It's a mess out there.
October 14, 2002
There's been a flurry of news about gene therapy, a high-risk, high-reward area of research from the very beginning. The biggest success stories came recently in the treatment of X-linked severe combined immunodeficiency (SCID), the so-called "bubble boy" disease. But the course of true therapy never did run smooth, and there have been potentially dire complications.
SCID is fortunately rare, because it's a bad-news condition. Patients are essentially left without a functioning immune system, and everyone in that position dies early from opportunistic infections. The sort of thing that would give a healthy individual a nasty cough for a few days is a fatal illness if you don't have T-cells and their partners. One of the most common genetic defects leading to this condition is a loss of the enzyme adenosine deaminase, but there are several others that will put you in the same boat. The recent good news/bad news incidents concern SCID mediated by the loss of a protein called gc (for gamma-chain), which is involved in cytokine signaling. There are some significant differences in trying to treat these two varieties, but gc-loss is probably easier to treat (a relative judgment if ever there was one; they're both tough.)
The standard therapy is bone marrow transplantation. This uses tissue from a matched healthy donor, usually after some level of intentional destruction of the existing marrow. When things really are matched identically, the prognosis is excellent, but the problem is that finding such a tissue match isn't always easy. A lesser degree of similarity, HLA-haploidentical tissue matching, is the next option. Survival rates in those cases are lower, although still around 75%, which most surely beats an early and certain death. But these patients don't usually get the full range of their immune response back. Specifically, B cells and NK cells aren't restored to normal levels, and even T-cell counts can start falling with time.
So there's room for improvement, and if you're a patient for whom no good tissue match exists, there's room for a lot of improvement. Thus gene therapy. The basic idea is similar to using bone marrow from a donor, only you donate your own marrow, newly refurbished, to yourself. The original marrow cells are replaced with genetically altered cells which have had the proper gene spliced into them.
Which sounds reasonably simple, but getting the gene into the cells is the voodoo part of the whole sequence. There are any number of ways of doing that, each with its known advantages and disadvantages, and each with plenty of unknown things waiting to emerge. Much of the progress in gene therapy has come from refining the vectors used to introduce the genes, but it's still a pretty crude process. In the standard method, a crippled form of a retrovirus is used, one lacking the RNA sequences for some key proteins that it would need to reproduce itself.
The problem is, these retroviruses go around jamming in genetic material all over the place. Sometimes it'll end up in a place where it can get transcribed into active protein, and sometimes it won't. If it inserts right into the middle of some key cellular gene that has to be read off later, the cell will probably die when it tries to do that. You just incubate as many stem cells as you can get, and hope for the best.
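To get a feel for the odds, here's a rough calculation (every number in it is an assumption for illustration, not a figure from the actual trial):

```python
# Rough odds that some random retroviral insertion lands in one
# particular trouble-making gene. All numbers are illustrative
# assumptions, not data from the SCID trials.
genome_bp = 3.0e9      # haploid human genome size
target_bp = 1.0e5      # span of a gene where an insertion causes trouble
p_hit = target_bp / genome_bp          # per-insertion chance, ~3e-5

n_cells = 1.0e7        # transduced stem cells in a graft (assumed)
# Chance that at least one cell takes an insertion in that gene,
# assuming one independent insertion per cell:
p_at_least_one = 1 - (1 - p_hit) ** n_cells
print(f"per-cell risk {p_hit:.1e}, graft-wide risk {p_at_least_one:.3f}")
```

With numbers anywhere in that neighborhood, some cell in the graft is nearly certain to take an unlucky hit somewhere; the real question is whether that particular cell then outgrows its neighbors.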
In several of the patients, that's what they got. They seem to have completely restored immune systems, a first for non-tissue-matched SCID patients. But in one case, the vector appears to have inserted itself into precisely the wrong place: right at a gene that codes for a protein called LMO-2, a regulator of blood-cell growth and development. This could have happened in only one cell out of the entire transplant, but one cell is enough. The insertion switched that gene on where and when it shouldn't have been, sending the cell into full-tilt reproduction and growth, which is another word for cancer. A new man-made form of leukemia was the result.
Analysis of the proliferating T-cells showed that, indeed, the viral sequences had landed right in the LMO-2 locus. The boy involved has a family history of a higher incidence of tumors, and he had a chicken-pox infection after his transplant (which must have been a scary test of its efficacy.) Either of these could have made the situation worse. He's receiving chemotherapy now, and as of the last report his doctors are cautiously optimistic that the rogue cells can be brought under control.
So, does this stop the gene therapy world in its tracks? Not at all, as it turns out. In what I think is a very realistic risk/reward appraisal, an FDA advisory committee met last week and decided to press on with such experiments in the US. After all, it's the only chance these patients have. And a pediatric oncologist at the National Cancer Institute put it well: "If we threw out every therapy in cancer that causes cancer," she said, "we would get rid of some of our most effective ones." For better or worse, that's the state of the art. Good luck to all involved.