Corante

About this Author
[Photo: College chemistry, 1983]

[Photo: Derek Lowe, the 2002 model]

[Photo: After 10 years of blogging. . .]

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly at derekb.lowe@gmail.com or find him on Twitter: Dereklowe

Chemistry and Drug Data:
Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigilatta
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

Category Archives

October 21, 2014

Oxygenated Nanobubbles. For Real?

Posted by Derek

A longtime reader sent along this article, just based on the headline. "This headline triggers instant skepticism in me", he said, and I agree. "Potential to treat Alzheimer's" is both a bold and a weaselly statement to make. The weasel part is that sure, anything has the "potential" to do that, but the boldness lies in the fact that so far, nothing ever has. There are a couple of very weak symptomatic treatments out there, but as far as actually addressing the disease, the clinical success rate is a flat zero. But that's not stopping these folks:

“The impact of RNS60 on Alzheimer’s disease as outlined in our studies presents new opportunities for hope and deeper research in treating a disease that currently cannot be prevented, cured or even successfully managed,” said Dr. Kalipada Pahan, professor of neurological sciences, biochemistry and pharmacology and the Floyd A. Davis, M.D., endowed chair of neurology at the Rush University Medical Center. “Our findings sparked tremendous excitement for RNS60, identifying an opportunity for advanced research to develop a novel treatment to help the rapidly increasing number of Alzheimer’s disease and dementia patients.”

Well, good luck to everyone. But what, exactly, is RNS60, and who is Revalesio, the company developing it? I started reading up on that, and got more puzzled the further I went. That press release described RNS60 as "a therapeutic saline containing highly potent charge-stabilized nanostructures (CSNs) that decrease inflammation and cell death." That didn't help much. Going to the company's web site, I found this:

Revalesio is developing a novel category of therapeutics for the treatment of inflammatory diseases using its proprietary charge-stabilized nanostructure (CSN) technology. Revalesio’s products are created using a patented device that generates rotational forces, cavitation and high-energy fluid dynamics to create unique, stable nanostructures in liquids. CSNs are less than 100 nanometers in size (for reference, the width of a single strand of hair is 100,000 nanometers) and are established through the combination of an ionic scaffold and a nano-sized oxygen bubble core.

RNS60 is Revalesio’s lead product candidate based upon CSN technology. RNS60 is normal, medical-grade, isotonic saline processed with Revalesio’s technology. RNS60 does not contain a traditional active pharmaceutical ingredient and offers a unique and groundbreaking approach to treating diseases . . .

OK, then. If I'm getting this right, this is saline solution with extremely small bubbles of oxygen in it. I'm not familiar with the "nanobubble" literature, so I can't say if these things exist or not. I'm unwilling to say that they don't, because a lot of odd things happen down at that small scale, and water is a notoriously weird substance. The size of the bubbles they're talking about would be what, a few hundred oxygen molecules across? Even proving that these structures exist and characterizing them would presumably be a major challenge, analytically, but I have some more reading to do on all that.
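
As a rough sanity check on that guess, here's a back-of-the-envelope sketch (assuming an O2 kinetic diameter of about 0.35 nm and taking the company's 100 nm upper size limit at face value):

```python
# Back-of-the-envelope: how many O2 molecules would span a "nanobubble"?
# Assumes a kinetic diameter of ~0.35 nm for molecular oxygen.
bubble_diameter_nm = 100.0   # upper size limit quoted by Revalesio
o2_diameter_nm = 0.35        # approximate kinetic diameter of O2

molecules_across = bubble_diameter_nm / o2_diameter_nm
print(f"~{molecules_across:.0f} O2 molecules across a {bubble_diameter_nm:.0f} nm bubble")
# ~286 molecules across - "a few hundred", as guessed above
```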

My problem is that there have been many, many odd water products reported over the years that involve some sort of nanostructure in the solution phase. And by "odd", I mean fraudulent. Just do a quick Google search for any combination of phrases in that area, and the stuff will come gushing out - all sorts of claims about how the water being sold is so, so, different, because it has different clusters and layers and what have you. My second problem is that there have been many, many odd products reported over the years that claim to be some sort of "oxygenated" water. Do a Google search for that, but stand back, because you're about to be assaulted by page after page of wild-eyed scam artists. Super-oxygen miracle water has been a staple of the health scam business for decades now.

So the Revalesio people have a real challenge on their hands to distinguish themselves from an absolute horde of crackpots and charlatans. The web site says that these oxygen nanobubbles "have a stabilizing effect on the cell membrane", which modulates signaling of the PI3K pathway. The thing is, there are a number of publications on this stuff, in real journals, which is not the sort of thing you find for your typical Internet Wonder Water. The president of the company is a longtime Eli Lilly executive as well, which is also rather atypical for the fringe. Here's one from J. Biol. Chem., and here's one on nanoparticle handling from the Journal of Physical Chemistry. The current neuronal protection work is in two papers in PLOS ONE, here and here.

I'm baffled. These papers talk about various cellular pathways being affected (PI3K, ATP production, phosphorylation of tau, NF-κB activation, and so on), which is a pretty broad range of effects. It's a bit hard to see how something with such effects could always be positive, but paper after paper talks about benefits for models of Parkinson's, multiple sclerosis, exercise physiology, and now Alzheimer's. A common thread could indeed be inflammation pathways, though, so I can't dismiss these mechanisms out of hand. But then there's this paper, which says that drinking this water after exercise improves muscle recovery, and I'm just having all kinds of trouble picturing how these nanostructured bubbles make it intact out of the gut and into the circulation. If they're sticking all over cell membranes, don't they do that to every cell they come in contact with? Are there noticeable effects in the gut wall or the vascular endothelium? What are the pharmacokinetics of nanobubbles of oxygen, and how the heck do you tell (other than maybe with a radiolabel)? I'm writing this blog entry on the train, where I don't have access to all these journal articles, but it'll be interesting to see how these things are addressed. (If I were running a program like this one, and assuming that my head didn't explode from all the cognitive dissonance, I'd be trying it out in Crohn's and IBD, I think - or do all the nanobubbles get absorbed before they make it to the colon?)

So I'm putting this out there to see if everyone else gets the same expressions on their faces as I do when I look this over. Anyone have any more details on this stuff?

Comments (91) + TrackBacks (0) | Category: Biological News

October 16, 2014

The Electromagnetic Field Stem Cell Authors Respond

Posted by Derek

The authors of the ACS Nano paper on using electromagnetic fields to produce stem cells have responded on PubPeer. They have a good deal to say on the issues around the images in their paper (see the link), and I don't think that argument is over yet. But here's what they have on criticisms of their paper in general:

Nowhere in our manuscript do we claim “iPSCs can be made using magnetic fields”. This would be highly suspect indeed. Rather, we demonstrate that in the context of highly reproducible and well-established reprogramming to pluripotency with the Yamanaka factors (Oct4, Sox2, Klf4, and cMyc/or Oct4 alone), EMF influences the efficiency of this process. Such a result is, to us, not surprising given that EMF has long been noted to have effects on biological system(Adey 1993, Del Vecchio et al. 2009, Juutilainen 2005)(There are a thousand of papers for biological effects of EMF on Pubmed) and given that numerous other environmental parameters are well-known to influence reprogramming by the Yamanaka factors, including Oxygen tension (Yoshida et al. 2009), the presence of Vitamin C (Esteban et al. 2010), among countless other examples.

For individuals such as Brookes and Lowe to immediately discount the validity of the findings without actually attempting to reproduce the central experimental finding is not only non-scientific, but borders on slanderous. We suggest that these individuals take their skepticism to the laboratory bench so that something productive can result from the time they invest prior to their criticizing the work of others.

That "borders on slanderous" part does not do the authors any favors, because it's a rather silly position to take. When you publish a paper, you have opened the floor to critical responses. I'm a medicinal chemist - no one is going to want to let me into their stem cell lab, and I don't blame them. But I'm also familiar with the scientific literature enough to wonder what a paper on this subject is doing in ACS Nano and whether its results are valid. I note that the paper itself states that ". . .this physical energy can affect cell fate changes and is essential for reprogramming to pluripotency."

If it makes the authors feel better, I'll rephrase: their paper claims that iPSCs can be made more efficiently by adding electromagnetic fields to the standard transforming-factor mixture. (And they also claim that canceling out the Earth's magnetic field greatly slows this process down). These are very interesting and surprising results, and my first impulse is to wonder if they're valid. That's my first impulse every time I read something interesting and surprising, by the way, so the authors shouldn't take this personally.

There are indeed many papers in PubMed on the effects of electromagnetic fields on cellular processes. But this area has also been very controversial, and (as an outside observer) my strong impression is that there have been many problems with irreproducibility. I have no doubt that people with expertise in stem cell biology will be taking a look at this report and trying to reproduce it as well, and I am eager to see what happens next.

Comments (27) + TrackBacks (0) | Category: Biological News | The Scientific Literature

October 14, 2014

Electromagnetic Production of Stem Cells? Really?

Posted by Derek

Now this is an odd paper: its subject matter is unusual, where it's published is unusual, and it's also unusual that no one seems to have noticed it. I hadn't, either. A reader sent it along to me: "Electromagnetic Fields Mediate Efficient Cell Reprogramming into a Pluripotent State".

Yep, this paper says that stem cells can be produced from ordinary somatic cells by exposure to electromagnetic fields. Everyone will recall the furor that attended the reports that cells could be reprogrammed by exposure to weak acid baths (and the eventual tragic collapse of the whole business). So why isn't there more noise around this publication?

One answer might be that not many people who care about stem cell biology read ACS Nano, and there's probably something to that. But that immediately makes you wonder why the paper is appearing there to start with, because it's also hard to see how it relates to nanotechnology per se. An uncharitable guess would be that the manuscript made the rounds of several higher profile and/or more appropriate journals, and finally ended up where it is (I have no evidence for this, naturally, but I wouldn't be surprised to hear that this was the case).

So what does the paper itself have to say? It claims that "extremely low frequency electromagnetic fields" can cause somatic cells to transform into pluripotent cells, and that this process is mediated by EMF effects on a particular enzyme, the histone methyltransferase Mll2. That's an H3K4 methyltransferase, and it has been found to be potentially important in germline stem cells and spermatogenesis. Otherwise, I haven't seen anyone suggesting it as a master regulator of stem cell generation, but then, there's a lot that we don't know about epigenetics and stem cells.

There is, however, a lot that we do know about electromagnetism. Over the years, there have been uncountable reports of biological activity for electromagnetic fields. You can go back to the controversy over the effects of power lines in residential areas and the later disputes about the effects of cell phones, just to pick two that have had vast amounts of coverage. The problem is, no one seems to have been able to demonstrate anything definite in any of these cases. As far as I know, studies have either shown no real effects, or (when something has turned up) no one's been able to reproduce it. That goes for laboratory studies as well as for attempts at observational or epidemiological studies: nothing definite, over and over.

There's probably a reason for that. What I have trouble with is the mechanism by which an enzyme gets induced by low-frequency electromagnetic fields, and that's always been the basic argument against such things. You almost have to assume new physics to make a strong connection, because nothing seems to fit: the energies involved are too weak, the absorptions don't match up, and so on. Or at least that's what I thought, but this paper has a whole string of references about how extremely low-frequency electromagnetic fields do all sorts of things to all sorts of cell types. But it's worth noting that the authors also reference papers showing that they're linked to cancer epidemiology, too. It's true, though, that if you do a Pubmed search for "low frequency electromagnetic field" you get a vast pile of references, although I'm really not sure about some of them.

The authors say that the maximum effect in their study was seen at 50 Hz, 1 mT. That is indeed really, really low frequency - the wavelength for a radio signal down there is about 6000 kilometers. Just getting antennas to work in that range is a major challenge, and it's hard for me to picture how subcellular structures could respond to these wavelengths at all. There seem to be all sorts of theories in the literature about how enzyme-level and transcription-level effects might be achieved, but no consensus (from what I can see). Most of the mechanistic discussions I've seen avoid the question entirely - they talk about what enzyme system or signaling pathway might be the "mechanism" for the reported effects, but skip over the big question of how these effects might arise in the first place.
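
To put rough numbers on that "energies are too weak" argument, here's a back-of-the-envelope comparison (my own sketch, not anything from the paper) of a 50 Hz photon and a 1 mT spin interaction against thermal energy at body temperature:

```python
# Rough energy-scale comparison for 50 Hz, 1 mT fields versus thermal noise.
h = 6.626e-34        # Planck constant, J*s
k_B = 1.381e-23      # Boltzmann constant, J/K
mu_B = 9.274e-24     # Bohr magneton, J/T

E_photon = h * 50.0          # energy of a single 50 Hz photon
E_magnetic = mu_B * 1e-3     # electron-spin (Zeeman) energy in a 1 mT field
E_thermal = k_B * 310.0      # kT at roughly body temperature

print(f"50 Hz photon: {E_photon:.1e} J  (~{E_photon/E_thermal:.0e} of kT)")
print(f"1 mT, spin:   {E_magnetic:.1e} J  (~{E_magnetic/E_thermal:.0e} of kT)")
print(f"kT at 310 K:  {E_thermal:.1e} J")
# Both perturbations sit many orders of magnitude below thermal energy,
# which is why a direct mechanism is so hard to picture.
```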

An even odder effect reported in this paper is that the authors also ran these experiments in a setup (a Helmholtz coil) that canceled out the usual environment of the Earth's magnetic field. They found that this worked much less efficiently, and suggest that the natural magnetic field must have epigenetic effects. I don't know what to make of that one, either. Normal cells grown under these conditions showed no effects, so the paper hypothesizes that some part of the pluripotency reprogramming process is exceptionally sensitive. Here, I'll let the authors summarize:

As one of the fundamental forces of nature, the EMF is a physical energy produced by electrically charged objects that can affect the movement of other charged objects in the field. Here we show that this physical energy can affect cell fate changes and is essential for reprogramming to pluripotency. Exposure of cell cultures to EMFs significantly improves reprogramming efficiency in somatic cells. Interestingly, EL-EMF exposure combined with only one Yamanaka factor, Oct4, can generate iPSCs, demonstrating that EL-EMF exposure can replace Sox2, Klf4, and c-Myc during reprogramming. These results open a new possibility for a novel method for efficient generation of iPSCs. Although many chemical factors or additional genes have been reported for the generation of iPSCs, limitations such as integration of foreign genetic elements or efficiency remain a challenge. Thus, EMF-induced cell fate changes may eventually provide a solution for efficient, noninvasive cell reprogramming strategies in regenerative medicine.

Interestingly, our results show that ES cells and fibroblasts themselves are not significantly affected by EMF exposure; rather, cells undergoing dramatic epigenetic changes such as reprogramming seem to be uniquely susceptible to the effects of EMFs. . .

I don't know what to make of this paper, or the whole field of research. Does anyone?

Update: PubPeer is now reporting some problems with images in the paper. Stay, uh, tuned. . .

Comments (36) + TrackBacks (0) | Category: Biological News

October 8, 2014

XKCD on Protein Folding

Posted by Derek

I've been meaning to mention this recent XKCD comic, which is right on target:

[Comic: Protein folding]

"Someone may someday find a harder one", indeed. . .

Comments (25) + TrackBacks (0) | Category: Biological News

The 2014 Chemistry Nobel: Beating the Diffraction Limit

Posted by Derek

This year's Nobel prize in Chemistry goes to Eric Betzig, Stefan Hell, and William Moerner for super-resolution fluorescence microscopy. This was on the list of possible prizes, and has been for several years now (see this comment, which got 2 out of the 3 winners, to my 2009 Nobel predictions post). And it's a worthy prize, since it provides a technique that (1) is useful across a wide variety of fields, from cell biology on through chemistry and into physics, and (2) does so by doing what many people would, at one time, have said was impossible.

The impossible part is beating the diffraction limit. That was first worked out by Abbe in 1873, and it set what looked like a physically impassable limit to the resolution of optical microscopy. Half the wavelength of the light you're using is as far as you can go, and (unfortunately) that means that you can't optically resolve viruses, many structures inside the cell, and especially nothing as small as a protein molecule. (As an amateur astronomer, I can tell you that the same limits naturally apply to telescope optics, too: even under perfect conditions, there's a limit to how much you can resolve at a given wavelength, which is why even the Hubble telescope can't show you Neil Armstrong's footprint on the moon). In any optical system, you're doing very well if the diffraction limit is the last thing holding you back, but hold you back it will.
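
For a sense of scale, here's the Abbe limit worked out for green light and a good oil-immersion objective (illustrative numbers; the exact figure depends on the optics):

```python
# Abbe diffraction limit: d = lambda / (2 * NA)
wavelength_nm = 550.0        # green light
numerical_aperture = 1.4     # a good oil-immersion objective

d_nm = wavelength_nm / (2 * numerical_aperture)
print(f"Resolution limit: ~{d_nm:.0f} nm")   # ~196 nm
# For comparison: many viruses are tens of nm across, a ribosome ~25 nm,
# and a single protein just a few nm - all below this limit.
```
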
[Image: STED schematic]
There are several ways to try to sneak around this problem, but the techniques that won this morning are particularly good ones. Stefan Hell worked out an ingenious method called stimulated emission depletion (STED) microscopy. If you have some sort of fluorescent label on a small region of a sample, you get it to glow, as usual, by shining a particular wavelength of light on it. The key for STED is that if another particular wavelength of light is used at the same time, you can cause the resulting fluorescence to shift. Physically, fluorescence results when electrons get excited by light, and then relax back to where they were by emitting a different (longer) wavelength. If you stimulate those electrons by catching them once they're already excited by the first light, they fall back into a higher vibrational state than they would otherwise, which means less of an energy gap, which means less energetic light is emitted - it's red-shifted compared to the usual fluorescence. Pour enough of that second stimulating light into the system after the first excitation, and you can totally wipe out the normal fluorescence.

And that's what STED does. It uses the narrowest possible dot of "normal" excitation in the middle, and surrounds that with a doughnut shape of the second suppressing light. Scanning this bull's-eye across the sample gives you better-than-diffraction-limit imaging for your fluorescent label. Hell's initial work took several years just to realize the first images, but the microscopists have jumped on the idea over the last fifteen years or so, and it's widely used, with many variations (multiple wavelength systems at the same time, high frames-per-second rigs for recording video, and so on). Shown below is a STED image of a labeled neurofilament compared to the previous state of the art. You'd think that this would be an obvious and stunning breakthrough that would speak for itself, but Hell himself is glad to point out that his original paper was rejected by both Nature and Science.
[Image: STED image of a labeled neurofilament vs. the previous state of the art]
You can, in principle, make the excitation spot as small as you wish (more on this in the Nobel Foundation's scientific background on the prize here). In practice, the intensity of the light needed as you push to higher and higher resolution tends to lead to photobleaching of the fluorescent tags and to damage in the sample itself, but getting around these limits is also an active field of research. As it stands, STED already provides excellent and extremely useful images of all sorts of samples - many of those impressive fluorescence microscopy shots of glowing cells are produced this way.

The other two winners of the prize worked on a different, but related technique: single-molecule microscopy. Back in 1989, Moerner's lab was the first to be able to spectroscopically distinguish single molecules outside the gas phase - pentacene, embedded in crystals of another aromatic hydrocarbon (terphenyl), down around liquid helium temperatures. Over the next few years, a variety of other groups reported single-molecule studies in all sorts of media, which meant that something that would have been thought crazy or impossible when someone like me was in college was now popping up all over the literature.

But as the Nobel background material rightly states, there are some real difficulties with doing single-molecule spectroscopy and trying to get imaging resolution out of it. The data you get from a single fluorescent molecule is smeared out in a Gaussian (or pretty much Gaussian) blob, but you can (in theory) work back from that to where the single point must have been to give you that data. But to do that, the fluorescent molecules have to be scattered further apart than that diffraction limit. Fine, you can do that - but that's too far apart to reconstruct a useful image (Shannon and Nyquist's sampling theorem in information theory sets that limit).
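
The reason you can work back from the blob at all is statistical: the center of a Gaussian spot can be pinned down far more precisely than its width, roughly as the spot width divided by the square root of the number of photons collected. A minimal simulation of that scaling (illustrative only, not the actual PALM fitting pipeline):

```python
# Localization precision scales roughly as sigma_PSF / sqrt(N photons).
# Simulate photons from one emitter and estimate its position by the mean.
import numpy as np

rng = np.random.default_rng(0)
sigma_psf = 120.0   # nm, width of the diffraction-limited spot
true_x = 0.0        # nm, actual emitter position

for n_photons in (100, 1000, 10000):
    trials = [rng.normal(true_x, sigma_psf, n_photons).mean() for _ in range(2000)]
    print(f"N = {n_photons:5d}: localization error ~ {np.std(trials):.1f} nm "
          f"(theory {sigma_psf / np.sqrt(n_photons):.1f} nm)")
# With 10,000 photons the center is pinned down to about a nanometer,
# even though the spot itself is hundreds of nanometers wide.
```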

Betzig himself took a pretty unusual route to his discovery that gets around this problem. He'd been a pioneer in another high-resolution imaging technique, near-field microscopy, but that one was such an impractical beast to realize that it drove him out of the field for a while. (Plenty of work continues in that area, though, and perhaps it'll eventually spin out a Nobel of its own). As this C&E News article from 2006 mentions, he. . .took some time off:

After a several-year stint in Michigan working for his father's machine tool business, Betzig started getting itchy again a few years ago to make a mark in super-resolution microscopy. The trick, he says, was to find a way to get only those molecules of interest within a minuscule field of view to send out enough photons in such a way that would enable an observer to precisely locate the molecules. He also hoped to figure out how to watch those molecules behave and interact with other proteins. After all, says Betzig, "protein interactions are what make life."

Betzig, who at the time was a scientist without a research home, knew also that interactions with other researchers almost always are what it takes these days to make significant scientific or technological contributions. Yet he was a scientist-at-large spending lots of time on a lakefront property in Michigan, often in a bathing suit. Through a series of both deliberate and accidental interactions in the past two years with scientists at Columbia University, Florida State University, and the National Institutes of Health, Betzig was able to assemble a collaborative team and identify the technological pieces that he and Hess needed to realize what would become known as PALM.

He and Hess actually built the first instrument in Hess's living room, according to the article. The key was to have a relatively dense field of fluorescent molecules, but to only have a sparse array of them emitting at any one time. That way you can build up enough information for a detailed picture through multiple rounds of detection, and satisfy both limits at the same time. Even someone totally outside the field can realize that this was a really, really good plan. Betzig describes very accurately the feeling that a scientist gets when an idea like this hits: it seems so simple, and so obvious, that you're sure that everyone else in the field must have been hit by it at the same time, or will be in the next five minutes or so. In this case, he wasn't far off: several other groups were working on similar schemes while he and Hess were commandeering space in that living room. (Here's a video of Hess and Betzig talking about their collaboration).
[Image: figure from the 2006 Science PALM paper]
Shown here is what the technique can accomplish - this is from the 2006 paper in Science that introduced it to the world. Panel A is a section of a lysosome, with a labeled lysosomal protein. You can say that yep, the protein is in the outer walls of that structure (and not so many years ago, that was a lot to be able to say right there). But panel B is the same image done through Betzig's technique, and holy cow. Take a look at that small box near the bottom of the panel - that's shown at higher magnification in panel D, and the classic diffraction limit isn't much smaller than that scale bar. As I said earlier, if you'd tried to sell people on an image like this back in the early 1990s, they'd probably have called you a fraud. It wasn't thought possible.

The Betzig technique is called PALM, and the others that came along at nearly the same time are STORM, fPALM, and PAINT. These are still being modified all over the place, and other techniques like total internal reflection fluorescence (TIRF) are providing high resolution as well. As was widely mentioned when green fluorescent protein was the subject of the 2008 Nobel, we are currently in a golden (and green, and red, and blue) age of cellular and molecular imaging. (Here's some of Betzig's recent work, for illustration). It's wildly useful, and today's prize was well deserved.

Comments (42) + TrackBacks (0) | Category: Biological News | Chemical Biology | Chemical News

September 16, 2014

Update on Alnylam (And the Direction of Things to Come)

Posted by Derek

Here's a look from Technology Review at the resurgent fortunes of Alnylam and RNA interference (which I blogged about here).

But now Alnylam is testing a drug to treat [familial amyloid polyneuropathy] in advanced human trials. It’s the last hurdle before the company will seek regulatory approval to put the drug on the market. Although it’s too early to tell how well the drug will alleviate symptoms, it’s doing what the researchers hoped it would: it can decrease the production of the protein that causes FAP by more than 80 percent.

This could be just the beginning for RNAi. Alnylam has more than 11 drugs, including ones for hemophilia, hepatitis B, and even high cholesterol, in its development pipeline, and has three in human trials —progress that led the pharmaceutical company Sanofi to make a $700 million investment in the company last winter. Last month, the pharmaceutical giant Roche, an early Alnylam supporter that had given up on RNAi, reversed its opinion of the technology as well, announcing a $450 million deal to acquire the RNAi startup Santaris. All told, there are about 15 RNAi-based drugs in clinical trials from several research groups and companies.

“The world went from believing RNAi would change everything to thinking it wouldn’t work, to now thinking it will,” says Robert Langer, a professor at MIT, and one of Alnylam’s advisors.

Those Phase III results will be great to see - that's the real test of a technology like this one. A lot of less daring ideas have fallen over when exposed to that much of a reality check. If RNAi really has turned the corner, though, I think it could well be just the beginning of a change coming over the pharmaceutical industry. Biology might be riding over the hill, after an extended period of hearing hoofbeats and seeing distant clouds of dust.

There was a boom in this sort of thinking during the 1980s, in the early days of Genentech and Biogen (and others long gone, like Cetus). Proteins were going to conquer the world, with interferon often mentioned as the first example of what was sure to be a horde of new drugs. Then in the early 1990s there was a craze for antisense, which was going to remake the whole industry. Antibodies, though, were surely a big part of the advance scouting party - many people are still surprised when they see how many of the highest-grossing drugs are antibodies, even though they're often for smaller indications.

And the hype around RNA therapies did reach a pretty high level a few years ago, but this (as Langer's quote above says) was followed by a nasty pullback. If it really is heading for the big time, then we should all be ready for some other techniques to follow. Just as RNAi built on the knowledge gained during the struggle to realize antisense, you'd have to think that Moderna's mRNA therapy ideas have learned from the RNAi people, and that the attempts to do CRISPR-style gene editing in humans have the whole biologic therapy field to help them out. Science does indeed march on, and we might possibly be getting the hang of some of these things.

And as I warned in that last link, that means we're in for some good old creative destruction in this industry if that happens. Some small-molecule ideas are going to go right out the window, and following them (through a much larger window) could be the whole rare-disease business model that so many companies are following these days. Many of those rare diseases are just the sorts of things that could be attacked more usefully at their root cause via genomic-based therapies, so if those actually start to work, well. . .

This shouldn't be news to anyone who's following the field closely, but these things move slowly enough that they have a way of creeping up on you unawares. Come back in 25 years, and the therapeutic landscape might be a rather different-looking place.

Comments (18) + TrackBacks (0) | Category: Biological News | Business and Markets | Clinical Trials | Drug Development

August 14, 2014

Proteins Grazing Against Proteins

Posted by Derek

A huge amount of what's actually going on inside living cells involves protein-protein interactions. Drug discovery, for obvious reasons, focuses on the processes that depend on small molecules and their binding sites (thus the preponderance of receptor ligands and enzyme inhibitors), but small molecules are only part of the story in there.

And we've learned a fair amount about all this protein-protein deal-making, but there's clearly a lot that we don't understand at all. If we did, perhaps we'd have more compounds that can target them. Here's a very basic topic about which we know very little: how tight are the affinities between all these interacting proteins? What's the usual level, and what's the range? What does the variation in binding constants say about the signaling pathways involved, and the sorts of binding surfaces that are being presented? How long do these protein complexes last? How weak can one of these interactions be, and still be physiologically important?

A new paper has something to say about that last part. The authors have found a bacterial system where protein phosphorylation takes place effectively although the affinity between the two partners (KD) is only around 25 millimolar. That's very weak indeed - for those outside of drug discovery, small-molecule drugs typically bind about a million times more tightly, with KD values down in the nanomolar range. We don't know how common or important such weak interactions are, but this work suggests that we're going to have to look pretty far up the scale in order to understand things, and that's probably going to require new technologies to quantify such things. Unless we figure out that huge, multipartner protein dance that's going on, with all its moves and time signatures, we're not going to understand biochemistry. The Labanotation for a cell would be something to see. . .
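
To put that 25 millimolar figure in perspective, here's a quick occupancy calculation using the standard 1:1 binding hyperbola (nothing specific to this paper; the 25 nM comparison value is just a generic drug-like affinity):

```python
# Fraction of a protein bound by a partner at concentration [L],
# for simple 1:1 binding with the partner in excess: bound = [L] / (Kd + [L])
def fraction_bound(conc_M, kd_M):
    return conc_M / (kd_M + conc_M)

kd_weak = 25e-3    # 25 mM, the bacterial phosphorylation pair in this paper
kd_drug = 25e-9    # 25 nM, a generic drug-like affinity for comparison

for conc in (1e-6, 1e-4, 1e-2):
    print(f"[partner] = {conc:.0e} M: "
          f"weak pair {fraction_bound(conc, kd_weak):.1%} bound, "
          f"drug-like {fraction_bound(conc, kd_drug):.1%} bound")
# The 25 mM interaction needs partner concentrations up in the millimolar
# range before much complex forms at all - one reason such weak pairings
# are so easy to miss with standard techniques.
```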

Comments (4) + TrackBacks (0) | Category: Biological News | Chemical Biology

July 18, 2014

Thalidomide, Bound to Its Target

Posted by Derek

There's a new report in the literature on the mechanism of thalidomide, so I thought I'd spend some time talking about the compound. Just mentioning the name to anyone familiar with its history is enough to bring on a shiver. The compound, administered as a sedative/morning sickness remedy to pregnant women in the 1950s and early 1960s, famously brought on a wave of severe birth defects. There's a lot of confusion about this event in the popular literature, though - some people don't even realize that the drug was never approved in the US, although this was a famous save by the (then much smaller) FDA and especially by Frances Oldham Kelsey. And even those who know a good amount about the case can be confused by the toxicology, because it's confusing: no phenotype in rats, but big reproductive tox trouble in mice and rabbits (and humans, of course). And as I mentioned here, the compound is often used as an example of the far different effects of different enantiomers. But practically speaking, that's not the case: thalidomide has a very easily racemized chiral center, which gets scrambled in vivo. It doesn't matter if you take the racemate or a pure enantiomer; you're going to get both of the isomers once it's in circulation.

The compound's horrific effects led to a great deal of research on its mechanism. Along the way, thalidomide itself was found to be useful in the treatment of leprosy, and in recent years it's been approved for use in multiple myeloma and other cancers. (This led to an unusual lawsuit claiming credit for the idea). It's a potent anti-angiogenic compound, among other things, although the precise mechanism is still a matter for debate - in vivo, the compound has effects on a number of wide-ranging growth factors (and these were long thought to be the mechanism underlying its effects on embryos). Those embryonic effects complicate the drug's use immensely - Celgene, who got it through trials and approval for myeloma, have to keep a very tight patient registry, among other things, and control its distribution carefully. Experience has shown that turning thalidomide loose will always end up with someone (i.e. a pregnant woman) getting exposed to it who shouldn't be - it's gotten to the point that the WHO no longer recommends it for use in leprosy treatment, despite its clear evidence of benefit, and it's down to just those problems of distribution and control.

But in 2010, it was reported that the drug binds to a protein called cereblon (CRBN), and this mechanism implicated the ubiquitin ligase system in the embryonic effects. That's an interesting and important pathway - ubiquitin is, as the name implies, ubiquitous, and addition of a string of ubiquitins to a protein is a universal disposal tag in cells: off to the proteasome, to be torn to bits. It gets stuck onto exposed lysine residues by the aforementioned ligase enzyme.

But less-thorough ubiquitination is part of other pathways. Other proteins can have ubiquitin recognition domains, so there are signaling events going on. Even poly-ubiquitin chains can be part of non-disposal processes - the usual oligomers are built up using a particular lysine residue on each ubiquitin in the chain, but there are other lysine possibilities, and these branch off into different functions. It's a mess, frankly, but it's an important mess, and it's been the subject of a lot of work over the years in both academia and industry.

The new paper has the crystal structure of thalidomide (and two of its analogs) bound to the ubiquitin ligase complex. It looks like they keep one set of protein-protein interactions from occurring while the ligase end of things is going after other transcription factors to tag them for degradation. Ubiquitination of various proteins could be either up- or downregulated by this route. Interestingly, the binding is indeed enantioselective, which suggests that the teratogenic effects may well be down to the (S) enantiomer, not that there's any way to test this in vivo (as mentioned above). But the effects of these compounds in myeloma appear to go through the cereblon pathway as well, so there's never going to be a thalidomide-like drug without reproductive tox. If you could take it a notch down the pathway and go for the relevant transcription factors instead, post-cereblon, you might have something, but selective targeting of transcription factors is a hard row to hoe.

Comments (9) + TrackBacks (0) | Category: Analytical Chemistry | Biological News | Cancer | Chemical News | Toxicology

July 17, 2014

TDP-43 and Alzheimer's

Posted by Derek

There are quite a few headlines today about a link between Alzheimer's and a protein called TDP-43. This is interesting stuff, but like everything else in the neurodegeneration field, it's going to be tough to unravel what's going on. This latest work, just presented at a conference in Copenhagen, found (in a large post mortem brain study of people with diagnosed Alzheimer's pathology) that aberrant forms of the protein seem to be strongly correlated with shrinkage of the hippocampus and accompanying memory loss.

80% of the cohort with normal TDP-43 (but still showing Alzheimer's histology) had cognitive impairment at death, but 98% of the ones with TDP-43 mutations had such signs. That says several things: (A) it's possible to have classic Alzheimer's without mutated TDP-43, (B) it's possible to have classic Alzheimer's tissue pathology (up to a point, no doubt) without apparent cognitive impairment, and (C) it's apparently possible (although very unlikely) to have mutated TDP-43, show Alzheimer's pathology as well, and still not be diagnosed as cognitively impaired. Welcome to neurodegeneration. Correlations and trends are mostly what you get in that field, and you have to make of them what you can.

TDP-43, though, has already been implicated, for some years now, in ALS and several other syndromes, so it really does make sense that it would be involved. It may be that it's disproportionately a feature of more severe Alzheimer's cases, piling on to some other pathology. Its mechanism of action is not clear yet - as mentioned, it's a transcription factor, so it could be involved in stuff from anywhere and everywhere. It does show aggregation in the disease state, but that Cell paper linked to above makes the case that it's not the aggregates per se that are the problem, but the loss of function behind them (for example, there are increased amounts of the mutant protein out in the cytoplasm, rather than in the nucleus). What those lost functions are, though, remains to be discovered.

Comments (2) + TrackBacks (0) | Category: Alzheimer's Disease | Biological News

July 14, 2014

Modifying Red Blood Cells As Carriers

Posted by Derek

What's the best carrier to take some sort of therapeutic agent into the bloodstream? That's often a tricky question to work out in animal models or in the clinic - there are a lot of possibilities. But what about using red blood cells themselves?

That idea has been in the works for a few years now, but there's a recent paper in PNAS reporting on more progress (here's a press release). Many drug discovery scientists will have encountered the occasional compound that partitions into erythrocytes all by itself (those are usually spotted by their oddly long half-lives after in vivo dosing, mimicking the effect of plasma protein binding). One of the early ways that people tried to do this deliberately was forcing a compound into the cells, but this tends to damage them and make them quite a bit less useful. A potentially more controllable method would be to modify the surfaces of the RBCs themselves to serve as drug carriers, but that's quite a bit more complex, too. Antibodies have been tried for this, but with mixed success.

That's what this latest paper addresses. The authors (the Lodish and Ploegh groups at Whitehead/MIT) introduce modified surface proteins (such as glycophorin A) that are substrates for Ploegh's sortase technology (two recent overview papers), which allows for a wide variety of labeling.

Experiments using modified fetal cells in irradiated mice gave animals that had up to 50% of their RBCs modified in this way. Sortase modification of these was about 85% effective, so plenty of label can be introduced. The labeling process doesn't appear to affect the viability of the cells very much as compared to wild-type - the cells were shown to circulate for weeks, which certainly breaks the records held by the other modified-RBC methods.

The team attached either biotin tags or specific antibodies to both mouse and human RBCs, which would appear to clear the way for a variety of very interesting experiments. (They also showed that simultaneous C- and N-terminal labeling is feasible, to put on two different tags at once). Here's the "coming attractions" section of the paper:

The approach presented here has many other possible applications; the wide variety of possible payloads, ranging from proteins and peptides to synthetic compounds and fluorescent probes, may serve as a guide. We have conjugated a single-domain antibody to the RBC surface with full retention of binding specificity, thus enabling the modified RBCs to be targeted to a specific cell type. We envision that sortase-engineered cells could be combined with established protocols of small-molecule encapsulation. In this scenario, engineered RBCs loaded with a therapeutic agent in the cytosol and modified on the surface with a cell type-specific recognition module could be used to deliver payloads to a precise tissue or location in the body. We also have demonstrated the attachment of two different functional probes to the surface of RBCs, exploiting the subtly different recognition specificities of two distinct sortases. Therefore it should be possible to attach both a therapeutic moiety and a targeting module to the RBC surface and thus direct the engineered RBCs to tumors or other diseased cells. Conjugation of an imaging probe (i.e., a radioisotope), together with such a targeting moiety also could be used for diagnostic purposes.

This will be worth keeping an eye on, for sure, both as a new delivery method for small (and not-so-small) molecules and biologics, and for its application to all the immunological work going on now in oncology. This should keep everyone involved busy for some time to come!

Comments (7) + TrackBacks (0) | Category: Biological News | Chemical Biology | Pharmacokinetics

July 8, 2014

An Alzheimer's Blood Test? Not So Fast.

Posted by Derek

There are all sorts of headlines today about how there's going to be a simple blood test for Alzheimer's soon. Don't believe them.

This all comes from a recent publication in the journal Alzheimer's and Dementia, from a team at King's College (London) and the company Proteome Sciences. It's a perfectly good paper, and it does what you'd think: they quantified a set of proteins in a cohort of potential Alzheimer's patients and checked to see if any of them were associated with progression of the disease. From 26 initial protein candidates (all of them previously implicated in Alzheimer's), they found that a panel of ten seemed to give a prediction that was about 87% accurate.

That figure was enough for a lot of major news outlets, who have run with headlines like "Blood test breakthrough" and "Blood test can predict Alzheimer's". Better ones said something more like "Closer to blood test" or "Progress towards blood test", but that's not so exciting and clickable, is it? This paper may well represent progress towards a blood test, but as its own authors, to their credit, are at pains to say, a lot more work needs to be done. 87%, for starters, is interesting, but not as good as it needs to be - that's still a lot of false negatives, and who knows how many false positives.

That all depends on what the rate of Alzheimer's is in the population you're screening. As Andy Extance pointed out on Twitter, these sorts of calculations are misunderstood by almost everyone, even by people who should know better. A 90 per cent accurate test run on a general population whose Alzheimer's incidence rate is 1% would, in fact, give positive results that are wrong 92% of the time. Here's a more detailed writeup I did in 2007, spurred by reports of a similar Alzheimer's diagnostic back then. And if you have a vague feeling that you heard about all these issues (and another blood test) just a few months ago, you're right.
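
For anyone who wants to see where that 92% comes from, here's the arithmetic, assuming (for illustration) 90% sensitivity, 90% specificity, and a 1% prevalence:

```python
# Base-rate arithmetic: what fraction of positive results are wrong?
population = 100_000
prevalence = 0.01      # 1% of the screened population actually has the disease
sensitivity = 0.90     # 90% of true cases test positive
specificity = 0.90     # 90% of healthy people test negative

sick = population * prevalence
healthy = population - sick
true_pos = sick * sensitivity
false_pos = healthy * (1 - specificity)

ppv = true_pos / (true_pos + false_pos)
print(f"Positive results that are correct: {ppv:.1%}")      # ~8.3%
print(f"Positive results that are wrong:   {1 - ppv:.1%}")  # ~91.7%, i.e. about 92%
```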

Even after that statistical problem, things are not as simple as the headlines would have you believe. This new work is a multivariate model, because a number of factors were found to affect the levels of these proteins. The age and gender of the patient were two real covariates, as you'd expect, but the duration of plasma storage before testing also had an effect, as did, apparently, the center where the collection was done. That does not sound like a test that's ready to be rolled out to every doctor's office (which is again what the authors have been saying themselves). There were also different groups of proteins that could be used for a prediction model using the set of Mild Cognitive Impairment (MCI) patients, versus the ones that already appeared to show real Alzheimer's signs, which also tells you that this is not a simple turn-the-dial-on-the-disease setup. Interestingly, they also looked at whether adding brain imaging data (such as hippocampus volume) helped the prediction model. This, though, either had no real effect on the prediction accuracy, or even reduced it somewhat.

So the thing to do here is to run this on larger patient cohorts to get a more real-world idea of what the false negative and false positive rates are, which is the sort of obvious suggestion that is appearing in about the sixth or seventh paragraph of the popular press writeups. This is just what the authors are planning, naturally - they're not the ones who wrote the newspaper stories, after all. This same collaboration has been working on this problem for years now, I should add, and they've had ample opportunity to see their hopes not quite pan out. Here, for example, is a prediction of an Alzheimer's blood test entering the clinic in "12 to 18 months", from . . .well, 2009.

Update: here's a critique of the statistical approaches used in this paper - are there more problems with it than were first apparent?

Comments (32) + TrackBacks (0) | Category: Alzheimer's Disease | Analytical Chemistry | Biological News

June 12, 2014

Amylin Fibrils and Aspirin: No Connection

Posted by Derek

There have been reports that the classic oral antiinflammatory drugs (such as aspirin and ketoprofen) might slow the progress of diabetes. The rationale is that they seem, at physiological concentrations, to inhibit the formation of amyloid-type fibrils in islet cells, and that fibril formation is thought to be part of the islet cells' loss of function.

But a new paper in ACS Chemical Biology seems to demolish that whole idea, and it's a good thing for anyone studying protein aggregation to have a look at. The authors (from NYU and Stony Brook) do a thorough job on the protein behavior, studying it by a wider range of techniques than the previous work (right-angle light scattering, transmission electron microscopy (TEM), and others). Earlier work relied on CD spectra (which may have been misinterpreted) and Congo Red staining, which is a classic method for detecting amyloids of various origins, but is also subject to a lot of problems. This latest paper finds no effect for the anti-inflammatories, and suggests that the earlier results are experimental artifacts. TEM is apparently the way to go if you really want to be sure that you're looking at fibrils.

Protein aggregation is definitely not my field, but I was glad to take a look at this paper anyway. It highlights the trickiness of working with these things, and shows how easy it is to get fooled by apparently positive results. These are lessons that apply to plenty of other areas of research, and we could all benefit by keeping them in mind. Whack your hypothesis as hard as you can, with as many experimental techniques as you can. If it breaks apart, that's too bad - but you're better off knowing that, and so is everyone else.

Comments (5) + TrackBacks (0) | Category: Biological News

June 2, 2014

Single-Cell Compound Measurements - Now In A Real Animal

Posted by Derek

[Image: fluorescence imaging of labeled olaparib in cells]
Last year I mentioned an interesting paper that managed to do single-cell pharmacokinetics on olaparib, a poly(ADP-ribose) polymerase 1 (PARP1) inhibitor. A fluorescently-tagged version of the drug could be spotted moving into cells and even accumulating in the nucleus. The usual warnings apply: adding a fluorescent tag can disturb the various molecular properties that you're trying to study in the first place. But the paper did a good set of control experiments to try to get around that problem, and this is still the only way known (for now) to get such data.

The authors are back with a follow-up paper that provides even more detail. They're using fluorescence polarization/fluorescence anisotropy microscopy. That can be a tricky technique, but done right, it provides a lot of information. The idea (as the assay-development people in the audience well know) is that when fluorescent molecules are excited by polarized light, their emission is affected by how fast they're rotating. If the rotation is slowed down so that it takes longer than the fluorescence lifetime of the molecules (as happens when they're bound to a protein), then you see more polarization in the emitted light, but if the molecules are tumbling around freely, that's mostly lost. There are numerous complications - you need to standardize each new system according to how much things change in increasingly viscous solutions, the fluorophores can't get too close together, you have to be careful with the field of view in your imaging system to avoid artifacts - but that's the short form.
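
For the record, the quantity being imaged is the steady-state anisotropy, r = (I∥ - I⊥)/(I∥ + 2I⊥), and its dependence on tumbling is captured by the Perrin equation. A toy calculation (the lifetime and correlation times below are generic assumptions, not values from the paper):

```python
# Perrin equation: r = r0 / (1 + tau / theta)
# tau = fluorescence lifetime, theta = rotational correlation time.
# Slow tumbling (probe bound to a big protein) keeps r near r0;
# fast tumbling (free probe) collapses it toward zero.
r0 = 0.4          # limiting anisotropy for a typical one-photon dye
tau_ns = 4.0      # assumed fluorescence lifetime, ns

for label, theta_ns in [("free dye (~0.2 ns)", 0.2),
                        ("drug-sized conjugate (~1 ns)", 1.0),
                        ("bound to a ~100 kDa protein (~50 ns)", 50.0)]:
    r = r0 / (1 + tau_ns / theta_ns)
    print(f"{label:38s} r = {r:.2f}")
# Binding to PARP1 slows rotation enough that the emission stays polarized,
# which is the contrast that the anisotropy images pick up.
```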

In this case, they're using near-IR light to do the excitation, because those wavelengths are known to penetrate living cells well. Their system also needs two photons to excite each molecule, which improves signal-to-noise, and the two-photon dye is a BODIPY compound. These things have been used in fluorescence studies with wild abandon for the past few years - at one point, I was beginning to think that the acronym was a requirement to get a paper published in Chem. Comm. They have a lot of qualities (cell penetration, fluorescence lifetime, etc.) that make them excellent candidates for this kind of work.

This is the same olaparib/BODIPY hybrid used in the paper last year, and you see the results. The green fluorescence is nonspecific binding, while the red is localized to the nuclei, and doesn't wash out. If you soak the cells with unlabeled olaparib beforehand, though, you don't see this effect at all, which also argues for the PARP1-bound interpretation of these results. This paper takes things even further, though - after validating this in cultured cells, they moved on to live mice, using an implanted window chamber over a xenograft.

And they saw the same pattern: quick cellular uptake of the labeled drug on infusion into the mice, followed by rapid binding to nuclear PARP1. The intracellular fluorescence then cleared out over a half-hour period, but the nuclear-bound compound remained, and could be observed with good signal/noise. This is the first time I've seen an experiment like this. Although it's admittedly a special case (which takes advantage of a well-behaved fluorescently labeled drug conjugate, to name one big hurdle), it's a well-realized proof of concept. Anything that increases the chances of understanding what's going on with small molecules in real living systems is worth paying attention to. It's interesting to note, by the way, that the olaparib/PARP1 system was also studied in that recent whole-cell thermal shift assay technique, which does not need modified compounds. Bring on the comparisons! These two techniques can be used to validate each other, and we'll all be better off.

Comments (4) + TrackBacks (0) | Category: Biological News | Chemical Biology | Pharmacokinetics

No More Acid Stem Cells

Posted by Derek

In case you hadn't seen it, the "acid-washed stem cells" business has gone as far into the dumper as it can possibly go. It now appears that the whole thing was a fraud, from start to finish - if that's not the case, I'll be quite surprised, anyway. The most senior author of the (now retracted) second paper, Teruhiko Wakayama, has said that he doesn't believe its results:

The trigger, he told Bioscience, was his discovery—which he reported to Riken a few weeks ago--that two key photos in the second paper were wrong. Obokata, lead author on both papers, had in April been found by Riken guilty of misconduct on the first paper: the falsification of a gel electrophoresis image proving her starting cells were mature cells, and the fabrication of images proving resulting STAP stem cells could form the three major tissue types of the body.

But Riken had not yet announced serious problems with the second paper.

Last week, however, there was a flurry of activity in the Japanese press, as papers reported that two photos—supposed to show placenta made from STAP cells, next to placenta made from embryonic stem (ES) cells—were actually photos of the same mouse placenta.

As with so many cases before this one, we now move on (as one of Doris Lessing's characters once put it) to having interesting thoughts about the psychology of lying. How and why someone does this sort of thing is, I'm relieved to say, apparently beyond me. The only way I can remotely see it is if these results were something that a person thought were really correct, but just needed a bit more work, which would be filled in in time to salvage everything. But how many times have people thought that? And how does it always seem to work out? I'm back to being baffled. The stem cell field has attracted its share of mentally unstable people, and more.

Comments (13) + TrackBacks (0) | Category: Biological News | The Dark Side | The Scientific Literature

May 28, 2014

The Science Chemogenomics Paper is Revised

Email This Entry

Posted by Derek

The Science paper on chemogenomic signatures that I went on about at great length has been revised. Figure 2, which drove me and every other chemist who saw it up the wall, has been completely reworked:

To improve clarity, the authors revised Fig. 2 by (i) illustrating the substitution sites of fragments; (ii) labeling fragments numerically for reference to supplementary materials containing details about their derivation; and (iii) representing the dominant tautomers of signature compounds. The authors also discovered an error in their fragment generation software that, when corrected, resulted in slightly fewer enriched fragments being identified. In the revised Fig. 2, they removed redundant substructures and, where applicable, illustrated larger substructures containing the enriched fragment common among signature compounds.

Looking it over in the revised version, it is indeed much improved. The chemical structures now look like chemical structures, and some of the more offensive "pharmacophores" (like tetrahydrofuran) have now disappeared. Several figures and tables have been added to the supplementary material to highlight where these fragments are in the active compounds (Figure S25, an especially large addition), and to cross-index things more thoroughly.

So the most teeth-gritting parts of the paper have been reworked, and that's a good thing. I definitely appreciate the work that the authors have put into making the work more accurate and interpretable, although these things really should have been caught earlier in the process.

Looking over the new Figure S25, though, you can still see what I think are the underlying problems with the entire study. That's the one where "Fragments that are significantly enriched in specific sets of signature compounds (FDR ≤ 0.1 and signature compounds fraction ≥ 0.2) are highlighted in blue within the relevant signature compounds. . .". It's a good idea to put something like that in there, but the annotations are a bit odd. For example, the compounds flagged as "6_cell wall" have their common pyridines highlighted, even though there's a common heterocyclic core that all but one of those pyridines are attached to (it varies only by alkyl substituents). That single outlier compound seems to be the reason that the whole heterocycle isn't colored in - but there are plenty of other monosubstituted pyridines on the list that have completely different signatures, so it's not like "monosubstituted pyridine" carries much weight. Meanwhile, the next set ("7_cell wall") has more of the exact same series of heterocycles, but in this case, it's just the core heterocycle that's shaded in. That seems to be because one of them is a 2-substituted isomer, while the others are all 3-substituted, so the software just ignores them in favor of coloring in the central ring.

The same thing happens with "8_ubiquinone biosynthesis and proteosome". What gets shaded in is an adamantane ring, even though every single one of the compounds is also a Schiff base imine (which is a lot more likely to be doing something than the adamantane). But that functional group gets no recognition from the software, because some of the aryl substitution patterns are different. One could just as easily have colored in the imine, though, which is what happens with the next category ("9_ubiquinone biosynthesis and proteosome"), where many of the same compounds show up again.

I won't go into more detail; the whole thing is like this. Just one more example: "12_iron homeostasis" features more monosubstituted pyridines being highlighted as the active fragment. But look at the list: there are 3-aminopyridine pieces, 4-aminomethylpyridines, 3-carboxypyridines, all of them substituted with all kinds of stuff. The only common thread, according to the annotation software, is "pyridine", but those are, believe me, all sorts of different pyridines. (And as the above example shows, it's not like pyridines form some sort of unique category in this data set, anyway).
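
For anyone wondering what an enrichment call like the one quoted above (FDR ≤ 0.1, signature fraction ≥ 0.2) actually involves, here's a minimal sketch of the usual recipe: a per-fragment contingency test, followed by a multiple-testing correction before the FDR cutoff is applied. The numbers are invented, and this is not the authors' actual pipeline:

    from scipy.stats import fisher_exact

    def fragment_enrichment(frag_in_sig, sig_size, frag_in_lib, lib_size):
        # 2x2 table: has-fragment / lacks-fragment vs. in-signature / rest of library
        in_sig_without = sig_size - frag_in_sig
        rest_with = frag_in_lib - frag_in_sig
        rest_without = (lib_size - sig_size) - rest_with
        _, p = fisher_exact([[frag_in_sig, in_sig_without],
                             [rest_with, rest_without]], alternative="greater")
        return p, frag_in_sig / sig_size   # raw p-value and signature fraction

    # say 6 of 20 signature compounds carry the fragment, and 40 of 3200 overall do
    p, frac = fragment_enrichment(6, 20, 40, 3200)
    print(p, frac)   # the p-values for all fragments would then get a
                     # Benjamini-Hochberg correction before the FDR <= 0.1 cutoff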

So although the most eye-rolling features of this work have been cleaned up, the underlying medicinal chemistry is still pretty bizarre, at least to anyone who knows any medicinal chemistry. I hate to be this way, but I still don't see anyone getting an awful lot of use out of this.

Comments (6) + TrackBacks (0) | Category: Biological News | Chemical Biology | Chemical News | The Scientific Literature

May 12, 2014

DMSO Will Ruin Your Platinum Drugs. Take Heed.

Email This Entry

Posted by Derek

Here's the sort of experimental detail that can destroy a whole project if you're not aware of it. The platinum chemotherapy drugs are an odd class of things compared to the more typical organic compounds, but it's for sure that many of the people using them in research aren't aware of all of their peculiarities. One of those has been highlighted recently, and it's a sneaky one.

DMSO is, of course, the standard solvent used to take up test compounds for pharmacological assays. It's water-miscible and dissolves a huge range of organic compounds. Most of the time it's fine (unless you push its final concentration too high in the assay). But it's most definitely not fine for the platinum complexes. This paper shows that DMSO displaces the starting ligands, forming a new platinum complex that does not show the desired activity in cells. What's more, a look through the literature shows that up to one third of the reported in vitro studies on these compounds used DMSO to dissolve them, which throws their conclusions immediately into doubt. And since nearly half the papers did not even mention the solvent used, you'd have to think that DMSO showed up a good amount of the time in those as well.

What's even more disturbing is that these sorts of problems were first reported over twenty years ago, but it's clear that this knowledge has not made it into general circulation. So the word needs to get out: never dissolve cisplatin (or the related complexes) in DMSO, even though that might seem like the obvious thing to do. Editors and referees should take note as well.

Comments (14) + TrackBacks (0) | Category: Biological News | Drug Assays

May 8, 2014

Oh Brave New World

Email This Entry

Posted by Derek

So since we've already had some real synthetic biology this morning, how about some wild-eyed fantasy? Because that's what this is (link courtesy of Chemjobber). Here you go, see what you think:

Molecular biologist and futurist Andrew Hessel . . .envisions a world in which every individual receives pharmaceutical drugs perfectly formulated to their genetic and medical needs for a fraction of what treatment would currently cost. . .

(Says Hessel): "I’m driven by the idea that one day anyone with cancer, no matter what or where, (can get) a cheap genetic screen and some sort of molecular pathology … and have a computer generated medicine generated in hours. It’s not sold to them, it’s just available on subscription. Have you seen Dallas Buyers Club? Don’t sell the medicine, sell the subscription. The biggest change here is that you can make medicines for one person, so the blockbuster model no longer applies."

Well, that's all right, then. For some values of "one day", he might even be correct. But the problem with interviews like this (which the early-1990s issues of Wired used to specialize in) is that they make all this stuff sound like it's just about to start rolling out of someone's garage. Once again, we see the difference between the computer-based tech world, where software (and sometimes hardware) can indeed roll out of someone's garage, and biomedical research, where it ain't so swift.

As you read further, you realize that the "medicines" he's talking about are engineered viruses, which he seems to believe are such well-validated tools in oncology and infectious disease that they're ready for everyone to start cranking them out and dosing themselves in the spare bedroom. Really, the only thing standing in the way are the old fossils who don't understand how the world works today:

There is a challenge in getting people beyond the automatic fear of the word virus or the engineering thereof. Certainly, the kind of idea that anyone is going to use these tools to make Ebola or smallpox is kind of laughable. I think we’ve figured out those blocks. But, the potentials for positive engineering are so dramatic, particularly when it comes to making medicines.

I think it’s going to take a generational change in the end. Younger people aren’t afraid of these technologies in the same way that people my age typically are. They grow up learning about molecular biology in school. So, on those fronts, they’re much more comfortable with the technology and also with the idea of sharing. You know, the privacy issues are very different for younger people today than for older people.

I don't really know how to respond to this stuff. I mean, I get flak from people sometimes for being too optimistic and too enthusiastic about new stuff I just read about in the literature, but sheesh - this guy, and his tribe (which is not a small one) are ascending into the clouds and leaving people like me reading the vapor trails. This, to me, is a perfect example of someone who thinks that the way that (say) Twitter and Instagram were founded represents some sort of massive shift in human history, and that somehow the world has transformed into a big digital playground where cool coders make reality do what they want. This sort of talk is what an even-more-curmudgeonly Tom Wolfe once described as "digibabble".

Code is mighty and code is powerful, but it's mighty and powerful because we humans all like to use these little electrical-powered boxes to talk to each other. But if you look through a microscope or a telescope, you can see that Man is not the measure of all things, and the universe is full of objects that don't know or care about what humans like to do. Viruses are very much in that category. They're not just little bits of software, not quite, even though they might be the closest approximation to it in the real world.

But that's not close enough. That's why (just to pick one example) I don't actually find the idea of someone engineering an awful infectious virus to be "kind of laughable". In a world where doomsday cults have released home-made nerve gas into the Tokyo subway, where raving lunatics armed with machine guns and RPGs abduct schoolgirls by the hundreds to sell them off, where. . .well, do I have to go on? You can read the headlines as well as I can. It seems abundantly clear to me, even with my sunny disposition, that there are people in this world who would rejoice at the idea of being able to engineer their own Sneezing Ebola, and the idea gives me the willies. Let's hope some of those crazy social-media kids, the ones who, like, grew up hearing about molecular biology, can then whip up an open-source vaccine and tell the rest of us about it with Vine or something!

Postscript: at the same time, I actually think that genie of distributed biotechnology (like most genies) will resist being stuffed back into the bottle. On balance, I still think that it's a good thing, but there are some very big things on both sides of that balance. Previous rantings, of varying severity, about the general topic of reality versus digi-hype can be found here, here, here, here, and here.

Update: I wanted to highlight this comment by Markysparky, because it's so darn excellent:

"Classic riddles become more interesting in this guy's reality...

Q: How to cross a river in a double-occupancy rowboat with a wolf, a goat, and a cabbage?

A: Engineer a GMO cabbage that produces a goat-repellant molecule. Train the wolf to be vegetarian through an artificial intelligence-powered app on your iPad. Erect a solar farm to power an electric fence to contain the goat. Use drones to move items instead of rowboats. Make goat-armor with 3D printing. Use Uber to summon additional boats. etc etc"

Comments (30) + TrackBacks (0) | Category: Biological News

Artificial Base Pairs in Living Cells

Email This Entry

Posted by Derek

Synthetic biology seems to have taken another big step. Many labs over the years have tried out expanding the genetic code in various ways, but all of these efforts have run in in vitro systems. Now the first organism has been engineered with a working unnatural base pair, according to this paper in Nature from the Romesberg group at Scripps.
[Image: base pair]
The base pair in question is d5SICS and dNaM, shown at left, and a history of how they were developed is here. This class of interaction was found by screening thousands of possible combinations, and it's notable that there's no hydrogen bonding going on between the two residues. (It's worth keeping in mind that the current AT/CG base pairing system was presumably also arrived at by screening a wide variety of candidates until something worked!)

There are a number of tricky steps needed to get this to work:

However, expansion of an organism’s genetic alphabet presents new and unprecedented challenges: the unnatural nucleoside triphosphates must be available inside the cell; endogenous polymerases must be able to use the unnatural triphosphates to faithfully replicate DNA containing the UBP within the complex cellular milieu; and finally, the UBP must be stable in the presence of pathways that maintain the integrity of DNA.

A transporter spliced in from algae can bring in the unnatural triphosphates, as it turns out, but a next step would be getting enzymatic machinery inside the cell to make them. But the existing enzymes can handle them once they're available, and replicate plasmids containing these pairs, which also don't get tagged as DNA errors and snipped out by any of the endogenous repair mechanisms. So another bridge has indeed been crossed.

Romesberg has started a company, Synthorx, to try to take advantage of the chemical biology possibilities in this work. (I realize that I'm probably supposed to think "Syntho-Rx" when I see that, but my brain persists in saying "Syn-thorks".) I can imagine, down the road, some very interesting assay development possibilities that follow from this technique, with what might be very high signal/noise ratios, so this is worth keeping an eye on.

Update: a criticism of the press coverage of this paper, which has indeed not been very well informed.

Comments (10) + TrackBacks (0) | Category: Biological News | Life As We (Don't) Know It

May 6, 2014

CRISPR In the Courts

Email This Entry

Posted by Derek

Here's an article from the Independent on the legal battles that are underway about CRISPR technology. On one level, it can be a somewhat ugly story, but it also shows how much of a discovery the technique has been, that people are willing to fight for the rights to it so vigorously. But it's going to take a lot of straightening out:

On the one side is a consortium of world-class researchers led by French-born Professor Emmanuelle Charpentier who made a key discovery behind the Crispr gene editing technique and has been promised $25m (£16m) by a group of venture capitalists to commercialise her invention for medical use.

On the other side is her former colleague and the co-discoverer of the gene-editing process, Professor Jennifer Doudna of the University of California, Berkeley, who has joined a rival consortium of researchers with $43m in venture capital to advance the Crispr technique into the clinic.

Each group has recruited a formidable panel of senior scientists as advisers. The Charpentier team, called Crispr Therapeutics, includes Nobel Laureate Craig Mello, the co-discoverer of a gene-silencing technique known as RNAi, and Daniel Anderson of the Massachusetts Institute of Technology, who was the first person to show that Crispr can cure a genetic disease in an adult animal.

Meanwhile the Doudna team, known as Editas Medicine, includes the Harvard geneticist George Church, a pioneer in synthetic biology, and Feng Zhang of MIT and the Broad Institute, who successfully managed to get Crispr to work in human cells and was this month awarded the first US patent on the technique – much to the dismay of Professor Charpentier.

Another crack at human gene therapy, that's one of the biggest engines driving all this. I hope that the legal wrangling doesn't slow that down. . .

Comments (11) + TrackBacks (0) | Category: Biological News | Patents and IP

May 5, 2014

Young Blood

Email This Entry

Posted by Derek

Anti-aging studies, when they make the news, fall into three unequal categories. There's a vast pile of quackery, which mercifully isn't (for the most part) newsworthy. There are studies whose conclusions are misinterpreted by some reporters, or overblown by one party or another. And there's a small cohort of really interesting stuff.

Yesterday's news in the field very much looks like it belongs in that last set. Two papers (here and here) came out early in Science that result from long-running research programs on what happens when young mice and old mice have their circulatory systems joined together, coming from the labs of Amy Wagers and Richard Lee at Brigham and Women's Hospital in Boston, and Lee Rubin's group at Harvard. Wagers herself started on this work as a postdoc at Stanford almost fifteen years ago, and she clearly hit on a project with some real staying power. A third new paper in Nature Medicine, from Tony Wyss-Coray's group at Stanford, also bears on the same topic (see below).

The aged rodents seem to benefit from exposure to substances in the youthful blood, and one of these seems to be a protein called GDF11. Wagers and Lee had already reported that administering this protein alone can ameliorate age-related changes in rodent heart muscle, and these latest papers extend the effects to skeletal muscle (both baseline performance and recovery from injury) and to brain function (specifically olfactory sensing and processing, which mice put a lot of effort into).

So the natural thought is to give aging humans the homolog of GDF11 and see what happens, and it wouldn't surprise me if someone in Boston ponies up the money to try it. You might need a lot of protein, though, and there's no telling how often you'd need infusions of it, but to roll back aging, people would presumably put up with quite a bit of inconvenience. Another approach, which is also being pursued, is the dig-into-the-biology route, in an attempt to figure out what GDF11's signaling pathways are and which ones are important for the anti-aging effects. That's when the medicinal chemists will look up from the bench, because there might be some small-molecule targets in there.

That's going to be a long process, though, most likely. GDF11 seems to have a lot of different functions. Interestingly, it's actually known as an inhibitor of neurogenesis, which might be a quick illustration of how much we don't know about it and its roles. It would seem very worthwhile to try to sort these things out, but there are a lot of worthwhile biochemical pathways whose sorting-out is taking a while.

The Wyss-Coray paper goes in the other direction, though. Building on earlier work of their own, they've seen beneficial effects on the hippocampus of older mice after the circulatory connection with younger animals, but were able to reproduce a fair amount of that by just injecting younger blood plasma itself. This makes you wonder if the "teenage transfusion" route might be a much simpler way to go - simple enough, in fact, that I'm willing to put down money on the possibility of some experimentally-minded older types trying it out on their own very shortly. Wyss-Coray is apparently planning a clinical trial as we speak, having formed a company called Alkahest for just that purpose. Since blood plasma is given uncounted thousands of times a day in every medical center in the country, this route should have a pretty easy time of it from the FDA. But I'd guess that Alkahest is still going to have to identify specific aging-related disease states for its trials, because aging, just by itself, has no regulatory framework for treatment, since it's not considered a disease per se. The FDA has consistently avoided going into making-normal-people-better territory, not that I can blame them, but they may not be able to dodge the question forever. At least, I hope they won't be able to. You also have to wonder what something like this would do to the current model of blood donation and banking, if it turns out that plasma from an 18-year-old is worth a great deal more than plasma from a fifty-year-old. I hope that the folks at the Red Cross are keeping up with the literature.

Irreverent aside: (Countess Báthory, an apparent pioneer in this field whose dosing protocols were suboptimal, does not seem to be cited in any of the press reports I've seen. Not sure about her publication record, though - maybe she's hard to reference from the primary literature.)

Comments (14) + TrackBacks (0) | Category: Aging and Lifespan | Biological News

April 14, 2014

More on the Science Chemogenomic Signatures Paper

Email This Entry

Posted by Derek

[Image: phenol equilibrium]
This will be a long one. I'm going to take another look at the Science paper that stirred up so much comment here on Friday. In that post, my first objection (but certainly not my only one) was the chemical structures shown in the paper's Figure 2. A number of them are basically impossible, and I just could not imagine how this got through any sort of refereeing process. There is, for example, a cyclohexadien-one structure, shown at left, and that one just doesn't exist as such - it's phenol, and those equilibrium arrows, though very imbalanced, are still not drawn to scale.
[Image: substructure alignment]
Well, that problem is solved by those structures being intended as fragments, substructures of other molecules. But I'm still positive that no organic chemist was involved in putting that figure together, or in reviewing it, because the reason that I was confused (and many other chemists were as well) is that no one who knows organic chemistry draws substructures like this. What you want to do is put dashed bonds in there, or R groups, as shown. That does two things: it shows that you're talking about a whole class of compounds, not just the structure shown, and it also shows where things are substituted. Now, on that cyclohexadienone, there's not much doubt where it's substituted, once you realize that someone actually intended it to be a fragment. It can't exist unless that carbon is tied up, either with two R groups (as shown), or with an exo-alkene, in which case you have a class of compounds called quinone methides. We'll return to those in a bit, but first, another word about substructures and R groups.
[Image: THF with R group]
Figure 2 also has many structures in it where the fragment structure, as drawn, is a perfectly reasonable molecule (unlike the example above). Tetrahydrofuran and imidazole appear, and there's certainly nothing wrong with either of those. But if you're going to refer to those as common fragments, leading to common effects, you have to specify where they're substituted, because that can make a world of difference. If you still want to say that they can be substituted at different points, then you can draw a THF, for example, with a "floating" R group as shown at left. That's OK, and anyone who knows organic chemistry will understand what you mean by it. If you just draw THF, though, then an organic chemist will understand that to mean just plain old THF, and thus the misunderstanding.

If the problems with this paper ended at the level of structure drawing, which many people will no doubt see as just a minor aesthetic point, then I'd be apologizing right now. (Update: although it is irritating. On Twitter, I just saw that someone spotted "dihydrophyranone" on this figure, which someone figured was close enough to "dihydropyranone", I guess, and anyway, it's just chemistry.) But they don't end there. It struck me when I first saw this work that sloppiness in organic chemistry might be symptomatic of deeper trouble, and I think that's the case. The problems just keep on coming. Let's start with those THF and imidazole rings. They're in Figure 2 because they're supposed to be substructures that lead to some consistent pathway activity in the paper's huge (and impressive) yeast screening effort. But what we're talking about is a pharmacophore, to use a term from medicinal chemistry, and just "imidazole" by itself is too small a structure, from a library of 3200 compounds, to be a likely pharmacophore. Particularly when you're not even specifying where it's substituted and how. There are all kinds of imidazoles out there, and they do all kinds of things.
[Image: four imidazoles]
So just how many imidazoles are in the library, and how many caused this particular signature? I think I've found them all. Shown at left are the four imidazoles (and there are only four) that exhibit the activity shown in Figure 2 (ergosterol depletion / effects on membrane). Note that all four of them are known antifungals - which makes sense, given that the compounds were chosen for their ability to inhibit the growth of yeast, and topical antifungals will indeed do that for you. And that phenotype is exactly what you'd expect from miconazole, et al., because that's their known mechanism of action: they mess up the synthesis of ergosterol, which is an essential part of the fungal cell membrane. It would be quite worrisome if these compounds didn't show up under that heading. (Note that miconazole is on the list twice).
[Image: other imidazoles]
But note that there are nine other imidazoles that don't have that same response signature at all - and I didn't even count the benzimidazoles, and there are many, although from that structure in Figure 2, who's to say that they shouldn't be included? What I'm saying here is that imidazole by itself is not enough. A majority of the imidazoles in this screen actually don't get binned this way. You shouldn't look at a compound's structure, see that it has an imidazole, and then decide by looking at Figure 2 that it's therefore probably going to deplete ergosterol and lead to membrane effects. (Keep in mind that those membrane effects probably aren't going to show up in mammalian cells, anyway, since we don't use ergosterol that way).
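
To make that point concrete, here's a quick substructure-matching sketch in RDKit (my own illustration, not anything from the paper). A bare imidazole query hits miconazole, histamine, and metronidazole alike; even just requiring N-substitution starts to tell them apart. The SMILES are standard structures for those three compounds:

    from rdkit import Chem

    # a bare imidazole query, and one that at least requires a carbon on a ring nitrogen
    imidazole = Chem.MolFromSmarts("c1cncn1")
    n_substituted = Chem.MolFromSmarts("[#6]-n1ccnc1")

    compounds = {
        "miconazole":    "Clc1ccc(C(Cn2ccnc2)OCc2ccc(Cl)cc2Cl)c(Cl)c1",
        "histamine":     "NCCc1c[nH]cn1",
        "metronidazole": "Cc1ncc([N+](=O)[O-])n1CCO",
    }
    for name, smi in compounds.items():
        mol = Chem.MolFromSmiles(smi)
        print(name,
              mol.HasSubstructMatch(imidazole),       # True for all three
              mol.HasSubstructMatch(n_substituted))   # False for histamine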

There are other imidazole-containing antifungals on the list that are not marked down for "ergosterol depletion / effects on membrane". Ketoconazole is SGTC_217 and 1066, and one of those runs gets this designation, while the other one gets signature 118. Both bifonazole and sertaconazole also inhibit the production of ergosterol - although, to be fair, bifonazole does it by a different mechanism. It gets annotated as Response Signature 19, one of the minor ones, while sertaconazole gets marked down for "plasma membrane distress". That's OK, though, because it's known to have a direct effect on fungal membranes separate from its ergosterol-depleting one, so it's believable that it ends up in a different category. But there are plenty of other antifungals on this list, some containing imidazoles and some containing triazoles, whose mechanism of action is also known to be ergosterol depletion. Fluconazole, for example, is SGTC_227, 1787 and 1788, and that's how it works. But its signature is listed as "Iron homeostasis" once and "azole and statin" twice. Itraconazole is SGTC_1076, and it's also annotated as Response Signature 19. Voriconazole is SGTC_1084, and it's down as "azole and statin". Climbazole is SGTC_2777, and it's marked as "iron homeostasis" as well. This scattering of known drugs between different categories is possibly an indicator of this screen's ability to differentiate them, or possibly an indicator of its inherent limitations.

Now we get to another big problem, the imidazolium at the bottom of Figure 2. It is, as I said on Friday, completely nuts to assign a protonated imidazole to a different category than a nonprotonated one. Note that several of the imidazole-containing compounds mentioned above are already protonated salts - they, in fact, fit the imidazolium structure drawn, rather than the imidazole one that they're assigned to. This mistake alone makes Figure 2 very problematic indeed. If the paper was, in fact, talking about protonated imidazoles (which, again, is what the authors have drawn) it would be enough to immediately call into question the whole thing, because a protonated imidazole is the same as a regular imidazole when you put it into a buffered system. In fact, if you go through the list, you find that what they're actually talking about are N-alkylimidazoliums, so the structure at the bottom of Figure 2 is wrong, and misleading. There are two compounds on the list with this signature, in case you were wondering, but the annotation may well be accurate, because some long-chain alkylimidazolium compounds (such as ionic liquid components) are already known to cause mitochondrial depolarization.
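
Just to put a number on "the buffer decides": with a conjugate-acid pKa of roughly 7 for a typical imidazole, the Henderson-Hasselbalch equation says only about a quarter of it is protonated at pH 7.4, no matter how the structure was drawn. (Quaternized imidazoliums of the ionic-liquid sort, by contrast, keep their positive charge at any pH, which is exactly why lumping them in with a neutral ring is misleading.) A back-of-the-envelope version:

    def fraction_protonated(pH, pKa=7.0):
        # Henderson-Hasselbalch for a base B + H+ <-> BH+ with conjugate-acid pKa
        return 1.0 / (1.0 + 10.0 ** (pH - pKa))

    print(round(fraction_protonated(7.4), 2))   # ~0.28 at physiological pH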

But there are several other alkylimidazolium compounds in the set (which is a bit odd, since they're not exactly drug-like). And they're not assigned to the mitochondrial distress phenotype, as Figure 2 would have you think. SGTC_1247, 179, 193, 1991, 327, and 547 all have this moiety, and they scatter between several other categories. Once again, a majority of compounds with the Figure 2 substructure don't actually map to the phenotype shown (while plenty of other structural types do). What use, exactly, is Figure 2 supposed to be?

Let's turn to some other structures in it. The impossible/implausible ones, as mentioned above, turn out to be that way because they're supposed to have substituents on them. But look around - adamantane is on there. To put it as kindly as possible, adamantane itself is not much of a pharmacophore, having nothing going for it but an odd size and shape for grease. Tetrahydrofuran (THF) is on there, too, and similar objections apply. When attempts have been made to rank the sorts of functional groups that are likely to interact with protein binding sites, ethers always come out poorly. THF by itself is not some sort of key structural unit; highlighting it as one here is, for a medicinal chemist, distinctly weird.

What's also weird is that when I search for THF-containing compounds that show this activity signature, I can't find much. The only things with a THF ring in them seem to be SGTC_2563 (the complex natural product tomatine) and SGTC_3239, and neither one of them is marked with the signature shown. There are some embedded THF rings, as in the other structural fragments shown (the succinimide-derived Diels-Alder ones), but no other THFs - and as mentioned, it's truly unlikely that the ether is the key thing about these compounds, anyway. If anyone finds another THF compound annotated for tubulin folding, I'll correct this post immediately, but for now, I can't seem to track one down, even though Table S4 says that there are 65 of them. Again, what exactly is Figure 2 supposed to be telling anyone?

Now we come to some even larger concerns. The supplementary material for the paper says that 95% of the compounds on the list are "drug-like" and were filtered by the commercial suppliers to eliminate reactive compounds. They do caution that different people have different cutoffs for this sort of thing, and boy, do they ever. There are many, many compounds in this collection that I would not have bothered putting into a cell assay, for fear of hitting too many things and generating uninterpretable data. Quinone methides are a good example - as mentioned before, they're in this set. Rhodanines and similar scaffolds are well represented, and are well known to hit all over the place. Some of these things are tested at hundreds of micromolar.

I recognize that one aim of a study like this is to stress the cells by any means necessary and see what happens, but even with that in mind, I think fewer nasty compounds could have been used, and might have given cleaner data. The curves seen in the supplementary data are often, well, ugly. See the comments section from the Friday post on that, but I would be wary of interpreting many of them myself.
[Image: insolubles]
There's another problem with these compounds, which might very well have also led to the nastiness of the assay curves. As mentioned on Friday, how can anyone expect many of these compounds to actually be soluble at the levels shown? I've shown a selection of them here; I could go on. I just don't see any way that these compounds can be realistically assayed at these levels. Visual inspection of the wells would surely show cloudy gunk all over the place. Again, how are such assays to be interpreted?

And one final point, although it's a big one. Compound purity. Anyone who's ever ordered three thousand compounds from commercial and public collections will know, will be absolutely certain that they will not all be what they say on the label. There will be many colors and consistencies, and LC/MS checks will show many peaks for some of these. There's no way around it; that's how it is when you buy compounds. I can find no evidence in the paper or its supplementary files that any compound purity assays were undertaken at any point. This is not just bad procedure; this is something that would have caused me to reject the paper all by itself had I refereed it. This is yet another sign that no one who's used to dealing with medicinal chemistry worked on this project. No one with any experience would just bung in three thousand compounds like this and report the results as if they're all real. The hits in an assay like this, by the way, are likely to be enriched in crap, making this more of an issue than ever.

Damn it, I hate to be so hard on so many people who did so much work. But wasn't there a chemist anywhere in the room at any point?

Comments (39) + TrackBacks (0) | Category: Biological News | Chemical Biology | Chemical News | The Scientific Literature

March 28, 2014

A Huntington's Breakthrough?

Email This Entry

Posted by Derek

Huntington's is a terrible disease. It's the perfect example of how genomics can only take you so far. We've known since 1993 what the gene is that's mutated in the disease, and we know the protein that it codes for (Huntingtin). We even know what seems to be wrong with the protein - it has a repeating chain of glutamines on one end. If your tail of glutamines is less than about 35 repeats, then you're not going to get the disease. If you have 36 to 39 repeats, you are in trouble, and may very well come down with the less severe end of Huntington's. If there are 40 or more, doubt is tragically removed.
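
Those cutoffs are simple enough to write down directly. Here's a toy sketch using the approximate boundaries above; real genetic testing is more careful than this (for one thing, glutamine can also be encoded by CAA, and the tract is usually sized by fragment analysis rather than by reading raw sequence):

    import re

    def longest_cag_run(dna):
        # longest run of consecutive CAG codons in the sequence
        runs = re.findall(r"(?:CAG)+", dna.upper())
        return max((len(r) // 3 for r in runs), default=0)

    def classify(repeats):
        if repeats <= 35:
            return "unaffected range"
        if repeats <= 39:
            return "reduced penetrance"
        return "full penetrance"

    toy = "GCC" + "CAG" * 42 + "CCG"     # made-up sequence with 42 repeats
    n = longest_cag_run(toy)
    print(n, classify(n))                # 42 full penetrance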

So we can tell, with great precision, if someone is going to come down with Huntington's, but we can't do a damn thing about it. That's because despite a great deal of work, we don't really understand the molecular mechanism at work. This mutated gene codes for this defective protein, but we don't know what it is about that protein that causes particular regions of the brain to deteriorate. No one knows what all of Huntingtin's functions are, and not for lack of trying, and multiple attempts to map out its interactions (and determine how they're altered by a too-long N-terminal glutamine tail) have not given a definite answer.

But maybe, as of this week, that's changed. Solomon Snyder's group at Johns Hopkins has a paper out in Nature that suggests an actual mechanism. They believe that mutant Huntingtin binds (inappropriately) a transcription factor called "specificity protein 1", which is known to be a major player in neurons. Among other things, it's responsible for initiating transcription of the gene for an enzyme called cystathionine γ-lyase. That, in turn, is responsible for the last step in cysteine biosynthesis, and put together, all this suggests a brain-specific depletion of cysteine. (Update: this could have numerous downstream consequences - this is the pathway that produces hydrogen sulfide, which the Snyder group has shown is an important neurotransmitter (one of several they've discovered), and it's also involved in synthesizing glutathione. Cysteine itself is, of course, often a crucial amino acid in many protein structures as well.)

Snyder is proposing this as the actual mechanism of Huntington's, and they have shown, in human tissue culture and in mouse models of the disease, that supplementation with extra cysteine can stop or reverse the cellular signs of the disease. This is a very plausible theory (it seems to me), and the paper makes a very strong case for it. It should lead to immediate consequences in the clinic, and in the labs researching possible therapies for the disease. And one hopes that it will lead to immediate consequences for Huntington's patients themselves. If I knew someone with the Huntingtin mutation, I believe that I would tell them to waste no time taking cysteine supplements, in the hopes that some of it will reach the brain.

Comments (20) + TrackBacks (0) | Category: Biological News | The Central Nervous System

March 27, 2014

Another Target Validation Effort

Email This Entry

Posted by Derek

Here's another target validation initiative, with GSK, the EMBL, and the Sanger Institute joining forces. It's the Centre for Therapeutic Target Validation (CTTV):

CTTV scientists will combine their expertise to explore and interpret large volumes of data from genomics, proteomics, chemistry and disease biology. The new approach will complement existing methods of target validation, including analysis of published research on known biological processes, preclinical animal modelling and studying disease epidemiology. . .

This new collaboration draws on the diverse, specialised skills from scientific institutes and the pharmaceutical industry. Scientists from the Wellcome Trust Sanger Institute will contribute their unique understanding of the role of genetics in health and disease and EMBL-EBI, a global leader in the analysis and dissemination of biological data, will provide bioinformatics-led insights on the data and use its capabilities to integrate huge streams of different varieties of experimental data. GSK will contribute expertise in disease biology, translational medicine and drug discovery.

That's about as much detail as one could expect for now. It's hard to tell what sorts of targets they'll be working on, and by "what sorts" I mean what disease areas, what stage of knowledge, what provenance, and everything else. But the press release goes on to say that the information gathered by this effort will be open to the rest of the scientific community, which I applaud, and that should give us a chance to look under the hood a bit.

It's hard for me to say anything bad about such an effort, other than wishing it done on a larger scale. I was about to say "other than wishing it ten times larger", but I think I'd rather have nine other independent efforts set up than making this one huge, for several reasons. Quis validet ipsos validares, if that's a Latin verb and I haven't mangled it: Who will validate the validators? There's enough trickiness and uncertainty in this stuff for plenty more people to join in.

Comments (11) + TrackBacks (0) | Category: Biological News | Drug Assays

March 24, 2014

Google's Big Data Flu Flop

Email This Entry

Posted by Derek

Some of you may remember the "Google Flu" effort, where the company was going to try to track outbreaks of influenza in the US by mining Google queries. There was never much clarification about what terms, exactly, they were going to flag as being indicative of someone coming down with the flu, but the hype (or hope) at the time was pretty strong:

Because the relative frequency of certain queries is highly correlated with the percentage of physician visits in which a patient presents with influenza-like symptoms, we can accurately estimate the current level of weekly influenza activity in each region of the United States, with a reporting lag of about one day. . .

So how'd that work out? Not so well. Despite a 2011 paper that seemed to suggest things were going well, the 2013 epidemic wrong-footed the Google Flu Trends (GFT) algorithms pretty thoroughly.

This article in Science finds that the real-world predictive power has been pretty unimpressive. And the reasons behind this failure are not hard to understand, nor were they hard to predict. Anyone who's ever worked with clinical trial data will see this one coming:

The initial version of GFT was a particularly problematic marriage of big and small data. Essentially, the methodology was to find the best matches among 50 million search terms to fit 1152 data points. The odds of finding search terms that match the propensity of the flu but are structurally unrelated, and so do not predict the future, were quite high. GFT developers, in fact, report weeding out seasonal search terms unrelated to the flu but strongly correlated to the CDC data, such as those regarding high school basketball. This should have been a warning that the big data were overfitting the small number of cases—a standard concern in data analysis. This ad hoc method of throwing out peculiar search terms failed when GFT completely missed the nonseasonal 2009 influenza A–H1N1 pandemic.
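
That overfitting warning is easy to reproduce on a toy scale: generate enough random "search terms" and some of them will track a short target series by chance alone, long before you get anywhere near 50 million candidates. A minimal sketch with invented numbers:

    import numpy as np

    rng = np.random.default_rng(0)
    flu = rng.normal(size=100)              # pretend weekly flu activity
    terms = rng.normal(size=(1000, 100))    # 1,000 pretend search-term series

    best = max(abs(np.corrcoef(flu, t)[0, 1]) for t in terms)
    print(best)   # typically well above 0.3, even though none of it is real signal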

The Science authors have a larger point to make as well:

“Big data hubris” is the often implicit assumption that big data are a substitute for, rather than a supplement to, traditional data collection and analysis. Elsewhere, we have asserted that there are enormous scientific possibilities in big data. However, quantity of data does not mean that one can ignore foundational issues of measurement and construct validity and reliability and dependencies among data. The core challenge is that most big data that have received popular attention are not the output of instruments designed to produce valid and reliable data amenable for scientific analysis.

The quality of the data matters very, very, much, and quantity is no substitute. You can make a very large and complex structure out of toothpicks and scraps of wood, because those units are well-defined and solid. You cannot do the same with a pile of cotton balls and dryer lint, not even if you have an entire warehouse full of the stuff. If the individual data points are squishy, adding more of them will not fix your analysis problem; it will make it worse.

Since 2011, GFT has missed (almost invariably on the high side) in 108 out of 111 weeks. As the authors show, even low-tech extrapolation from three-week-lagging CDC data would have done a better job. But then, the CDC data are a lot closer to being real numbers. Something to think about next time someone's trying to sell you on a Big Data project. Only trust the big data when the little data are trustworthy in turn.

Update: a glass-half-full response in the comments.

Comments (18) + TrackBacks (0) | Category: Biological News | Clinical Trials | Infectious Diseases

March 20, 2014

Years Worth of the Stuff

Email This Entry

Posted by Derek

[Image: bAP15]
This time last year I mentioned a particularly disturbing-looking compound, sold commercially as a so-called "selective inhibitor" of two deubiquitinase enzymes. Now, I have a fairly open mind about chemical structures, but that thing is horrible, and if it's really selective for just those two proteins, then I'm off to truck-driving school just like Mom always wanted.

Here's an enlightening look through the literature at this whole class of compound, which has appeared again and again. The trail seems to go back to this 2001 paper in Biochemistry. By 2003, you see similar motifs showing up as putative anticancer agents in cell assays, and in 2006 the scaffold above makes its appearance in all its terrible glory.

The problem is, as Jonathan Baell points out in that HTSpains.com post, that this series has apparently never really had a proper look at its SAR, or at its selectivity. It wanders through a series of publications full of on-again off-again cellular readouts, with a few tenuous conclusions drawn about its structure - and those are discarded or forgotten by the time the next paper comes around. As Baell puts it:

The dispiriting thing is that with or without critical analysis, this compound is almost certainly likely to end up with vendors as a “useful tool”, as they all do. Further, there will be dozens if not hundreds of papers out there where entirely analogous critical analyses of paper trails are possible.

The bottom line: people still don’t realize how easy it is to get a biological readout. The more subversive a compound, the more likely this is. True tools and most interesting compounds usually require a lot more medicinal chemistry and are often left behind or remain undiscovered.

Amen to that. There is way too much of this sort of thing in the med-chem literature already. I'm a big proponent of phenotypic screening, but setting up a good one is harder than setting up a good HTS, and working up the data from one is much harder than working up the data from an in vitro assay. The crazier or more reactive your "hit" seems to be, the more suspicious you should be.

The usual reply to that objection is "Tool compound!" But the standards for a tool compound, one used to investigate new biology and cellular pathways, are higher than usual. How are you going to unravel a biochemical puzzle if you're hitting nine different things, eight of which you're totally unaware of? Or skewing your assay readouts by some other effect entirely? This sort of thing happens all the time.

I can't help but think about such things when I read about a project like this one, where IBM's Watson software is going to be used to look at sequences from glioblastoma patients. That's going to be tough, but I think it's worth a look, and the Watson program seems to be just the correlation-searcher for the job. But the first thing they did was feed in piles of biochemical pathway data from the literature, and the problem is, a not insignificant proportion of that data is wrong. Statements like these are worrisome:

Over time, Watson will develop its own sense of what sources it looks at are consistently reliable. . .if the team decides to, it can start adding the full text of articles and branch out to other information sources. Between the known pathways and the scientific literature, however, IBM seems to think that Watson has a good grip on what typically goes on inside cells.

Maybe Watson can tell the rest of us, then. Because I don't know of anyone actually doing cell biology who feels that way, not if they're being honest with themselves. I wish the New York Genome Center and IBM luck in this, and I still think it's a worthwhile thing to at least try. But my guess is that it's going to be a humbling experience. Even if all the literature were correct in every detail, I think it would be one. And the literature is not correct in every detail. It has compounds like that one at the top of the entry in it, and people seem to think that they can draw conclusions from them.

Comments (18) + TrackBacks (0) | Category: Biological News | Cancer | Chemical Biology | Drug Assays | The Scientific Literature

March 12, 2014

Stem Cell Shakedown Cruise

Email This Entry

Posted by Derek

OK, now that recent stem cell report is really in trouble. One of the main authors, Teruhiko Wakayama, is saying that the papers should be withdrawn. Here's NHK:

Wakayama told NHK he is no longer sure the STAP cells were actually created. He was in charge of important experiments to check the pluripotency of the cells.

He said a change in a specific gene is key proof that the cells are created. He said team members were told before they released the papers that the gene had changed.

Last week, RIKEN disclosed detailed procedures for making STAP cells after outside experts failed to replicate the results outlined in the Nature article. Wakayama pointed out that in the newly released procedures, RIKEN says this change didn't take place.

He said he reviewed test data submitted to the team's internal meetings and found multiple serious problems, such as questionable images.

These are the sorts of things that really should be ironed out before you make a gigantic scientific splash, you'd think. But I can understand how these things happen, too - a big important result, a groundbreaking discovery, and you think that someone else is probably bound to find the same thing within a month. Within a week. So you'd better publish as fast as you can, unless you feel like being a footnote when the history gets written and the prizes get handed out. There are a few details that need to be filled in? That's OK - just i-dotting and t-crossing, that stuff will be OK. The important thing is to get the discovery out to the world.

But that stuff comes back to bite you, big-time. Andrew Wiles was able to fix his proof of Fermat's Last Theorem post-announcement, but (a) that problem was non-obvious (he didn't know it was there at first), and (b) biology ain't math. Cellular systems are flaky, fluky, and dependent on a lot of variables, some of which you might not even be aware of. An amazing result in an area as tricky as stem cell generation needs a lot of shaking down, and it seems that this one has gotten it. Well, it's getting it now.

Comments (13) + TrackBacks (0) | Category: Biological News

February 27, 2014

A Close Look at Receptor Signaling

Email This Entry

Posted by Derek

[Image: azo-kainate]
[Image: ligand binding]
Ah, the good old central nervous system, and its good old receptors. Especially the good old ion channels - there's an area with enough tricky details built into it to keep us all busy for another few decades. Here's a good illustration, in a new paper from Nature Chemical Biology. The authors, from Berkeley, are looking at the ionotropic glutamate receptors, an important (and brainbendingly complex) group. These are the NMDA, AMPA, and kainate receptors, if you name them by their prototype ligands, and they're assembled as tetramers from mix-and-match subunit proteins, providing a variety of species even before you start talking about splice variants and the like. This paper used a couple of the simpler kainate systems as a proving ground.

They're working with azobenzene-linked compounds that can be photoisomerized, and using that property as a switch. Engineering a Cys residue close to the binding pocket lets them swivel the compound in and out (as shown), and this gives them a chance to see how many of the four individual subunits need to be occupied, and what the states of the receptor are along the way. (The ligand does nothing when it's not tethered to the protein). The diagram shows the possible occupancy states, and the colored-in version shows what they found for receptor activation.

You apparently need two ligands just to get anything to happen (and this is consistent with previous work on these systems). Three ligands buys you more signaling, and the fourth peaks things out. Patch-clamp studies had already shown that these things are apparently capable of stepwise signaling, and this work nails that down ingeniously. Presumably this whole tetramer setup has been under selection to take advantage of that property, and you'd have to assume that the NMDA and AMPA receptors (extremely common ones, by the way) are behaving similarly. The diagram shows the whole matrix of what seems to be going on.
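
If you want to see where the diagram's bookkeeping comes from, the occupancy states are just the sixteen ways of filling (or not filling) four sites. The response labels below are only the qualitative pattern described above, not numbers from the paper:

    from itertools import product

    # qualitative response pattern described above, not data from the paper
    response = {0: "none", 1: "none", 2: "partial", 3: "larger", 4: "maximal"}

    states = list(product((0, 1), repeat=4))    # 16 possible occupancy patterns
    for k in range(5):
        n = sum(1 for s in states if sum(s) == k)
        print(f"{k} subunits occupied: {n:2d} arrangements -> {response[k]} response")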

Comments (19) + TrackBacks (0) | Category: Biological News

February 21, 2014

Ces3 (Ces1) Inhibition As a Drug Target

Email This Entry

Posted by Derek

Update: the nomenclature of these enzymes is messy - see the comments.

Here's another activity-based proteomics result that I've been meaning to link to - in this one, the Cravatt group strengthens the case for carboxylesterase 3 as a potential target for metabolic disease. From what I can see, that enzyme was first identified back in about 2004, one of who-knows-how-many others that have similar mechanisms and can hydrolyze who-knows-how-many esters and ester-like substrates. Picking your way through all those things from first principles would be a nightmare - thus the activity-based approach, where you look for interesting phenotypes and work backwards.

In this case, they were measuring adipocyte behavior, specifically differentiation and lipid accumulation. A preliminary screen suggested that there were a lot of serine hydrolase enzymes active in these cells, and a screen with around 150 structurally diverse carbamates gave several showing phenotypic changes. The next step in the process is to figure out what particular enzymes are responsible, which can be done by fluorescence labeling (since the carbamates are making covalent bonds in the enzyme active sites). They found my old friend hormone-sensitive lipase, as well they should, but there was another enzyme that wasn't so easy to identify.
[Image: WWL113]
One particular carbamate, the unlovely but useful WWL113, was reasonably selective for the enzyme of interest, which turned out to be the abovementioned carboxylesterase 3 (Ces3). The urea analog (which should be inactive) did indeed show no cellular readouts, and the carbamate itself was checked for other activities (such as whether it was a PPAR ligand). These established a strong connection between the inhibitor, the enzyme, and the phenotypic effects.

With that in hand, they went on to find a nicer-looking compound with even better selectivity, WWL229. (I have to say, going back to my radio-geek days in the 1970s and early 1980s, that I can't see the letters "WWL" without hearing Dixieland jazz, but that's probably not the effect the authors are looking for). Using an alkyne derivative of this compound as a probe, it appeared to label only the esterase of interest across the entire adipocyte proteome. Interestingly, though, it appears that WWL113 was more active in vivo (perhaps for pharmacokinetic reasons?).
[Image: WWL229]
And those in vivo studies in mice showed that Ces3 inhibition had a number of beneficial effects on tissue and blood markers of metabolic syndrome - glucose tolerance, lipid profiles, etc. Histologically, the most striking effect was the clearance of adipose deposits from the liver (a beneficial effect indeed, and one that a number of drug companies are interested in). This recapitulates genetic modification studies in rodents targeting this enzyme, and shows that pharmacological inhibition could do the job. And while I'm willing to bet that the authors would rather have discovered a completely new enzyme target, this is solid work all by itself.

Comments (14) + TrackBacks (0) | Category: Biological News | Chemical Biology | Diabetes and Obesity

February 18, 2014

Not Again - Stem Cell Results in Trouble?

Email This Entry

Posted by Derek

Oh, @#$!. That was my first comment when I saw this story. That extraordinary recent work on creating stem cells by subjected normal cells to acid stress is being investigated:

The RIKEN centre in Kobe announced on Friday that it is looking into alleged irregularities in the work of biologist Haruko Obokata, who works at the institution. She shot to fame last month as the lead author on two papers published in Nature that demonstrated a simple way to reprogram mature mice cells into an embryonic state by simply applying stress, such as exposure to acid or physical pressure on cell membranes. The RIKEN investigation follows allegations on blog sites about the use of duplicated images in Obokata’s papers, and numerous failed attempts to replicate her results.

PubPeer gets the credit for bringing some of the problems into the light. There are some real problems with figures in the two papers, as well as earlier ones from the same authors. These might be explicable as simple mistakes, which is what the authors seem to be claiming, if it weren't for the fact that no one seems to be able to get the stem-cell results to reproduce. There are mitigating factors there, too - different cell lines, perhaps the lack of a truly detailed protocol from the original paper. But a paper should have enough details in it to be reproduced, shouldn't it?

Someone on Twitter was trying to tell me the other day that the whole reproducibility issue was being blown out of proportion. I don't think so. The one thing we seem to be able to reproduce is trouble.

Update: a list of the weirdest things (so far) about this whole business.

Comments (25) + TrackBacks (0) | Category: Biological News | The Scientific Literature

February 14, 2014

"It Is Not Hard to Peddle Incoherent Math to Biologists"

Email This Entry

Posted by Derek

Here's a nasty fight going on in molecular biology/bioinformatics. Lior Pachter of Berkeley describes some severe objections he has to published work from the lab of Manolis Kellis at MIT. (His two previous posts on these issues are here and here). I'm going to use a phrase that Pachter hears too often and say that I don't have the math to address those two earlier posts. But the latest one wraps things up in a form that everyone can understand. After describing what does look like a severe error in one of the Kellis group's conference presentations, an error that Pachter had pointed out in a review of the work, he says that:

. . .(they) spun the bad news they had received as “resulting from combinatorial connectivity patterns prevalent in larger network structures.” They then added that “…this combinatorial clustering effect brings into question the current definition of network motif” and proposed that “additional statistics…might well be suited to identify larger meaningful networks.” This is a lot like someone claiming to discover a bacteria whose DNA is arsenic-based and upon being told by others that the “discovery” is incorrect – in fact, that very bacteria seeks out phosphorous – responding that this is “really helpful” and that it “raises lots of new interesting open questions” about how arsenate gets into cells. Chutzpah. When you discover your work is flawed, the correct response is to retract it.

I don’t think people read papers very carefully. . .

He goes on to say:

I have to admit that after the Grochow-Kellis paper I was a bit skeptical of Kellis’ work. Not because of the paper itself (everyone makes mistakes), but because of the way he responded to my review. So a year and a half ago, when Manolis Kellis published a paper in an area I care about and am involved in, I may have had a negative prior. The paper was Luke Ward and Manolis Kellis “Evidence for Abundant and Purifying Selection in Humans for Recently Acquired Regulatory Functions”, Science 337 (2012) . Having been involved with the ENCODE pilot, where I contributed to the multiple alignment sub-project, I was curious what comparative genomics insights the full-scale $130 million dollar project revealed. The press releases accompanying the Ward-Kellis paper (e.g. The Nature of Man, The Economist) were suggesting that Ward and Kellis had figured out what makes a human a human; my curiosity was understandably piqued.

But a closer look at the paper, Pachter says, especially a dig into the supplementary material (always a recommended move) shows that the conclusions of the paper were based on what he terms "blatant statistically invalid cherry picking". See, I told you this was a fight. He also accuses Kellis of several other totally unacceptable actions in his published work, the sorts of things that cannot be brushed off as differences in interpretations or methods. He's talking fraud. And he has a larger point about how something like this might persist in the computational biology field (emphasis added):

Manolis Kellis’ behavior is part of a systemic problem in computational biology. The cross-fertilization of ideas between mathematics, statistics, computer science and biology is both an opportunity and a danger. It is not hard to peddle incoherent math to biologists, many of whom are literally math phobic. For example, a number of responses I’ve received to the Feizi et al. blog post have started with comments such as

“I don’t have the expertise to judge the math, …”

Similarly, it isn’t hard to fool mathematicians into believing biological fables. Many mathematicians throughout the country were recently convinced by Jonathan Rothberg to donate samples of their DNA so that they might find out “what makes them a genius”. Such mathematicians, and their colleagues in computer science and statistics, take at face value statements such as “we have figured out what makes a human human”. In the midst of such confusion, it is easy for an enterprising “computational person” to take advantage of the situation, and Kellis has.

You can peddle incoherent math to medicinal chemists, too, if you feel the urge. We don't use much of it day-to-day, although we've internalized more than we tend to realize. But if someone really wants to sell me on some bogus graph theory or topology, they'll almost certainly be able to manage it. I'd at least give them the benefit of the doubt, because I don't have the expertise to call them on it. Were I so minded, I could probably sell them some pretty shaky organic chemistry and pharmacokinetics.

But I am not so minded. Science is large, and we have to be able to trust each other. I could sit down and get myself up to speed on topology (say), if I had to, but the effort required would probably be better spent doing something else. (I'm not ruling out doing math recreationally, just for work). None of us can simultaneously be experts across all our specialities. So if this really is a case of publishing junk because, hey, who'll catch on, right, then it really needs to be dealt with.

If Pachter is off base, though, then he's in for a rough ride of his own. Looking over his posts, my money's on him and not Kellis, but we'll all have a chance to find out. After this very public calling out, there's no other outcome.

Comments (32) + TrackBacks (0) | Category: Biological News | In Silico | The Dark Side | The Scientific Literature

February 10, 2014

A Timeline from Cell

Email This Entry

Posted by Derek

Here's a very interesting feature from Cell - an interactive timeline on the journal's 40th anniversary, highlighting some of the key papers it's published over the years. This installment takes us up into the early 1980s. When you see the 1979 paper that brings the news that tyrosine groups on proteins actually get phosphorylated post-translationally, the 1982 discovery of Ras as involved in human cancer cells, or another 1982 paper showing that telomeres have these weird repeating units on them, you realize how young the sciences of molecular and cell biology really are.

Comments (3) + TrackBacks (0) | Category: Biological News | The Scientific Literature

February 7, 2014

Irisin and Metabolism - A New Target Emerges

Email This Entry

Posted by Derek

Here's something for metabolic disease people to think about: there's a report adding to what we know about the hormone irisin, secreted from muscle tissue, which causes some depots of white adipose tissue to become more like energy-burning brown fat. In the late 1990s, there were efforts all across the drug industry to find beta-3 adrenoceptor agonists to stimulate brown fat for weight loss and dyslipidemia. None of them ever made it through, and thus the arguments about whether they would actually perform as thought were never really settled. One of the points of contention was how much responsive brown adipose tissue adults had available, but I don't recall anyone suspecting that it could be induced. In recent years, though, it's become clear that a number of factors can bring on what's been called "beige fat".

Irisin seems to be released in response to exercise, and is just downstream of the important transcriptional regulator PGC-1a. In fact, release of irisin might be the key to a lot of the beneficial effects of exercise, which would be very much worth knowing. In this study, a stabilized version of it, given iv to rodents, had very strong effects on body weight and glucose tolerance, just the sort of thing a lot of people could use.

One of the very interesting features of this area, from a drug discovery standpoint, is that no one has identified the irisin receptor just yet. Look for headlines on that one pretty soon, though - you can bet that a lot of people are chasing it as we speak.

Update: are humans missing out on this, compared to mice and other species?

Comments (14) + TrackBacks (0) | Category: Biological News | Diabetes and Obesity

February 5, 2014

The Evidence Piles Up: Antioxidant Supplements Are Bad For You

Email This Entry

Posted by Derek

You may remember a study that suggested that antioxidant supplements actually negated the effects of exercise in muscle tissue. (The reactive oxygen species generated are apparently being used by the cells as a signaling mechanism, one that you don't necessarily want to turn off). That was followed by another paper that showed that cells that should be undergoing apoptosis (programmed cell death) could be kept alive by antioxidant treatment. Some might read that and not realize what a bad idea that is - having cells that ignore apoptosis signals is believed to be a common feature in carcinogenesis, and it's not something that you want to promote lightly.

Here are two recent publications that back up these conclusions. The BBC reports on this paper from the Journal of Physiology. It looks like a well-run trial demonstrating that antioxidant therapy (Vitamin C and Vitamin E) does indeed keep muscles from showing adaptation to endurance training. The vitamin-supplemented group reached the same performance levels as the placebo group over the 11-week program, but on a cellular level, they did not show the (beneficial) changes in mitochondria, etc. The authors conclude:

Consequently, vitamin C and E supplementation hampered cellular adaptions in the exercised muscles, and although this was not translated to the performance tests applied in this study, we advocate caution when considering antioxidant supplementation combined with endurance exercise.

Then there's this report in The Scientist, covering this paper in Science Translational Medicine. The title says it all: "Antioxidants Accelerate Lung Cancer Progression in Mice". In this case, it looks like reactive oxygen species should normally be activating p53, but taking antioxidants disrupts this signaling and allows early-stage tumor cells (before their p53 mutates) to grow much more quickly.

So in short, James Watson appears to be right when he says that reactive oxygen species are your friends. This is all rather frustrating when you consider the nonstop advertising for antioxidant supplements and foods, especially for any role in preventing cancer. It looks more and more as if high levels of extra antioxidants can actually give people cancer, or at the very least, help along any cancerous cells that might arise on their own. Evidence for this has been piling up for years now from multiple sources, but if you wander through a grocery or drug store, you'd never have the faintest idea that there could be anything wrong with scarfing up all the antioxidants you possibly can.

The supplement industry pounces on far less compelling data to sell its products. But here are clear indications that a large part of their business is actually harmful, and nothing is heard except the distant sound of crickets. Or maybe those are cash registers. Even the wildly credulous Dr. Oz reversed course and did a program last year on the possibility that antioxidant supplements might be doing more harm than good, although he still seems to be pitching "good" ones versus "bad". Every other pronouncement from that show is immediately bannered all over the health food aisles - what happened to this one?

This shouldn't be taken as a recommendation to go out of your way to avoid taking in antioxidants from food. But going out of your way to add lots of extra Vitamin C, Vitamin E, N-acetylcysteine, etc., to your diet? More and more, that really looks like a bad idea.

Update: from the comments, here's a look at human mortality data, strongly suggesting no benefit whatsoever from antioxidant supplementation (and quite possibly harm from beta-carotene, Vitamin A, and Vitamin E).

Comments (32) + TrackBacks (0) | Category: Biological News | Cancer

February 3, 2014

The Return of Gene Therapy (And More)

Email This Entry

Posted by Derek

The advent of such techniques as CRISPR has people thinking again about gene therapy, and no wonder. This has always been a dream of molecular medicine - you could wipe all sorts of rare diseases off the board by going in and fixing their known genetic defects. Actually doing that, though, has been extremely difficult (and dangerous, since patients have died in the attempt).

But here's a report of embryonic gene modification in cynomolgus monkeys, and if it works in cynos, it's very likely indeed to work in humans. In vitro fertilization plus CRISPR/Cas9 - neither of these, for better or worse, are all that hard to do, and my guess is that we're very close to seeing someone try this - probably not in the US at first, but there are plenty of other jurisdictions. There's a somewhat disturbing angle, though: I don't see much cause (or humanly acceptable cause) for generating gene-knockout human beings, which is what this technique would most easily provide. And for fixing genetic defects, well, you'd have to know that the single-cell embryo actually has the defect, and unless both parents are homozygous, you're not going to be sure (can't sequence the only cell you have, can you?) So the next easiest thing is to add copies of some gene you find desirable, and that will take us quickly into uneasy territory.

A less disturbing route might be to see if the technique can be used to gene-edit the egg and sperm cells before fertilization. Then you've got the possibility of editing germ cell lines in vivo, which really would wipe these diseases out of humanity (except for random mutations), but that will be another one of those hold-your-breath steps, I'd think. It's only a short step from fixing what's wrong to enhancing what's already there - it all depends on where you slide the scale to define "wrong". More fast-twitch muscle fibers, maybe? Restore the ability to make your own vitamin C? Switch the kid's lipoproteins to ApoA1 Milano?

For a real look into the future, combine this with last week's startling report of the generation of stem cells by applying stress to normal tissue samples. This work seems quite solid, and there are apparently anecdotal reports (see the end of this transcript) of some of it being reproduced already. If so, we would appear to be vaulting into a new world of tissue engineering, or at least a new world of being able to find out what's really hard about tissue engineering. ("Just think - horrible, head-scratching experimental tangles that were previously beyond our reach can finally be. . .")

Now have a look at this news about a startup called Editas. They're not saying what techniques they're going to use (my guess is some proprietary variant of CRISPR). But whatever they have, they're going for the brass ring:

(Editas has) ambitious plans to create an entirely new class of drugs based on what it calls “gene editing.” The idea is similar, yet different, from gene therapy: Editas’ goal is to essentially target disorders caused by a singular genetic defect, and using a proprietary in-house technology, create a drug that can “edit” out the abnormality so that it becomes a normal, functional gene—potentially, in a single treatment. . .

. . .Editas, in theory, could use this system to create a drug that could cure any number of genetic diseases via a one-time fix, and be more flexible than gene therapy or other techniques used to cure a disease on the genetic level. But even so, the challenges, just like gene therapy, are significant. Editas has to figure out a way to safely and effectively deliver a gene-editing drug into the body, something Bitterman acknowledges is one of the big hills the company has to climb.

This is all very exciting stuff. But personally, I don't do gene editing, being an organic chemist and a small-molecule therapeutics guy. So what does all this progress mean for someone like me (or for the companies that employ people like me?) Well, for one thing, it is foretelling the eventual doom of what we can call the Genzyme model, treating rare metabolic disorders with few patients but high cost-per-patient. A lot of companies are targeting (or trying to target) that space these days, and no wonder. Their business model is still going to be safe for some years, but honestly, I'd have to think that eventually someone is going to get this gene-editing thing to work. You'd have to assume that it will be harder than it looks; most everything is harder than it looks. And regulatory agencies are not going to be at their speediest when it comes to setting up trials for this kind of thing. But a lot of people with a lot of intelligence, a lot of persistence, and an awful lot of money are going after this, and I have to think that someone is going to succeed. Gene editing, Moderna's mRNA work - we're going to rewrite the genome to suit ourselves, and sooner than later. The reward will be treatments that previous eras would have had to ascribe to divine intervention, a huge step forward in Francis Bacon's program of "the effecting of all things possible".

The result will also be a lot of Schumpeterian "creative destruction" as some existing business models dissolve. And that's fine - I think that business models should always be subject to that selection pressure. As a minor side benefit, these therapies might finally (but probably won't) shut up the legion of people who go on about how drug companies aren't interested in cures, just endlessly profitable treatments. It never seems to occur to them that cures are hard, nor that someone might actually come along with one.

Comments (19) + TrackBacks (0) | Category: Biological News

January 28, 2014

Antivirals: "I Love the Deviousness of It All"

Email This Entry

Posted by Derek

Here's a look at some very interesting research on HIV (and a repurposed compound) that I was unable to comment on here. As for the first line of that post, well, I doubt it, but I like to think of myself as rich in spirit. Or something.

Comments (11) + TrackBacks (0) | Category: Biological News | Infectious Diseases

January 14, 2014

Trouble With Stapled Peptides? A Strong Rebuttal.

Email This Entry

Posted by Derek

Here's a good paper on the design of stapled peptides, with an emphasis on what's been learned about making them cell-penetrant. It's also a specific rebuttal to a paper from Genentech (the Okamoto one referenced below) detailing problems with earlier reported stapled peptides:

In order to maximize the potential for success in designing stapled peptides for basic research and therapeutic development, a series of important considerations must be kept in mind to avoid potential pitfalls. For example, Okamoto et al. recently reported in ACS Chemical Biology that a hydrocarbon-stapled BIM BH3 peptide (BIM SAHB) manifests neither improved binding activity nor cellular penetrance compared to an unmodified BIM BH3 peptide and thereby caution that peptide stapling does not necessarily enhance affinity or biological activity. These negative results underscore an important point about peptide stapling: insertion of any one staple at any one position into any one peptide to address any one target provides no guarantee of stapling success. In this particular case, it is also noteworthy that the Walter and Eliza Hall Institute (WEHI) and Genentech co-authors based their conclusions on a construct that we previously reported was weakened by design to accomplish a specialized NMR study of a transient ligand−protein interaction and was not used in cellular studies because of its relatively low α-helicity, weak binding activity, overall negative charge, and diminished cellular penetrance. Thus, the Okamoto et al. report provides an opportunity to reinforce key learnings regarding the design and application of stapled peptides, and the biochemical and biological activities of discrete BIM SAHB peptides.

You may be able to detect the sound of teeth gritting together in that paragraph. The authors (Loren Walensky of Dana-Farber, and colleagues from Dana-Farber, Albert Einstein, Chicago, and Yale) point out that the Genentech paper took a peptide that's about 21% helical, and used a staple modification that took it up to about 39% helical, which they say is not enough to guarantee anything. They also note that when you apply this technique, you're necessarily altering two amino acids at a minimum (to make them "stapleable"), as well as adding a new piece across the surface of the peptide helix, so these changes have to be taken into account when you compare binding profiles. Some binding partners may be unaffected, some may be enhanced, and some may be wiped out.

It's the Genentech team's report of poor cellular uptake that you can tell is the most irritating feature of their paper to these authors, and from the way they make their points, you can see why:

The authors then applied this BIM SAHBA (aa 145−164) construct in cellular studies and observed no biological activity, leading to the conclusion that “BimSAHB is not inherently cell-permeable”. However, before applying stapled peptides in cellular studies, it is very important to directly measure cellular uptake of fluorophore-labeled SAHBs by a series of approaches, including FACS analysis, confocal microscopy, and fluorescence scan of electrophoresed lysates from treated cells, as we previously reported. Indeed, we did not use the BIM SAHBA (aa 145−164) peptide in cellular studies, specifically because it has relatively low α-helicity, weakened binding activity, and overall negative charge (−2), all of which combine to make this particular BIM SAHB construct a poor candidate for probing cellular activity. As indicated in our 2008 Methods in Enzymology review, “anionic species may require sequence modification (e.g., point mutagenesis, sequence shift) to dispense with negative charge”, a strategy that emerged from our earliest studies in 2004 and 2007 to optimize the cellular penetrance of stapled BID BH3 and p53 peptides for cellular and in vivo analyses and also was applied in our 2010 study involving stapled peptides modeled after the MCL-1 BH3 domain. In our 2011 Current Protocols in Chemical Biology article, we emphasized that “based on our evaluation of many series of stapled peptides, we have observed that their propensity to be taken up by cells derives from a combination of factors, including charge, hydrophobicity, and α-helical structure, with negatively charged and less structured constructs typically requiring modification to achieve cell penetrance. . .

They go on to agree with the Genentech group that the peptide they studied has poor uptake into cells, but the tell-us-something-we-don't-know tone comes through pretty clearly, I'd say. The paper goes on to detail several other publications where these authors worked out the behavior of BIM BH3 stapled peptides, saying that "By assembling our published documentation of the explicit sequence compositions of BIM SAHBs and their distinct properties and scientific applications, as also summarized in Figure 1, we hope to resolve any confusion generated by the Okamoto et al. study".

They do note that the Genentech (Okamoto) paper did use one of their optimized peptides in a supplementary experiment, which shows that they were aware of the different possibilities. That one apparently showed no effects on the viability of mouse fibroblasts, but this new paper says that a closer look (at either their own studies or at the published literature) would have shown them that the cells were actually taking up the peptide, but were relatively resistant to its effects, which actually helps establish something of a therapeutic window.

This is a pretty sharp response, and it'll be interesting to see if the Genentech group has anything to add in their defense. Overall, the impression is that stapled peptides can indeed work, and do have potential as therapeutic agents (and are in the clinic being tested as such), but that they need careful study along the way to make sure of their properties, their pharmacokinetics, and their selectivity. Just as small molecules do, when you get down to it.

Comments (6) + TrackBacks (0) | Category: Biological News | Cancer | Chemical Biology

January 13, 2014

Boost Your NAD And Fix It All?

Email This Entry

Posted by Derek

Here's a paper from a few weeks back that I missed during the holidays: work from the Sinclair labs at Harvard showing a new connection between SIRT1 and aging, this time through a mechanism that no one had appreciated. I'll appreciate, in turn, that that opening sentence is likely to divide its readers into those who will read on and those who will see the words "SIRT1" or "Sinclair" and immediately seek their entertainment elsewhere. I feel for you, but this does look like an interesting paper, and it'll be worthwhile to see what comes of it.

Here's the Harvard press release, which is fairly detailed, in case you don't have access to Cell. The mechanism they're proposing is that as NAD+ levels decline with age, this affects SIRT1 function to the point that it no longer constrains HIF-1. Higher levels of HIF-1, in turn, disrupt pathways between the nucleus and the mitochondria, leading to lower levels of mitochondria-derived proteins, impaired energy generation, and cellular signs of aging.

Very interestingly, these effects were reversed (on a cellular/biomarker level) by one-week treatment of aging mice with NMN (nicotinamide mononucleotide), a precursor to NAD. That's kind of a brute-force approach to the problem, but a team from Washington U. recently showed extremely similar effects in aging diabetic rodents supplemented with NMN, done for exactly the same NAD-deficiency reasons. I would guess that the NMN is flying off the shelves down at the supplement stores, although personally I'll wait for some more in vivo work before I start taking it with my orange juice in the mornings.

Now, whatever you think of sirtuins (and of Sinclair's work with them), this work is definitely not crazy talk. Mitochondria function has long been a good place to look for cellular-level aging, and HIF-1 is an interesting connection as well. As many readers will know, that acronym stands for "hypoxia inducible factor" - the protein was originally seen to be upregulated when cells were put under low-oxygen stress. It's a key regulatory switch for a number of metabolic pathways under those conditions, but there's no obvious reason for it to be getting more active just because you're getting older. Some readers may have encountered it as an oncology target - there are a number of tumors that show abnormal HIF activity. That makes sense, on two levels - the interiors of solid tumors are notoriously oxygen-poor, so that would at least be understandable, but switching on HIF under normal conditions is also bad news. It promotes glycolysis as a metabolic pathway, and stimulates growth factors for angiogenesis. Both of those are fine responses for a normal cell that needs more oxygen, but they're also the behavior of a cancer cell showing unrestrained growth. (And those cells have their tradeoffs, too, such as a possible switch between metastasis and angiogenesis, which might also have a role for HIF).

There's long been speculation about a tradeoff between aging and cellular prevention of carcinogenicity. In this case, though, we might have a mechanism where our interests are on the same side: overactive HIF (under non-hypoxic conditions) might be a feature of both cancer cells and "normally" aging ones. I put that word in quotes because (as an arrogant upstart human) I'm not yet prepared to grant that the processes of aging that we undergo are the ones that we have to undergo. My guess is that there's been very little selection pressure on lifespan, and that what we've been dealt is the usual evolutionary hand of cards: it's a system that works well enough to perpetuate the species and beyond that who cares?

Well, we care. Biochemistry is a wonderful, heartbreakingly intricate system whose details we've nowhere near unraveled, and we often mess it up when we try to do anything to it, anyway. But part of what makes us human is the desire (and now the ability) to mess around with things like this when we think we can benefit. Not looking at the mechanisms of aging seems to me like not looking at the mechanisms of, say, diabetes, or like letting yourself die of a bacterial infection when you could take an antibiotic. Just how arrogant that attitude is, I'm not sure yet. I think we'll eventually get the chance to find out. All this recent NAD work suggests that we might get that chance sooner than later. Me, I'm 51. Speed the plow.

Comments (17) + TrackBacks (0) | Category: Aging and Lifespan | Biological News | Diabetes and Obesity

December 4, 2013

Cancer Cell Line Assays: You Won't Like Hearing This

Email This Entry

Posted by Derek

Here's some work that gets right to the heart of modern drug discovery: how are we supposed to deal with the variety of patients we're trying to treat? And the variety in the diseases themselves? And how does that correlate with our models of disease?

This new paper, a collaboration between eight institutions in the US and Europe, is itself a look at two other recent large efforts. One of these, the Cancer Genome Project, tested 138 anticancer drugs against 727 cell lines. Its authors said at the time (last year) that "By linking drug activity to the functional complexity of cancer genomes, systematic pharmacogenomic profiling in cancer cell lines provides a powerful biomarker discovery platform to guide rational cancer therapeutic strategies". The other study, the Cancer Cell Line Encyclopedia, tested 24 drugs against 1,036 cell lines. That one appeared at about the same time, and its authors said ". . .our results indicate that large, annotated cell-line collections may help to enable preclinical stratification schemata for anticancer agents. The generation of genetic predictions of drug response in the preclinical setting and their incorporation into cancer clinical trial design could speed the emergence of ‘personalized’ therapeutic regimens."

Well, will they? As the latest paper shows, the two earlier efforts overlap to the extent of 15 drugs, 471 cell lines, 64 genes and the expression of 12,153 genes. How well do they match up? Unfortunately, the answer is "Not too well at all". The discrepancies really come out in the drug sensitivity data. The authors tried controlling for all the variables they could think of - cell line origins, dosing protocols, assay readout technologies, methods of estimating IC50s (and/or AUCs), specific mechanistic pathways, and so on. Nothing really helped. The two studies were internally consistent, but their cross-correlation was relentlessly poor.
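To make that kind of comparison concrete, here's a minimal sketch of the basic cross-study check in Python. The file names and column labels are hypothetical (the real CCLE and CGP releases have their own formats and identifiers), but the idea is just what the paper describes: take the cell lines the two studies share, line up each common drug's sensitivity values, and compute a rank correlation.

    # Sketch of a cross-study drug-sensitivity comparison. File names and column
    # labels are hypothetical, for illustration only.
    import pandas as pd
    from scipy.stats import spearmanr

    # Each table: rows = cell lines, columns = drugs, values = IC50 (or AUC)
    study_a = pd.read_csv("study_a_sensitivity.csv", index_col="cell_line")
    study_b = pd.read_csv("study_b_sensitivity.csv", index_col="cell_line")

    shared_lines = study_a.index.intersection(study_b.index)
    shared_drugs = study_a.columns.intersection(study_b.columns)

    for drug in shared_drugs:
        a = study_a.loc[shared_lines, drug]
        b = study_b.loc[shared_lines, drug]
        ok = a.notna() & b.notna()            # keep cell lines measured in both studies
        rho, p = spearmanr(a[ok], b[ok])      # rank correlation, robust to scale differences
        print(f"{drug}: rho = {rho:.2f} (n = {ok.sum()})")

Rank correlation is the natural choice here, since the two studies report drug response on different scales and with different curve-fitting procedures - and the paper's point is that even on those forgiving terms, the agreement was poor.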

It gets worse. The authors tried the same sort of analysis on several drugs and cell lines themselves, and couldn't match their own data to either of the published studies. Their take on the situation:

Our analysis of these three large-scale pharmacogenomic studies points to a fundamental problem in assessment of pharmacological drug response. Although gene expression analysis has long been seen as a source of ‘noisy’ data, extensive work has led to standardized approaches to data collection and analysis and the development of robust platforms for measuring expression levels. This standardization has led to substantially higher quality, more reproducible expression data sets, and this is evident in the CCLE and CGP data where we found excellent correlation between expression profiles in cell lines profiled in both studies.

The poor correlation between drug response phenotypes is troubling and may represent a lack of standardization in experimental assays and data analysis methods. However, there may be other factors driving the discrepancy. As reported by the CGP, there was only a fair correlation (rs < 0.6) between camptothecin IC50 measurements generated at two sites using matched cell line collections and identical experimental protocols. Although this might lead to speculation that the cell lines could be the source of the observed phenotypic differences, this is highly unlikely as the gene expression profiles are well correlated between studies.

Although our analysis has been limited to common cell lines and drugs between studies, it is not unreasonable to assume that the measured pharmacogenomic response for other drugs and cell lines assayed are also questionable. Ultimately, the poor correlation in these published studies presents an obstacle to using the associated resources to build or validate predictive models of drug response. Because there is no clear concordance, predictive models of response developed using data from one study are almost guaranteed to fail when validated on data from another study, and there is no way with available data to determine which study is more accurate. This suggests that users of both data sets should be cautious in their interpretation of results derived from their analyses.

"Cautious" is one way to put it. These are the sorts of testing platforms that drug companies are using to sort out their early-stage compounds and projects, and very large amounts of time and money are riding on those decisions. What if they're gibberish? A number of warning sirens have gone off in the whole biomarker field over the last few years, and this one should be so loud that it can't be ignored. We have a lot of issues to sort out in our cell assays, and I'd advise anyone who thinks that their own data are totally solid to devote some serious thought to the possibility that they're wrong.

Here's a Nature News summary of the paper, if you don't have access. It notes that the authors of the two original studies don't necessarily agree that they conflict! I wonder if that's as much a psychological response as a statistical one. . .

Comments (21) + TrackBacks (0) | Category: Biological News | Cancer | Chemical Biology | Drug Assays

November 20, 2013

Fred Sanger, 1918-2013

Email This Entry

Posted by Derek

Double Nobelist Frederick Sanger has died at 95. He is, of course, the pioneer in both protein and DNA sequencing, and he lived to see these techniques, revised and optimized beyond anyone's imagining, become foundations of modern biology.

When he and his team determined the amino acid sequence of insulin in the 1950s, no one was even sure if proteins had definite sequences or not. That work, though, established the concept for sure, and started off the era of modern protein structural studies, whose importance to biology, medicine, and biochemistry is completely impossible to overstate. The amount of work needed to sequence a protein like insulin was ferocious - this feat was just barely possible given the technology of the day, and that's even with Sanger's own inventions and insights (such as Sanger's reagent) along the way. He received a well-deserved Nobel in 1958 for having accomplished it.

In the 1970s, he made fundamental advances in sequencing DNA, such as the dideoxy chain-termination method, again with effects which really can't be overstated. This led to a share of a second chemistry Nobel in 1980 - he's still the only double laureate in chemistry, and every bit of that recognition was deserved.

Comments (22) + TrackBacks (0) | Category: Biological News | Chemical News

November 12, 2013

It Doesn't Repeat? Who's Interested?

Email This Entry

Posted by Derek

Nature Biotechnology is making it known that they're open to publishing studies with negative results. The occasion is their publication of this paper, which is an attempt to replicate the results of this work, published last year in Cell Research. The original paper, from Chen-Yu Zhang of Nanjing University, reported that micro-RNAs (miRNAs) from ingested plants could be taken up into the circulation of rodents, and (more specifically) that miRNA168a from rice could actually go on to modulate gene expression in the animals themselves. This was a very interesting (and controversial) result, with a lot of implications for human nutrition and for the use of transgenic crops, and it got a lot of press at the time.

But other researchers in the field were not buying these results, and this new paper (from miRagen Therapeutics and Monsanto) reports that they cannot replicate the Nanjing work at all. Here's their rationale for doing the repeat:

The naturally occurring RNA interference (RNAi) response has been extensively reported after feeding double-stranded RNA (dsRNA) in some invertebrates, such as the model organism Caenorhabditis elegans and some agricultural pests (e.g., corn rootworm and cotton bollworm). Yet, despite responsiveness to ingested dsRNA, a recent survey revealed substantial variation in sensitivity to dsRNA in other Caenorhabditis nematodes and other invertebrate species. In addition, despite major efforts in academic and pharmaceutical laboratories to activate the RNA silencing pathway in response to ingested RNA, the phenomenon had not been reported in mammals until a recent publication by Zhang et al. in Cell Research. This report described the uptake of plant-derived microRNAs (miRNA) into the serum, liver and a few other tissues in mice following consumption of rice, as well as apparent gene regulatory activity in the liver. The observation provided a potentially groundbreaking new possibility that RNA-based therapies could be delivered to mammals through oral administration and at the same time opened a discussion on the evolutionary impact of environmental dietary nucleic acid effects across broad phylogenies. A recently reported survey of a large number of animal small RNA datasets from public sources has not revealed evidence for any major plant-derived miRNA accumulation in animal samples. Given the number of questions evoked by these analyses, the limited success with oral RNA delivery for pharmaceutical development, the history of safe consumption for dietary small RNAs and lack of evidence for uptake of plant-derived dietary small RNAs, we felt further evaluation of miRNA uptake and the potential for cross-kingdom gene regulation in animals was warranted to assess the prevalence, impact and robustness of the phenomenon.

They believe that the expression changes that the original team noted in their rodents were due to the dietary changes, not to the presence of rice miRNAs, which they say that they cannot detect. Now, at this point, I'm going to exit the particulars of this debate. I can imagine that there will be a lot of hand-waving and finger-pointing, not least because these latest results come partly from Monsanto. You have only to mention that company's name to an anti-GMO activist, in my experience, to induce a shouting fit, and it's a real puzzle why saying "DeKalb" or "Pioneer Hi-Bred" doesn't do the same. But it's Monsanto who take the heat. Still, here we have a scientific challenge, which can presumably be answered by scientific means: does rice miRNA get into the circulation and have an effect, or not?

What I wanted to highlight, though, is another question that might have occurred to anyone reading the above. Why isn't this new paper in Cell Research, if they published the original one? Well, the authors apparently tried them, only to find their work rejected because (as they were told) "it is a bit hard to publish a paper of which the results are largely negative". That is a silly response, verging on the stupid. The essence of science is reproducibility, and if some potentially important result can't be replicated, then people need to know about it. The original paper had very big implications, and so does this one.

Note that although Cell Research is published out of Shanghai, it's part of the Nature group of journals. If two titles under the same publisher can't work something like this out, what hope is there for the rest of the literature? Congratulations to Nature Biotechnology, though, for being willing to publish, and for explicitly stating that they are open to replication studies of important work. Someone should be.

Comments (20) + TrackBacks (0) | Category: Biological News | The Scientific Literature

October 31, 2013

Rewriting History at the Smithsonian?

Email This Entry

Posted by Derek

Laura Helmuth has a provocative piece up at Slate with the title "Watch Francis Collins Lunge For a Nobel Prize". She points out that the NIH and the Smithsonian are making a big deal out of celebrating the "10th anniversary of the sequencing of the human genome", even though many people seem to recall the big deal being in 2001 - not 2003. Yep, that was when the huge papers came out in Science and Nature with all the charts and foldouts, and the big press conferences and headlines. February of 2001.

So why the "tenth anniversary" stuff this year? Well, 2003 is the year that the NIH team published its more complete version of the genome. That's the anniversary they've chosen to remember. If you start making a big deal out of 2001, you have to start making a big deal out of the race between that group and the Celera group - and you start having to, you know, share credit. Now, I make no claims for Craig Venter's personality or style. But I don't see how it can be denied that he and his group vastly sped up the sequencing of the genome, and arrived at a similar result in far less time than the NIH consortium. The two drafts of the sequence were published simultaneously, even though there seems to have been a lot of elbow-throwing by the NIH folks to keep that from happening.

The NIH has been hosting anniversary events all year, but the most galling anniversary claim is made in an exhibit that opened this year at the Smithsonian’s National Museum of Natural History, the second-most-visited museum in the world. (Dang that Louvre.) It’s called “Genome: Unlocking Life’s Code,” and the promotional materials claim, “It took nearly a decade, three billion dollars, and thousands of scientists to sequence the human genome in 2003.” (Disclosure: I worked for Smithsonian magazine while the exhibition, produced in partnership with the NIH, was being planned, and I consulted very informally with the curators. That is, we had lunch and I warned them they were being played.) To be clear, I’m delighted that the Smithsonian has an exhibit on the human genome. And I’m a huge fan of the NIH. (To its credit, the NIH did host an anniversary symposium in 2011.) But the Smithsonian exhibit enshrines the 2003 date in the country’s museum of record and minimizes the great drama and triumph of 2001.

Celebrating 2003 rather than 2001 as the most important date in the sequencing of the human genome is like celebrating the anniversary of the final Apollo mission rather than the first one to land on the moon. . .

No one is well served by pretending that things happened otherwise, or that 2003 is somehow the date of the "real" human genome. The race was on to publish in 2001, and the headlines were in 2001, and all the proclamations that the genome had at last been sequenced were in February of 2001. If, from some perspectives, that makes for a messier story, oh well. If we stripped all the messy stories out of the history books, what would be left?

Update: Matthew Herper has more on this. He's not as down on the NIH as Helmuth is, but he has some history lessons of his own.

Comments (18) + TrackBacks (0) | Category: Biological News

October 23, 2013

Allosteric Binding Illuminated?

Email This Entry

Posted by Derek

G-protein coupled receptors are one of those areas that I used to think I understood, until I understood them better. These things are very far from being on/off light switches mounted in drywall - they have a lot of different signaling mechanisms, and none of them are simple, either.

One of those that's been known for a long time, but remains quite murky, is allosteric modulation. There are many compounds known that clearly are not binding at the actual ligand site in some types of GPCR, but (equally clearly) can affect their signaling by binding to them somewhere else. So receptors have allosteric sites - but what do they do? And what ligands naturally bind to them (if any)? And by what mechanism does that binding modulate the downstream signaling, and are there effects that we can take advantage of as medicinal chemists? Open questions, all of them.

There's a new paper in Nature that tries to make sense of this, and it does so by what might be the most difficult way possible: through computational modeling. Not all that long ago, this might well have been a fool's errand. But we're learning a lot about the details of GPCR structure from the recent X-ray work, and we're also able to handle a lot more computational load than we used to. That's particularly true if you are David Shaw and the D. E. Shaw company, part of the not-all-that-roomy Venn diagram intersection of quantitative Wall Street traders and computational chemists. Shaw has the resources to put together some serious hardware and software, and a team of people to make sure that the processing units get frequent exercise.
[Figure: an allosteric ligand bound to the M2 receptor]
They're looking at the muscarinic M2 receptor, an old friend of mine for which I produced I-know-not-how-many antagonist candidates about twenty years ago. The allosteric region is up near the surface of the receptor, about 15 Å from the acetylcholine binding site, and it looks like all the compounds that bind up there do so via cation/pi interactions with aromatic residues in the protein. (That holds true for compounds as diverse as gallamine, alcuronium, and strychnine, as well as the one shown in the figure.) This is very much in line with SAR and mutagenesis results over the years, but there are some key differences. Many people had thought that the aromatic groups of the ligands and the receptor must have been interacting, but this doesn't seem to be the case. There also don't seem to be any interactions between the positively charged parts of the ligands and anionic residues on nearby loops of the protein (which is a rationale I remember from my days in the muscarinic field).
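For those who like to see the geometry, here's a back-of-the-envelope sketch of how a cation/pi contact of this sort might be flagged in a simulation snapshot. This is emphatically not the Shaw group's analysis pipeline - the coordinates and the distance/angle cutoffs below are invented purely for illustration.

    # Toy geometric test for a cation-pi contact between a charged ligand atom
    # and an aromatic ring. Cutoffs and coordinates are illustrative only.
    import numpy as np

    def cation_pi_contact(cation_xyz, ring_xyz, max_dist=6.0, max_angle=45.0):
        """True if the cation sits roughly over the ring face, within max_dist angstroms."""
        ring = np.asarray(ring_xyz, dtype=float)
        cation = np.asarray(cation_xyz, dtype=float)
        centroid = ring.mean(axis=0)
        # Ring normal = direction of least variance of the centered ring atoms
        _, _, vt = np.linalg.svd(ring - centroid)
        normal = vt[-1]
        offset = cation - centroid
        dist = np.linalg.norm(offset)
        cos_angle = abs(np.dot(offset, normal)) / dist
        angle = np.degrees(np.arccos(np.clip(cos_angle, 0.0, 1.0)))
        return dist <= max_dist and angle <= max_angle

    # Benzene-like ring in the z = 0 plane, with a cation ~3.8 A above the ring face
    ring = [[0.00, 1.40, 0.0], [1.21, 0.70, 0.0], [1.21, -0.70, 0.0],
            [0.00, -1.40, 0.0], [-1.21, -0.70, 0.0], [-1.21, 0.70, 0.0]]
    print(cation_pi_contact([0.3, 0.1, 3.8], ring))   # True

Run over many trajectory frames and many aromatic residues, this sort of bookkeeping is what turns a long simulation into a statement like "all of these ligands bind via cation/pi interactions".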

The simulations suggest that the two sites are very much in communication with each other. The width and conformation of the extracellular vestibule space can change according to what allosteric ligand occupies it, and this affects whether the effect on regular ligand binding is positive or negative, and to what degree. There can also, in some cases, be direct electrostatic interactions between the two ligands, for the larger allosteric compounds. I was very glad to see that the Shaw group's simulations suggested some experiments: one set with modified ligands, which would be predicted to affect the receptor in defined ways, and another set with point mutations in the receptor, which would be predicted to change the activities of the known ligands. These experiments were carried out by co-authors at Monash University in Australia, and (gratifyingly) seem to confirm the model. Too many computational papers (and to be fair, too many non-computational papers) don't get quite to the "We made some predictions and put our ideas to the test" stage, and I'm glad this one does.

Comments (14) + TrackBacks (0) | Category: Biological News | In Silico | The Central Nervous System

October 7, 2013

The 2013 Medicine/Physiology Nobel: Traffic

Email This Entry

Posted by Derek

This year's Medicine Nobel is one that's been anticipated for some time. James Rothman of Yale, Randy W. Schekman of Berkeley, and Thomas C. Südhof of Stanford are cited for their fundamental discoveries in vesicular trafficking, and I can't imagine anyone complaining that it wasn't deserved. (The only controversy would be thanks, once again, to the "Rule of Three" in Alfred Nobel's will. Richard Scheller of Genentech has won prizes with Südhof and with Rothman for his work in the same field).
[Figure: vesicle transport]
Here's the Nobel Foundation's scientific summary, and as usual, it's a good one. Vesicles are membrane-enclosed bubbles that bud off from cellular compartments and transport cargo to other parts of the cell (or outside it entirely), where they merge with another membrane and release their contents. There's a lot of cellular machinery involved on both the sending and receiving end, and that's what this year's winners worked out.

As it turns out, there are specific proteins (such as the SNAREs) embedded in intracellular membranes that work as an addressing system: "tie up the membrane around this point and send the resulting globule on its way", or "stick here and start the membrane fusion process". This sort of thing is going on constantly inside the cell, and the up-to-the-surface-and-out variation is particularly noticeable in neurons, since they're constantly secreting neurotransmitters into the synapse. That latter process turned out to be very closely tied to signals like local calcium levels, which gives it the ability to be turned on and off quickly.

As the Nobel summary shows, a lot of solid cell biology had to be done to unravel all this. Schekman looked for yeast cells that showed obvious mutations in their vesicle transport and tracked down what proteins had been altered. Rothman started off with a viral infection system that produced a lot of an easily-trackable protein, and once he'd identified others that helped to move it around, he used these as affinity reagents to find what bound to them in turn. This work dovetailed very neatly with the proteins that Schekman's lab had identified, and suggested (as you'd figure) that this machinery was conserved across many living systems. Südhof then extended this work into the neurotransmitter area, discovering the proteins involved in the timing signals that are so critical in those cells, and demonstrating their function by generating mouse knockout models along the way.

The importance of all these processes to living systems can't be overstated. Eukaryotic cells have to be compartmentalized to function; there's too much going on for everything to be in the same stew pot all at the same time. So a system for "mailing" materials between those regions is vital. And in the same way, cells have to communicate with others, releasing packets of signaling molecules under very tight supervision, and that's done through many of the same mechanisms. You can trace the history of our understanding of these things through years of Nobel awards, and there will surely be more.

Comments (15) + TrackBacks (0) | Category: Biological News | General Scientific News

September 18, 2013

The Arguing Over PTC124 and Duchenne Muscular Dystrophy

Email This Entry

Posted by Derek

Does it matter how a drug works, if it works? PTC Therapeutics seems bent on giving everyone an answer to that question, because there sure seem to be a lot of questions about how ataluren (PTC124), their Duchenne Muscular Dystrophy (DMD) therapy, acts. This article at Nature Biotechnology does an excellent job explaining the details.

Premature "stop" codons in the DNA of DMD patients, particularly in the dystrophin gene, are widely thought to be one of the underlying problems in the disease. (The same mechanism is believed to operate in many other genetic-mutation-driven conditions as well. Ataluren is supposed to promote "read-through" of these to allow the needed protein to be produced anyway. That's not a crazy idea at all - there's been a lot of thought about ways to do that, and several aminoglycoside antibiotics have been shown to work through that mechanism. Of that class, gentamicin has been given several tries in the clinic, to ambiguous effect so far.

So screening for a better enhancer of stop codon read-through seems like it's worth a shot for a disease with so few therapeutic options. PTC did this using a firefly luciferase (Fluc) reporter assay. As with any assay, there are plenty of opportunities to get false positives and false negatives. Firefly luciferase, as a readout, suffers from instability under some conditions. And if its signal is going to wink out on its own, then a compound that stabilizes it will look like a hit in your assay system. Unfortunately, there's no particular market in humans for a compound that just stabilizes firefly luciferase.
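To put a rough number on how that artifact can look, here's a toy calculation. The incubation time and rate constants are invented, and real assay kinetics are messier, but it shows how merely slowing the reporter's decay reads out as apparent activation with zero effect on read-through.

    # Toy illustration of a luciferase-stabilization artifact: if active FLuc
    # decays during the assay, a compound that merely slows that decay inflates
    # the final signal relative to control. All numbers are invented.
    import math

    t = 4.0               # hours of incubation
    k_control = 0.5       # per-hour loss of active luciferase, untreated
    k_stabilized = 0.1    # slower loss when the enzyme is protected by a bound compound

    signal_control = math.exp(-k_control * t)
    signal_stabilized = math.exp(-k_stabilized * t)
    print(f"apparent fold-activation: {signal_stabilized / signal_control:.1f}x")   # ~5x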

That's where the argument is with ataluren. Papers have appeared from a team at the NIH detailing trouble with the FLuc readout. That second paper (open access) goes into great detail about the mechanism, and it's an interesting one. FLuc apparently catalyzes a reaction between PTC124 and ATP, to give a new mixed anhydride adduct that is a powerful inhibitor of the enzyme. The enzyme's normal mechanism involves a reaction between luciferin and ATP, and since luciferin actually looks like something you'd get in a discount small-molecule screening collection, you have to be alert to something like this happening. The inhibitor-FLuc complex keeps the enzyme from degrading, but the new PTC124-derived inhibitor itself is degraded by Coenzyme A - which is present in the assay mixture, too. The end result is more luciferase signal than you expect versus the controls, which looks like a hit from your reporter gene system - but isn't. PTC's scientists have replied to some of these criticisms here.

Just to add more logs to the fire, other groups have reported that PTC124 seems to be effective in restoring read-through for similar nonsense mutations in other genes entirely. But now there's another new paper, this one from a different group at Dundee, claiming that ataluren fails to work through its putative mechanism under a variety of conditions, which would seem to call these results into question as well. Gentamicin works for them, but not PTC124. Here's the new paper's take-away:

In 2007 a drug was developed called PTC124 (latterly known as Ataluren), which was reported to help the ribosome skip over the premature stop, restore production of functional protein, and thereby potentially treat these genetic diseases. In 2009, however, questions were raised about the initial discovery of this drug; PTC124 was shown to interfere with the assay used in its discovery in a way that might be mistaken for genuine activity. As doubts regarding PTC124's efficacy remain unresolved, here we conducted a thorough and systematic investigation of the proposed mechanism of action of PTC124 in a wide array of cell-based assays. We found no evidence of such translational read-through activity for PTC124, suggesting that its development may indeed have been a consequence of the choice of assay used in the drug discovery process.

Now this is a mess, and it's complicated still more by the not-so-impressive performance of PTC124 in the clinic. Here's the Nature Biotechnology article's summary:

In 2008, PTC secured an upfront payment of $100 million from Genzyme (now part of Paris-based Sanofi) in return for rights to the product outside the US and Canada. But the deal was terminated following lackluster data from a phase 2b trial in DMD. Subsequently, a phase 3 trial in cystic fibrosis also failed to reach statistical significance. Because the drug showed signs of efficacy in each indication, however, PTC pressed ahead. A phase 3 trial in DMD is now underway, and a second phase 3 trial in cystic fibrosis will commence shortly.

It should be noted that the read-through drug space has other players in it as well. Prosensa/GSK and Sarepta are in the clinic with competing antisense oligonucleotides targeting a particular exon/mutation combination, although this would probably take them into other subpopulations of DMD patients than PTC is looking to treat.

If they were to see real efficacy, PTC could have the last laugh here. To get back to the first paragraph of this post, if a compound works, well, the big argument has just been won. The company has in vivo data to show that some gene function is being restored, as well they should (you don't advance a compound to the clinic just on the basis of in vitro assay numbers, no matter how they look). It could be that the compound is a false positive in the original assay but manages to work through some other mechanism, although no one knows what that might be.

But as you can see, opinion is very much divided about whether PTC124 works at all in the real clinical world. If it doesn't, then the various groups detailing trouble with the early assays will have a good case that this compound never should have gotten as far as it did.

Comments (26) + TrackBacks (0) | Category: Biological News | Business and Markets | Drug Assays | Drug Development

September 6, 2013

More on Warp Drive Bio and Cryptic Natural Products

Email This Entry

Posted by Derek

At C&E News, Lisa Jarvis has an excellent writeup on Warp Drive Bio and the whole idea of "cryptic natural products" (last blogged on here). As the piece makes clear, not everyone is even buying into the idea that there's a lot of useful-but-little-expressed natural product chemical matter out there, but since there could be, I'm glad that someone's looking.

Yet not everyone looked at the abundant gene clusters and saw a sea of drug candidates. The biosynthetic pathways defined by these genes are turned off most of the time. That inactivity caused skeptics to wonder how genome miners could be so sure they carried the recipes for medicinally important molecules.

Researchers pursuing genomics-based natural products say the answer lies in evolution and the environment. “These pathways are huge,” says Gregory L. Challis, a professor of chemical biology at the University of Warwick, in Coventry, England. With secondary metabolites encoded by as many as 150 kilobases of DNA, a bacterium would have to expend enormous amounts of energy to make each one.

Because they use so much energy, these pathways are turned on only when absolutely necessary. Traditional “grind and find” natural products discovery means taking bacteria out of their natural habitat—the complex communities where they communicate and compete for resources—and growing each strain in isolation. In this artificial setting, bacteria have no reason to expend energy to make anything other than what they need to survive.

“I absolutely, firmly believe that these compounds have a strong role to play in the environment in which these organisms live,” says Challis, who also continues to pursue traditional approaches to natural products. “Of course, not all bioactivities will be relevant to human medicine and agriculture, but many of them will be.”

The article also mentions that Novartis is working in this area, which I hadn't realized, as well as a couple of nonprofit groups. If there's something there, at any kind of reasonable hit rate, presumably one of these teams will find it?

Comments (7) + TrackBacks (0) | Category: Biological News | Natural Products

September 5, 2013

CRISPR Takes Off

Email This Entry

Posted by Derek

If you haven't heard of CRISPR, you must not have to mess around with gene expression. And not everyone does, true, but we sure do count on that sort of thing in biomedical research. And this is a very useful new technique to do it:

In 2007, scientists from Danisco, a Copenhagen-based food ingredient company now owned by DuPont, found a way to boost the phage defenses of this workhorse microbe. They exposed the bacterium to a phage and showed that this essentially vaccinated it against that virus (Science, 23 March 2007, p. 1650). The trick has enabled DuPont to create heartier bacterial strains for food production. It also revealed something fundamental: Bacteria have a kind of adaptive immune system, which enables them to fight off repeated attacks by specific phages.

That immune system has suddenly become important for more than food scientists and microbiologists, because of a valuable feature: It takes aim at specific DNA sequences. In January, four research teams reported harnessing the system, called CRISPR for peculiar features in the DNA of bacteria that deploy it, to target the destruction of specific genes in human cells. And in the following 8 months, various groups have used it to delete, add, activate, or suppress targeted genes in human cells, mice, rats, zebrafish, bacteria, fruit flies, yeast, nematodes, and crops, demonstrating broad utility for the technique. Biologists had recently developed several new ways to precisely manipulate genes, but CRISPR's "efficiency and ease of use trumps just about anything," says George Church of Harvard University, whose lab was among the first to show that the technique worked in human cells.

CRISPR stands for Clustered Regularly Interspaced Short Palindromic Repeats, a DNA motif that turns up a lot in bacteria (and, interestingly, is almost universal in the Archaea). There are a number of genes associated with these short repeated spacers, which vary some across different types of bacteria, but all of them seem to be involved in the same sorts of processes. Some of the expressed proteins seem to work by chopping up infecting DNA sequences into chunks of about 30 base pairs, and these get inserted into the bacterial DNA near the start of the CRISPR region. RNAs get read off from them, and some of the other associated proteins are apparently there to process these RNAs into a form where they (and other associated proteins) can help to silence the corresponding DNA and RNA from an infectious agent. There are, as you can tell, still quite a few details to be worked out. Other bacteria may have some further elaborations that we haven't even come across yet. But the system appears to be widely used in nature, and quite robust.
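
For the computationally inclined, the core bookkeeping is easy to caricature: a spacer is a short sequence snapshot of a past invader, and matching it against incoming DNA tells the cell what to silence. Here's a toy Python sketch of just that step - the sequences are invented, and real spacer handling involves repeats, PAM sites, and a lot of enzymology that this ignores:

```python
# Toy spacer-matching sketch (invented sequences; not a real CRISPR pipeline).
def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

phage_genome = ("ATGGCTAGC"
                "TTCAGGACCTTGACTGATCGTACGATCGGA"   # stretch sampled by a past infection
                "TCCATCGATCGTACGTAGCTAGCTAGGATCC")

spacers = [
    "TTCAGGACCTTGACTGATCGTACGATCGGA",   # ~30 bp spacer acquired from this phage
    "GGGGTTTTCCCCAAAAGGGGTTTTCCCCAA",   # spacer from some other, unseen invader
]

for i, spacer in enumerate(spacers, 1):
    hit = spacer in phage_genome or revcomp(spacer) in phage_genome
    print(f"spacer {i}: {'match - this DNA gets targeted' if hit else 'no match'}")
```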

The short palindromic repeats were first noticed back in 1987, but it wasn't until 2005 that it was appreciated that many of the sequences matched those found in bacteriophages. That was clearly no coincidence, and the natural speculation was that these bits were actually intended to be the front end for some sort of bacterial variant of RNA interference. So it has proven, and pretty rapidly, too. The Danisco team reported further results in 2007, although as that Science article points out, they now say that they didn't come close to appreciating the technique's full potential. By 2011 the details of the Cas9-based CRISPR system were becoming clear. Just last year, the key proof-of-principle work was published, showing that an engineered "guide RNA" was enough to target specific DNA sequences with excellent specificity. And in February, the Church group at Harvard published their work on a wide range of genetic targets across several human cell lines, simultaneously with another multicenter team (Harvard, Broad and McGovern Institutes, Columbia, Tsinghua, MIT, Rockefeller) that reported similar results across a range of mammalian cells.

Work in this field since those far-off days of last February has done nothing but accelerate. Here's an Oxford group (and one from Wisconsin) applying CRISPR all over the Drosophila genome. Here's Church's group doing it to yeast. There are several zebrafish papers that have appeared so far this year, and here's the Whitehead/MIT folks applying it to mouse zygotes, in a technique that they've already refined. Methods for enhancing expression as well as shutting it down are already being reported as well.

So we could be looking at a lot of things here. Modifying cell lines has just gotten easier, which is good news. It looks like genetically altered rodent models could be produced much more quickly and selectively, which would be welcome, and there seems no reason not to apply this to all sorts of other model organisms as well. That takes us from the small stuff (like the fruit flies and yeast) all the way up past mice, and then, well, you have to wonder about gene therapy in humans. Unless I'm very much mistaken, people are already forming companies aiming at just this sort of thing. Outside of direct medical applications, CRISPR also looks like it's working in important plant species, leading to a much faster and cleaner way to genetically modify crops of all kinds. If this continues to work out at the pace it has already, the Nobel people will have the problem of figuring out how to award the eventual prize. Or prizes.

Comments (10) + TrackBacks (0) | Category: Biological News

August 20, 2013

GPCRs As Drug Targets: Nowhere Near Played Out

Email This Entry

Posted by Derek

Here's a paper that asks whether GPCRs are still a source of new targets. As you might guess, the answer is "Yes, indeed". (Here's a background post on this area from a few years ago, and here's my most recent look at the area).

It's been a famously productive field, but the distribution is pretty skewed:

From a total of 1479 underlying targets for the action of 1663 drugs, 109 (7%) were GPCRs or GPCR related (e.g., receptor-activity modifying proteins or RAMPs). This immediately reveals an issue: 26% of drugs target GPCRs, but they account for only 7% of the underlying targets. The results are heavily skewed by certain receptors that have far more than their “fair share” of drugs. The most commonly targeted receptors are as follows: histamine H1 (77 occurrences), α1A adrenergic (73), muscarinic M1 (72), dopamine D2 (62), muscarinic M2 (60), 5HT2a (59), α2A adrenergic (56), and muscarinic M3 (55)—notably, these are all aminergic GPCRs. Even the calculation that the available drugs exert their effects via 109 GPCR or GPCR-related targets is almost certainly an overestimate since it includes a fair proportion where there are only a very small number of active agents, and they all have a pharmacological action that is “unknown”; in truth, we have probably yet to discover an agent with a compelling activity at the target in question, let alone one with exactly the right pharmacology and appropriately tuned pharmacokinetics (PK), pharmacodynamics (PD), and selectivity to give clinical efficacy for our disease of choice. A prime example of this would be the eight metabotropic (mGluR) receptors, many of which have only been “drugged” according to this analysis due to the availability of the endogenous ligand (L-glutamic acid) as an approved nutraceutical. There are also a considerable number of targets for which the only known agents are peptides, rather than small molecules. . .

Of course, since we're dealing with cell-surface receptors, peptides (and full-sized proteins) have a better shot at becoming drugs in this space.

Of the 437 drugs found to target GPCRs, 21 are classified as “biotech” (i.e., biopharmaceuticals) with the rest as “small molecules.” However, that definition seems rather generous given that the molecular weight (MW) of the “small molecules” extends as high as 1623. Using a fairly modest threshold of MW <600 suggests that ~387 are more truly small molecules and ~50 are non–small molecules, being roughly an 80:20 split. Pursuing the 20%, while not being novel targets/mechanisms, could still provide important new oral/small-molecule medications with the comfort of excellent existing clinical validation. . .
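
Just to make that arithmetic concrete, the MW cutoff is the kind of one-liner anyone can run over a drug list. The molecular weights below are invented, but the move is the same:

```python
# Hypothetical drug list with made-up molecular weights.
drugs = {"drug_a": 342.4, "drug_b": 512.7, "drug_c": 1623.0,
         "drug_d": 287.3, "drug_e": 998.5}

small = {name for name, mw in drugs.items() if mw < 600}   # "truly" small molecules
large = set(drugs) - small
print(f"{len(small)} below MW 600, {len(large)} above "
      f"({100 * len(small) // len(drugs)}:{100 * len(large) // len(drugs)} split)")
```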

The paper goes on to mention many other possible modes for drug action - allosteric modulators, GPCR homo- and heterodimerization, other GPCR-protein interactions, inverse agonists and the like, alternative signaling pathways other than the canonical G-proteins, and more. It's safe to say that all this will keep us busy for a long time to come, although working up reliable assays for some of these things is no small matter.

Comments (4) + TrackBacks (0) | Category: Biological News | Drug Assays

August 16, 2013

An HIV Structure Breakthrough? Or "Complete Rubbish"?

Email This Entry

Posted by Derek

Structural biology needs no introduction for people doing drug discovery. This wasn't always so. Drugs were discovered back in the days when people used to argue about whether those "receptor" thingies were real objects (as opposed to useful conceptual shorthand), and before anyone had any idea of what an enzyme's active site might look like. And even today, there are targets, and whole classes of targets, for which we can't get enough structural information to help us out much.

But when you can get it, structure can be a wonderful thing. X-ray crystallography of proteins and protein-ligand complexes has revealed so much useful information that it's hard to know where to start. It's not a magic wand - you can't look at an empty binding site and just design something right at your desk that'll be a potent ligand right off the bat. And you can't look at a series of ligand-bound structures and say which one is the most potent, not in most situations, anyway. But you still learn things from X-ray structures that you could never have known otherwise.

It's not the only game in town, either. NMR structures are very useful, although the X-ray ones can be easier to get, especially in these days of automated synchrotron beamlines and powerful number-crunching. But what if your protein doesn't crystallize? And what if there are things happening in solution that you'd never pick up on from the crystallized form? You're not going to watch your protein rearrange into a new ligand-bound conformation with X-ray crystallography, that's for sure. No, even though NMR structures can be a pain to get, and have to be carefully interpreted, they'll also show you things you'd never have seen.

And there are more exotic methods. Earlier this summer, there was a startling report of a structure of the HIV surface proteins gp120 and gp41 obtained through cryogenic electron microscopy. This is a very important and very challenging field to work in. What you've got there is a membrane-bound protein-protein interaction, which is just the sort of thing that the other major structure-determination techniques can't handle well. At the same time, though, the number of important proteins involved in this sort of thing is almost beyond listing. Cryo-EM, since it observes the native proteins in their natural environment, without tags or stains, has a lot of potential, but it's been extremely hard to get the sort of resolution with it that's needed on such targets.

Joseph Sodroski's group at Harvard, longtime workers in this area, published their 6-angstrom-resolution structure of the protein complex in PNAS. But according to this new article in Science, the work has been an absolute lightning rod ever since it appeared. Many other structural biologists think that the paper is so flawed that it never should have seen print. No, I'm not exaggerating:

Several respected HIV/AIDS researchers are wowed by the work. But others—structural biologists in particular—assert that the paper is too good to be true and is more likely fantasy than fantastic. "That paper is complete rubbish," charges Richard Henderson, an electron microscopy pioneer at the MRC Laboratory of Molecular Biology in Cambridge, U.K. "It has no redeeming features whatsoever."

. . .Most of the structural biologists and HIV/AIDS researchers Science spoke with, including several reviewers, did not want to speak on the record because of their close relations with Sodroski or fear that they'd be seen as competitors griping—and some indeed are competitors. Two main criticisms emerged. Structural biologists are convinced that Sodroski's group, for technical reasons, could not have obtained a 6-Å resolution structure with the type of microscope they used. The second concern is even more disturbing: They solved the structure of a phantom molecule, not the trimer.

Cryo-EM is an art form. You have to freeze your samples in an aqueous system, but without making ice. The crystals of normal ice formation will do unsightly things to biological samples, on both the macro and micro levels, so you have to form "vitreous ice", a glassy amorphous form of frozen water, which is odd enough that until the 1980s many people considered it impossible. Once you've got your protein particles in this matrix, though, you can't just blast away at full power with your electron beam, because that will also tear things up. You have to take a huge number of runs at lower power, and analyze them through statistical techniques. The Sodroski HIV structure, for example, is the product of 670,000 single-particle images.

But its critics say that it's also the product of wishful thinking:

The essential problem, they contend, is that Sodroski and Mao "aligned" their trimers to lower-resolution images published before, aiming to refine what was known. This is a popular cryo-EM technique but requires convincing evidence that the particles are there in the first place and rigorous tests to ensure that any improvements are real and not the result of simply finding a spurious agreement with random noise. "They should have done lots of controls that they didn't do," (Sriram) Subramaniam asserts. In an oft-cited experiment that aligns 1000 computer-generated images of white noise to a picture of Albert Einstein sticking out his tongue, the resulting image still clearly shows the famous physicist. "You get a beautiful picture of Albert Einstein out of nothing," Henderson says. "That's exactly what Sodroski and Mao have done. They've taken a previously published structure and put atoms in and gone down into a hole." Sodroski and Mao declined to address specific criticisms about their studies.

Well, they decline to answer them in response to a news item in Science. They've indicated a willingness to take on all comers in the peer-reviewed literature, but otherwise, in print, they're doing the we-stand-by-our-results-no-comment thing. Sodroski himself, with his level of experience in the field, seems ready to defend this paper vigorously, but there seem to be plenty of others willing to attack. We'll have to see how this plays out in the coming months - I'll update as things develop.
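
That Einstein-from-white-noise demonstration, by the way, is easy to reproduce, and it's worth seeing how strong the effect is. Here's a toy numpy sketch - a made-up square "template" instead of Einstein, translations only, nothing like a real cryo-EM refinement - showing that aligning and averaging pure noise against a reference hands you the reference right back:

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "template": a bright square on a dark background.
template = np.zeros((32, 32))
template[12:20, 12:20] = 1.0

def best_alignment(img, ref, max_shift=6):
    """Return the circular shift of img that best correlates with ref."""
    best, best_score = img, -np.inf
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dx, axis=0), dy, axis=1)
            score = float((shifted * ref).sum())
            if score > best_score:
                best, best_score = shifted, score
    return best

# Average many pure-noise images after "aligning" each one to the template.
n_images = 1000
avg = np.zeros_like(template)
for _ in range(n_images):
    avg += best_alignment(rng.standard_normal(template.shape), template)
avg /= n_images

# Every input was noise, yet the average correlates clearly with the
# template - the alignment step alone imprints the reference on the result.
corr = np.corrcoef(avg.ravel(), template.ravel())[0, 1]
print(f"correlation of noise-only average with template: {corr:.2f}")
```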

Comments (34) + TrackBacks (0) | Category: Analytical Chemistry | Biological News | In Silico | Infectious Diseases

August 14, 2013

Another T-Cell Advance Against Cancer

Email This Entry

Posted by Derek

The technique of using engineered T cells against cancerous cells may be about to explode even more than it has already. One of the hardest parts of getting this process scaled up has been the need to extract each patient's own T cells and reprogram them. But in a new report in Nature Biotechnology, a team at Sloan-Kettering shows that they can raise cells of this type from stem cells, which were themselves derived from T lymphocytes from another healthy donor. As The Scientist puts it:

Sadelain’s team isolated T cells from the peripheral blood of a healthy female donor and reprogrammed them into stem cells. The researchers then used disabled retroviruses to transfer to the stem cells the gene that codes for a chimeric antigen receptor (CAR) for the antigen CD19, a protein expressed by a different type of immune cell—B cells—that can turn malignant in some types of cancer, such as leukemia. The receptor for CD19 allows the T cells to track down and kill the rogue B cells. Finally, the researchers induced the CAR-modified stem cells to re-acquire many of their original T cell properties, and then replicated the cells 1,000-fold.

“By combining the CAR technology with the iPS technology, we can make T cells that recognize X, Y, or Z,” said Sadelain. “There’s flexibility here for redirecting their specificity towards anything that you want.”

You'll note the qualifications in that extract. The cells that are produced in this manner aren't quite the same as the ones you'd get by re-engineering a person's own T-cells. We may have to call them "T-like" cells or something, but in a mouse lymphoma model, they most certainly seem to do the job that you want them to. It's going to be harder to get these to the point of trying them out in humans, since they're a new variety of cell entirely, but (on the other hand) the patients you'd try this in are not long for this world and are, in many cases, understandably willing to try whatever might work.

Time to pull the camera back a bit. It's early yet, but these engineered T-cell approaches are very impressive. This work, if it holds up, will make them a great deal easier to implement. No doubt, at this moment, there are Great Specific Antigen Searches underway to see what other varieties of cancer might respond to this technique. And this, remember, is not the only immunological approach that's showing promise, although it must be the most dramatic.

So. . .we have to consider a real possibility that the whole cancer-therapy landscape could be reshaped over the next decade or two. Immunology has the potential to disrupt the whole field, which is fine by me, since it could certainly use some disruption, given the state of the art. Will we look back, though, and see an era where small-molecule therapies gave people an extra month here, an extra month there, followed by one where harnessing the immune system meant sweeping many forms of cancer off the board entirely? Speed the day, I'd say - but if you're working on those small-molecule therapies, you should keep up with these developments. It's not time to consider another line of research, not yet. But the chances of having to do this, at some point, are not zero. Not any more.

Comments (20) + TrackBacks (0) | Category: Biological News | Cancer

August 1, 2013

Knockout Mice, In Detail

Email This Entry

Posted by Derek

Everyone in biomedical research is familiar with "knockout" mice, animals that have had a particular gene silenced during their development. This can be a powerful way of figuring out what that gene's product actually does, although there are always other factors at work. The biggest one is how other proteins and pathways can sometimes compensate for the loss, a process that often doesn't have a chance to kick in when you come right into an adult animal and block a pathway through other means. In some other cases, a gene knockout turns out to be embryonic-lethal, but can be tolerated in an adult animal, once some key development pathway has run its course.

There have been a lot of knockout mice over the years. Targeted genetic studies have described functions for thousands of mouse genes. But when you think about it, there have surely been many of these whose phenotypes have not really been noticed or studied in the right amount of detail. Effects can be subtle, and there's an awful lot to look for. That's the motivation behind the Sanger Institute Mouse Genetics Project, who have a new paper out here. They're part of the even larger International Mouse Phenotyping Consortium, which is co-ordinating efforts like this across several sites.

Update: here's an overview of the work being done. For generating knockout animals, you have the International Knockout Mouse Consortium at an international level - the IMPC, mentioned above, is the phenotyping arm of the effort. In the US, the NIH-funded Knockout Mouse Project (KOMP) is a major effort, and in Europe you have the European Conditional Mouse Mutagenesis Program (EUCOMM), which has evolved into EUCOMMTOOLS. Then in Canada you have NorCOMM, and TIGM at Texas A&M.

I like the way that last link's abstract starts: "Nearly 10 years after the completion of the human genome project, and the report of a complete sequence of the mouse genome, it is salutary to reflect that we remain remarkably ignorant of the function of most genes in the mammalian genome." That's absolutely right, and these mouse efforts are an attempt to address that directly. The latest paper describes the viability of 489 mutants, and a more complete analysis of 250 of them - still only a tiny fraction of what's out there, but enough to give you a look behind the curtain.

29% of the mutants were lethal and 13% were subviable, producing only a fraction of the expected number of embryos. That's pretty much in line with earlier estimates, so that figure will probably hold up. As for fertility, a bit over 5% of the homozygous crosses were infertile - and in almost all cases, the trouble was in the males. (All the heterozygotes could produce offspring).
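
For what it's worth, calling a line "subviable" comes down to counting genotypes from het-by-het crosses against the expected 1:2:1 Mendelian ratio. Here's a quick sketch of that sort of check - the litter counts are made up, and this isn't necessarily the consortium's exact statistical criterion:

```python
from scipy.stats import chisquare

# Hypothetical genotype counts from het x het crosses for one mutant line.
observed = [30, 52, 8]                                  # +/+, +/-, -/- pups
expected = [sum(observed) * f for f in (0.25, 0.50, 0.25)]

stat, p = chisquare(observed, f_exp=expected)
print(f"chi-square p = {p:.2g}; "
      f"homozygotes seen at {observed[2] / expected[2]:.0%} of expectation")
```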

The full phenotypic analysis on the first 250 mutants is quite interesting (and can be found at the Sanger Mouse Portal site). Most of these are genes with some known function, but 34 of them have not had anything assigned to them until now. These animals were assessed through blood chemistry, gene expression profiling, dietary and infectious disease challenges, behavioral tests, necropsy and histopathology, etc. Among the most common changes were body weight and fat/lean ratios (mostly on the underweight side), but there were many others. (That body weight observation is, in most cases, almost certainly not a primary effect. Reproductive and musculoskeletal defects were the most common categories that were likely to be front-line problems).

What stands out is that the unassigned genes seemed to produce noticeable phenotypic changes at the same rate as the known ones, and that even the studied genes turned up effects that hadn't been realized. As the paper says, these results "reveal our collective inability to predict phenotypes based on sequence or expression pattern alone." About 35% of the mutants (of all kinds) showed no detectable phenotypic changes, so these are either nonessential genes or had phenotypes that escaped the screens. The team looked at heterozygotes in cases where the homozygotes were lethal or nearly so (90 lines so far), and haploinsufficiency (problems due to only one working copy of a gene) was a common effect, seen in over 40% of those mutants.

Genes with some closely related paralog were found to be less likely to be essential, but those producing a protein known to be part of a protein complex were more likely to be so. Both of those results make sense. But a big question is how well these results will translate to understanding of human disease, and that's still an open issue. Clearly, many things will be directly applicable, but some care will be needed:

The data set reported here includes 59 orthologs of known human disease genes. We compared our data with human disease features described in OMIM. Approximately half (27) of these mutants exhibited phenotypes that were broadly consistent with the human phenotype. However, many additional phenotypes were detected in the mouse mutants suggesting additional features that might also occur in patients that have hitherto not been reported. Interestingly, a large proportion of genes underlying recessive disorders in humans are homozygous lethal in mice (17 of 37 genes), possibly because the human mutations are not as disruptive as the mouse alleles.

As this work goes on, we're going to learn a lot about mammalian genetics that has been hidden. The search for similar effects in humans will be going on simultaneously, informed by the mouse results. Doing all this is going to keep a lot of people busy for a long time - but understanding what comes out is going to be an even longer-term occupation. Something to look forward to!

Comments (14) + TrackBacks (0) | Category: Biological News

July 18, 2013

The Junk DNA Wars Get Hotter

Email This Entry

Posted by Derek

Thanks to an alert reader, I was put on to this paper in PNAS. It's from a team at Washington U. in St. Louis, and my fellow Cardinals fans are definitely stirring things up in the debate over "junk DNA" function and the ENCODE results. (The most recent post here on the debate covered the "It's functional" point of view - for links to previous posts on some vigorous ENCODE-bashing publications, see here).

This new paper, blogged about here at Homologus and here by one of its authors, Mike White, is an attempt to run a null-hypothesis experiment on transcription factor function. There are a lot of transcription factor recognition sequences in the genome. They're short DNA sequences that serve as flags for the whole transcription machinery to land and start assembling at a particular spot. Transcription factors themselves are the proteins that do the primary recognition of these sequences, and that gives them plenty to do. With so many DNA motifs out there (and so many near-misses), some of their apparent targets are important and real and some of them may well be noise. TFs have their work cut out.

What this new paper did was look at a particular transcription factor, Crx. They took a set of 1,300 sequences that are (functionally) known to bind it - 865 of them with the canonical recognition motifs and 433 of them that are known to bind, but don't have the traditional motif. They compared that set to 3,000 control sequences, including 865 of them "specifically chosen to match the Crx motif content and chromosomal distribution" as compared to that first set. They also included a set of single-point mutations of the known binding sequences, along with sets of scrambled versions of both the known binding regions and the matched controls above, with dinucleotide ratios held constant - random but similar.
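
Those scrambled-but-dinucleotide-matched controls are worth pausing on, since they're the null hypothesis doing the heavy lifting. Here's one simple way to generate that sort of sequence - a first-order Markov resample, which keeps dinucleotide composition roughly constant. The input sequence is invented, and the paper's actual shuffling procedure may well differ:

```python
import random
from collections import defaultdict

def markov_scramble(seq, seed=0):
    """Generate a random sequence with roughly the same dinucleotide
    composition as seq, by resampling from its first-order Markov model."""
    rng = random.Random(seed)
    transitions = defaultdict(list)
    for a, b in zip(seq, seq[1:]):
        transitions[a].append(b)
    out = [seq[0]]
    for _ in range(len(seq) - 1):
        followers = transitions.get(out[-1]) or list(seq)   # fall back on a dead end
        out.append(rng.choice(followers))
    return "".join(out)

# An invented Crx-motif-ish stretch, just to have something to scramble.
element = "TAATCCCATTAGGCTAATCCGATTAGCTTAATCAGATTAAGGTAATCC"
print(markov_scramble(element))
```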

What they found, first, was that the known binding elements do indeed drive transcription, as advertised, while the controls don't. But the ENCODE camp has a broader definition of function than just this, and here's where the dinucleotides hit the fan. When they looked at gene repression activity, they found that the 865 binders and the 865 matched controls (with Crx recognition elements, but in unbound regions of the genome) both showed similar amounts of activity. As the paper says, "Overall, our results show that both bound and unbound Crx motifs, removed from their genomic context, can produce repression, whereas only bound regions can strongly activate".

So far, so good, and nothing that the ENCODE people might disagree with - I mean, there you are, unbound regions of the genome showing functional behavior and all. But the problem is, most of the 1,300 random sequences also showed regulatory effects:

Our results demonstrate the importance of comparing the activity of candidate CREs (cis-regulatory elements - DBL) against distributions of control sequences, as well as the value of using multiple approaches to assess the function of CREs. Although scrambled DNA elements are unlikely to drive very strong levels of activation or repression, such sequences can produce distinct levels of enhancer activity within an intermediate range that overlaps with the activity of many functional sequences. Thus, function cannot be assessed solely by applying a threshold level of activity; additional approaches to characterize function are necessary, such as mutagenesis of TF binding sites.

In other words, to put it more bluntly than the paper does, one could generate ENCODE-like levels of functionality with nothing but random DNA. These results will not calm anyone down, but it's not time to calm down just yet. There are some important issues to be decided here - from theoretical biology all the way down to how many drug targets we can expect to have. I look forward to the responses to this work. Responses will most definitely be forthcoming.

Comments (12) + TrackBacks (0) | Category: Biological News

July 11, 2013

More From Warp Drive Bio (And Less From Aileron?)

Email This Entry

Posted by Derek

There hasn't been much news about Warp Drive Bio since their founding. And that founding was a bit of an unusual event all by itself, since the company was born with a Sanofi deal already in place (and an agreement for them to buy the company if targets were met). But now things seem to be happening. Greg Verdine, a founder, has announced that he's taking a three-year leave of absence from Harvard to become the company's CEO. They've also brought in some other big names, such as Julian Adams (Millennium/Infinity) to be on the board of directors.

The company has a very interesting research program: they're hoping to coax out cryptic natural products from bacteria and the like, molecules that aren't being found in regular screening efforts because the genes used in their biosynthetic pathways are rarely activated. Warp Drive's plan is to sequence heaps of prokaryotes, identify the biosynthesis genes, and activate them to produce rare and unusual natural products as drug candidates. (I'm reminded of this recent work on forcing fungi to produce odd products by messing with their epigenetic enzymes, although I'm not sure if that's what Warp Drive has in mind specifically). And the first part of that plan is what the company has been occupying itself with over the last few months:

“These are probably really just better molecules, and always were better,” he says. “The problems were that they took too long to discover and that one was often rediscovering the same things over and over again.”

Verdine explains the reason this happened is because many of the novel genes in the bacteria aren’t expressed, and remain “dark,” or turned off, and thus can’t be seen. By sequencing the microbes’ genetic material, however, Warp Drive can illuminate them, and find the roadmap needed to make a number of drugs.

“They’re there, hiding in plain sight,” Verdine says.

Over the past year and a half, Warp Drive has sequenced the entire genomes of more than 50,000 bacteria, most of which come from dirt. That library represents the largest collection of such data in existence, according to Verdine.

The entire genomes of 50,000 bacteria? I can well believe that this is the record. That is a lot of data, even considering that bacterial genomes don't run that large. My guess is that the rate-limiting step in all this is going to be a haystack problem. There are just so many things that one could potentially work on - how do you sort them out? Masses of funky natural product pathways (whose workings may not be transparent), producing masses of funky natural products, of unknown function: there's a lot to keep people busy here. But if there really is a dark-matter universe of natural products, it really could be worth exploring - the usual one certainly has been a good thing over the years, although (as noted above) it's been suffering from diminishing returns for a while.
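
The first pass at that haystack is, at least conceptually, a bookkeeping exercise: walk each genome's gene annotations and flag stretches of adjacent genes that look biosynthetic. Here's a cartoon version - gene names, annotations, and keywords are all invented, and real pipelines lean on profile HMMs for PKS/NRPS domains and a great deal more:

```python
# Toy biosynthetic-gene-cluster scan over an invented gene list.
genes = [
    ("orf001", "ribosomal protein"), ("orf002", "polyketide synthase"),
    ("orf003", "polyketide synthase"), ("orf004", "methyltransferase"),
    ("orf005", "NRPS adenylation domain"), ("orf006", "ABC transporter"),
    ("orf007", "DNA polymerase"), ("orf008", "transcriptional regulator"),
]
BIOSYNTHETIC_KEYWORDS = ("polyketide", "nrps", "methyltransferase", "abc transporter")

clusters, current = [], []
for name, annotation in genes:
    if any(k in annotation.lower() for k in BIOSYNTHETIC_KEYWORDS):
        current.append(name)
    else:
        if len(current) >= 3:          # require a few adjacent hits to call a cluster
            clusters.append(current)
        current = []
if len(current) >= 3:
    clusters.append(current)

print("candidate clusters:", clusters)
```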

But there's something else I wondered about when Warp Drive was founded: Verdine himself has been involved in founding several other companies, and there's another one going right here in Cambridge: Aileron Therapeutics, the flagship of the stapled-peptide business (an interesting and sometimes controversial field). How are they doing? They recently got their first compound through Phase I, after raising more money for that effort last year.

The thing is, I've heard from more than one person recently that all isn't well over there, that they're cutting back research. I don't know if that's the circle-the-wagons phase that many small companies go through when they're trying to take their first compound through the clinic, or a sign of something deeper. Anyone with knowledge, feel free to add it in the comments section. . .

Update: Prof. Verdine emails me to note that he officially parted ways with Aileron in 2010, to avoid conflicts of interest with his other venture capital work. His lab has continued to investigate stapled peptides on its own, though.

Comments (14) + TrackBacks (0) | Category: Biological News | Business and Markets | Natural Products

July 1, 2013

Corroboration for ENCODE?

Email This Entry

Posted by Derek

Another cannon has gone off in the noncoding-genome wars. Here's a paper in PLOS Genetics detailing what the authors are calling Long Intergenic Noncoding RNAs (lincRNAs):

Known protein coding gene exons compose less than 3% of the human genome. The remaining 97% is largely uncharted territory, with only a small fraction characterized. The recent observation of transcription in this intergenic territory has stimulated debate about the extent of intergenic transcription and whether these intergenic RNAs are functional. Here we directly observed with a large set of RNA-seq data covering a wide array of human tissue types that the majority of the genome is indeed transcribed, corroborating recent observations by the ENCODE project. Furthermore, using de novo transcriptome assembly of this RNA-seq data, we found that intergenic regions encode far more long intergenic noncoding RNAs (lincRNAs) than previously described, helping to resolve the discrepancy between the vast amount of observed intergenic transcription and the limited number of previously known lincRNAs. In total, we identified tens of thousands of putative lincRNAs expressed at a minimum of one copy per cell, significantly expanding upon prior lincRNA annotation sets. These lincRNAs are specifically regulated and conserved rather than being the product of transcriptional noise. In addition, lincRNAs are strongly enriched for trait-associated SNPs suggesting a new mechanism by which intergenic trait-associated regions may function.

Emphasis added, because that's been one of the key points in this debate. The authors regard the ENCODE data as "firmly establishing the reality of pervasive transcription", so you know where their sympathies lie. And their results are offered up as a strong corroboration of the ENCODE work, with lincRNAs serving as the, well, missing link.

One thing I notice is that these new data strongly suggest that many of these RNAs are expressed at very low levels. The authors set cutoffs for "fragments per kilobase of transcript per million mapped reads" (FPKM), discarding everything that came out as less than 1 (roughly one copy per cell). The set of RNAs with FPKM>1 is over 50,000. If you ratchet up a bit, things drop off steeply, though. FPKM>10 knocks that down to between three and four thousand, and FPKM>30 gives you 925 lincRNAs. My guess is that those are where the next phase of this debate will take place, since those expression levels get you away from the noise. But the problem is that the authors are explicitly making the case for thousands upon thousands of lincRNAs being important, and this interpretation won't be satisfied with everyone agreeing on a few hundred new transcripts. These things also seem to be very tissue-specific, so it looks like the arguing is going to get very granular indeed.
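
To make the cutoff game concrete, here's the shape of that filtering step with a handful of invented FPKM values (the real analysis starts from assembled RNA-seq transcripts, not a toy dictionary):

```python
# Hypothetical transcript -> FPKM values, just to show the thresholding.
transcripts = {"linc-0001": 0.4, "linc-0002": 1.3, "linc-0003": 12.0,
               "linc-0004": 45.0, "linc-0005": 2.2, "linc-0006": 0.9}

for cutoff in (1, 10, 30):
    kept = [t for t, fpkm in transcripts.items() if fpkm > cutoff]
    print(f"FPKM > {cutoff}: {len(kept)} transcripts survive")
```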

Here's a quote from the paper that sums up the two worldviews that are now fighting it out:

Almost half of all trait-associated SNPs (TASs) identified in genome-wide association studies are located in intergenic sequence while only a small portion are in protein coding gene exons. This curious observation points to an abundance of functional elements in intergenic sequence.

Or that curious observation could be telling you that there's something wrong with your genome-wide association studies. I lean towards that view, but the battles aren't over yet.

Comments (25) + TrackBacks (0) | Category: Biological News

June 17, 2013

GPCRs Are As Crazy As You Thought

Email This Entry

Posted by Derek

That's my take-away from this paper, which takes a deep look at a reconstituted beta-adrenergic receptor via fluorine NMR. There are at least four distinct states (two inactive ones, the active one, and an intermediate), and the relationships between them are different with every type of ligand that comes in. Even the ones that look similar turn out to have very different thermodynamics on their way to the active state. If you're into receptor signaling, you'll want to read this one closely - and if you're not, or not up for it, just take away the idea that the landscape is not a simple one. As you'd probably already guessed.

Note: this is a multi-institution list of authors, but it did catch my eye that David Shaw of Wall Street's D. E. Shaw does make an appearance. Good to see him keeping his hand in!

Comments (6) + TrackBacks (0) | Category: Analytical Chemistry | Biological News | In Silico

June 13, 2013

Watching DNA Polymerase Do Its Thing

Email This Entry

Posted by Derek

Single-molecule techniques are really the way to go if you're trying to understand many types of biomolecules. But they're really difficult to realize in practice (a complaint that should be kept in context, given that many of these experiments would have sounded like science fiction not all that long ago). Here's an example of just that sort of thing: watching DNA polymerase actually, well, polymerizing DNA, one base at a time.

The authors, a mixed chemistry/physics team at UC Irvine, managed to attach the business end (the Klenow fragment) of DNA Polymerase I to a carbon nanotube (a mutated Cys residue and a maleimide on the nanotube did the trick). This gives you the chance to use the carbon nanotube as a field effect transistor, with changes in the conformation of the attached protein changing the observed current. It's stuff like this, I should add, that brings home to me the fact that it really is 2013, the relative scarcity of flying cars notwithstanding.

The authors had previously used this method to study attached lysozyme molecules (PDF, free author reprint access). That second link is a good example of the sort of careful brush-clearing work that has to be done with a new system like this: how much does altering that single amino acid change the structure and function of the enzyme you're studying? How do you pick which one to mutate? Does being up against the side of a carbon nanotube change things, and how much? It's potentially a real advantage that this technique doesn't require a big fluorescent label stuck to anything, but you have to make sure that attaching your test molecule to a carbon nanotube isn't even worse.
[Image: KF graphic]
It turns out, reasonably enough, that picking the site of attachment is very important. You want something that'll respond conformationally to the actions of the enzyme, moving charged residues around close to the nanotube, but (at the same time) it can't be so crucial and wide-ranging that the activity of the system gets killed off by having these things so close, either. In the DNA polymerase study, the enzyme was about 33% less active than wild type.

And the authors do see current variations that correlate with what should be opening and closing of the enzyme as it adds nucleotides to the growing chain. Comparing the length of the generated DNA with the FET current, it appears that the enzyme incorporates a new base at least 99.8% of the time it tries to, and the mean time for this to happen is about 0.3 milliseconds. Interestingly, A-T pair formation takes a consistently longer time than C-G does, with the rate-limiting step occurring during the open conformation of the enzyme in each case.
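
The analysis behind numbers like that is conceptually simple, even if the experiment isn't: threshold the current record into open and closed states and tally up the dwell times. Here's a toy simulate-and-threshold sketch - the current levels, noise, and time constants are all invented, and this is not the authors' actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a two-level current trace: the enzyme flips between "open" and
# "closed" conformations, each visit exponentially distributed in length.
dt = 0.02e-3                            # 20 microsecond sampling
levels = {"open": 1.0, "closed": 0.6}   # arbitrary current levels
trace, state = [], "open"
for _ in range(400):                    # 400 conformational excursions
    n_samples = max(1, int(rng.exponential(0.3e-3) / dt))
    trace.extend([levels[state]] * n_samples)
    state = "closed" if state == "open" else "open"
trace = np.array(trace) + rng.normal(0, 0.05, size=len(trace))

# Recover closed-state dwell times by simple thresholding.
dwells, run = [], 0
for sample in trace:
    if sample < 0.8:
        run += 1
    elif run:
        dwells.append(run * dt)
        run = 0
if run:
    dwells.append(run * dt)

print(f"{len(dwells)} closed-state events, mean dwell {np.mean(dwells) * 1e3:.2f} ms")
```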

I look forward to more applications of this idea. There's a lot about enzymes that we don't know, and these sorts of experiments are the only way we're going to find out. At present, this technique looks to be a lot of work, but you can see it firming up before your eyes. It would be quite interesting to pick an enzyme that has several classes of inhibitor and watch what happens on this scale.

It's too bad that Arthur Kornberg, the discoverer of DNA Pol I, didn't quite live to see such an interrogation of the enzyme; he would have enjoyed it very much, I think. As an aside, that last link, with its quotes from the reviewers of the original manuscript, will cheer up anyone who's recently had what they thought was a good paper rejected by some journal. Kornberg's two papers only barely made it into JBC, but one year after a referee said "It is very doubtful that the authors are entitled to speak of the enzymatic synthesis of DNA", Kornberg was awarded the Nobel for just that.

Comments (5) + TrackBacks (0) | Category: Analytical Chemistry | Biological News | The Scientific Literature

May 22, 2013

How Many Binding Pockets Are There?

Email This Entry

Posted by Derek

Just how many different small-molecule binding sites are there? That's the subject of this new paper in PNAS, from Jeffrey Skolnick and Mu Gao at Georgia Tech, which several people have sent along to me in the last couple of days.

This question has a lot of bearing on questions of protein evolution. The paper's intro brings up two competing hypotheses of how protein function evolved. One, the "inherent functionality model", assumes that primitive binding pockets are a necessary consequence of protein folding, and that the effects of small molecules on these (probably quite nonspecific) motifs have been honed by evolutionary pressures since then. (The wellspring of this idea is this paper from 1976, by Jensen, and this paper will give you an overview of the field). The other way it might have worked, the "acquired functionality model", would be the case if proteins tend, in their "unevolved" states, to be more spherical, in which case binding events must have been much more rare, but also much more significant. In that system, the very existence of binding pockets themselves is what's under the most evolutionary pressure.

The Skolnick paper references this work from the Hecht group at Princeton, which already provides evidence for the first model. In that paper, a set of near-random 4-helical-bundle proteins was produced in E. coli - the only patterning was a rough polar/nonpolar alternation in amino acid residues. Nonetheless, many members of this unplanned family showed real levels of binding to things like heme, and many even showed above-background levels of several types of enzymatic activity.

In this new work, Skolnick and Gao produce a computational set of artificial proteins (called the ART library in the text), made up of nothing but poly-leucine. These were modeled to the secondary structure of known proteins in the PDB, to produce natural-ish proteins (from a broad structural point of view) that have no functional side chain residues themselves. Nonetheless, they found that the small-molecule-sized pockets of the ART set actually match up quite well with those found in real proteins. But here's where my technical competence begins to run out, because I'm not sure that I understand what "match up quite well" really means here. (If you can read through this earlier paper of theirs at speed, you're doing better than I can). The current work says that "Given two input pockets, a template and a target, (our algorithm) evaluates their PS-score, which measures the similarity in their backbone geometries, side-chain orientations, and the chemical similarities between the aligned pocket-lining residues." And that's fine, but what I don't know is how well it does that. I can see poly-Leu giving you pretty standard backbone geometries and side-chain orientations (although isn't leucine a little more likely than average to form alpha-helices?), but when we start talking chemical similarities between the pocket-lining residues, well, how can that be?
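
To give a feel for what any pocket-comparison score has to do - and this is emphatically not their PS-score, just a cartoon - here's a sketch that reduces two pockets to the chemical classes of their lining residues and takes a cosine similarity. The residues are invented, and every geometric term the real method uses is missing:

```python
from collections import Counter
from math import sqrt

# Crude stand-in for a pocket similarity score: chemical classes of the
# pocket-lining residues only, no backbone or side-chain geometry at all.
CLASSES = {
    "hydrophobic": set("AVLIMFWP"), "polar": set("STNQCYG"),
    "positive": set("KRH"), "negative": set("DE"),
}

def class_vector(residues):
    counts = Counter()
    for r in residues:
        for name, members in CLASSES.items():
            if r in members:
                counts[name] += 1
    return [counts[name] for name in CLASSES]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

pocket_a = "LVFYDSTK"    # hypothetical pocket-lining residues
pocket_b = "IMFWESSR"
print(f"toy pocket similarity: {cosine(class_vector(pocket_a), class_vector(pocket_b)):.2f}")
```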

But I'm even willing to go along with the main point of the paper, which is that there are not-so-many types of small-molecule binding pockets, even if I'm not so sure about their estimate of how many there are. For the record, they're guessing not many more than about 500. And while that seems low to me, it all depends on what we mean by "similar". I'm a medicinal chemist, someone who's used to seeing "magic methyl effects" where very small changes in ligand structure can make big differences in binding to a protein. And that makes me think that I could probably take a set of binding pockets that Skolnick's people would call so similar as to be basically identical, and still find small molecules that would differentiate them. In fact, that's a big part of my job.

But in general, I see the point they're making, but it's one that I've already internalized. There are a finite number of proteins in the human body. Fifty thousand? A couple of hundred thousand? Probably not a million. Not all of these have small-molecule binding sites, for sure, so there's a smaller set to deal with right there. Even if those binding sites were completely different from one another, we'd be looking at a set of binding pockets in the thousands/tens of thousands range, most likely. But they're not completely different, as any medicinal chemist knows: try to make a selective muscarinic agonist, or a really targeted serine hydrolase inhibitor, and you'll learn that lesson quickly. And anyone who's run their drug lead through a big selectivity panel has seen the sorts of off-target activities that come up: you hit someof the other members of your target's family to greater or lesser degree. You hit the flippin' sigma receptor, not that anyone knows what that means. You hit the hERG channel, and good luck to you then. Your compound is a substrate for one of the CYP enzymes, or it binds tightly to serum albumin. Who has even seen a compound that binds only to its putative target? And this is only with the counterscreens we have, which is a small subset of the things that are really out there in cells.

And that takes me to my main objection to this paper. As I say, I'm willing to stipulate, gladly, that there are only so many types of binding pockets in this world (although I think that it's more than 500). But this sort of thing is what I have a problem with:

". . .we conclude that ligand-binding promiscuity is likely an inherent feature resulting from the geometric and physical–chemical properties of proteins. This promiscuity implies that the notion of one molecule–one protein target that underlies many aspects of drug discovery is likely incorrect, a conclusion consistent with recent studies. Moreover, within a cell, a given endogenous ligand likely interacts at low levels with multiple proteins that may have different global structures.

"Many aspects of drug discovery" assume that we're only hitting one target? Come on down and try that line out in a drug company, and be prepared for rude comments. Believe me, we all know that our compounds hit other things, and we all know that we don't even know the tenth of it. This is a straw man; I don't know of anyone doing drug discovery that has ever believed anything else. Besides, there are whole fields (CNS) where polypharmacy is assumed, and even encouraged. But even when we're targeting single proteins, believe me, no one is naive enough to think that we're hitting those alone.

Other aspects of this paper, though, are fine by me. As the authors point out, this sort of thing has implications for drawing evolutionary family trees of proteins - we should not assume too much when we see similar binding pockets, since these may well have a better chance of being coincidence than we think. And there are also implications for origin-of-life studies: this work (and the other work in the field, cited above) imply that a random collection of proteins could still display a variety of functions. Whether these are good enough to start assembling a primitive living system is another question, but it may be that proteinaceous life has an easier time bootstrapping itself than we might imagine.

Comments (16) + TrackBacks (0) | Category: Biological News | In Silico | Life As We (Don't) Know It

May 15, 2013

GSK's Published Kinase Inhibitor Set

Email This Entry

Posted by Derek

Speaking about open-source drug discovery (such as it is) and sharing of data sets (such as they are), I really should mention a significant example in this area: the GSK Published Kinase Inhibitor Set. (It was mentioned in the comments to this post). The company has made 367 compounds available to any academic investigator working in the kinase field, as long as they make their results publicly available (at ChEMBL, for example). The people at GSK doing this are David Drewry and William Zuercher, for the record - here's a recent paper from them and their co-workers on the compound set and its behavior in reporter-gene assays.

Why are they doing this? To seed discovery in the field. There's an awful lot of chemical biology to be done in the kinase field, far more than any one organization could take on, and the more sets of eyes (and cerebral cortices) that are on these problems, the better. So far, there have been about 80 collaborations, mostly in Europe and North America, all the way from broad high-content phenotypic screening to targeted efforts against rare tumor types.

The plan is to continue to firm up the collection, making more data available for each compound as work is done on them, and to add more compounds with different selectivity profiles and chemotypes. Now, the compounds so far are all things that have been published on by GSK in the past, obviating concerns about IP. There are, though, a multitude of other compounds in the literature from other companies, and you have to think that some of these would be useful additions to the set. How, though, does one get this to happen? That's the stage that things are in now. Beyond that, there's the possibility of some sort of open network to optimize entirely new probes and tools, but there's plenty that could be done even before getting to that stage.

So if you're in academia, and interested in kinase pathways, you absolutely need to take a look at this compound set. And for those of us in industry, we need to think about the benefits that we could get by helping to expand it, or by starting similar efforts of our own in other fields. The science is big enough for it. Any takers?

Comments (22) + TrackBacks (0) | Category: Academia (vs. Industry) | Biological News | Chemical News | Drug Assays

May 13, 2013

Another Big Genome Disparity (With Bonus ENCODE Bashing)

Email This Entry

Posted by Derek

I notice that the recent sequencing of the bladderwort plant is being played in the press in an interesting way: as the definitive refutation of the idea that "junk DNA" is functional. That's quite an about-face from the coverage of the ENCODE consortium's take on human DNA, the famous "80% Functional, Death of Junk DNA Idea" headlines. A casual observer, if there are casual observers of this sort of thing, might come away just a bit confused.

Both types of headlines are overblown, but I think that one set is more overblown than the other. The minimalist bladderwort genome (8.2 x 10^7 base pairs) is only about half the size of Arabidopsis thaliana, which rose to fame as a model organism in plant molecular biology partly because of its tiny genome. By contrast, humans (who make up so much of my readership) have about 3 x 10^9 base pairs, almost 40 times as many as the bladderwort. (I stole that line from G. K. Chesterton, by the way; it's from the introduction to The Napoleon of Notting Hill)

But pine trees have eight times as many base pairs as we do, so it's not a plant-versus-animal thing. And as Ed Yong points out in this excellent post on the new work, the Japanese canopy plant comes in at 1.5 x 10^11 base pairs, fifty times the size of the human genome and two thousand times the size of the bladderwort. This is the same problem as the marbled lungfish versus pufferfish one that I wrote about here, and it's not a new problem at all. People have been wondering about genome sizes ever since they were able to estimate the size of genomes, because it became clear very quickly that they varied hugely and according to patterns that often make little sense to us.
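
Trivial as the arithmetic is, the ratios are worth laying out (the pine figure below just comes from the "eight times as many as we do" statement):

```python
# Genome sizes in base pairs, as quoted above.
bladderwort = 8.2e7
human = 3.0e9
pine = 8 * human
canopy_plant = 1.5e11

print(f"human / bladderwort:        ~{human / bladderwort:.0f}x")
print(f"canopy plant / human:       ~{canopy_plant / human:.0f}x")
print(f"canopy plant / bladderwort: ~{canopy_plant / bladderwort:.0f}x")
```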

That's why the ENCODE hype met (and continues to meet) with such a savage reception. It did nothing to address this issue, and seemed, in fact, to pretend that it wasn't an issue at all. Function, function, everywhere you look, and if that means that you just have to accept that the Japanese canopy plant needs the most wildly complex functional DNA architecture in the living world, well, isn't Nature just weird that way?

Comments (18) + TrackBacks (0) | Category: Biological News

April 25, 2013

What The Heck Does "Epigenetic" Mean, Anyway?

Email This Entry

Posted by Derek

A lot of people (and I'm one of them) have been throwing the word "epigenetic" around a lot. But what does it actually mean - or what is it supposed to mean? That's the subject of a despairing piece from Mark Ptashne of Sloan-Kettering in a recent PNAS. He noted this article in the journal, one of their "core concepts" series, and probably sat down that evening to write his rebuttal.

When we talk about the readout of genes - transcription - we are, he emphasizes, talking about processes that we have learned many details about. The RNA Polymerase II complex is very well conserved among living organisms, as well it should be, and its motions along strands of DNA have been shown to be very strongly affected by the presence and absence of protein transcription factors that bind to particular DNA regions. "All this is basic molecular biology, people", he does not quite say, although you can pick up the thought waves pretty clearly.

So far, so good. But here's where, conceptually, things start going into the ditch:

Patterns of gene expression underlying development can be very complex indeed. But the underlying mechanism by which, for example, a transcription activator activates transcription of a gene is well understood: only simple binding interactions are required. These binding interactions position the regulator near the gene to be regulated, and in a second binding reaction, the relevant enzymes, etc., are brought to the gene. The process is called recruitment. Two aspects are especially important in the current context: specificity and memory.

Specificity, naturally, is determined by the location of regulatory sequences within the genome. If you shuffle those around deliberately, you can make a variety of regulators work on a variety of genes in a mix-and-match fashion (and indeed, doing this is the daily bread of molecular biologists around the globe). As for memory, the point is that you have to keep recruiting the relevant enzymes if you want to keep transcribing; these aren't switches that flip on or off forever. And now we get to the bacon-burning part:

Curiously, the picture I have just sketched is absent from the Core Concepts article. Rather, it is said, chemical modifications to DNA (e.g., methylation) and to histones— the components of nucleosomes around which DNA is wrapped in higher organisms—drive gene regulation. This obviously cannot be true because the enzymes that impose such modifications lack the essential specificity: All nucleosomes, for example, “look alike,” and so these enzymes would have no way, on their own, of specifying which genes to regulate under any given set of conditions. . .

. . .Histone modifications are called “epigenetic” in the Core Concepts article, a word that for years has implied memory . . . This is odd: It is true that some of these modifications are involved in the process of transcription per se—facilitating removal and replacement of nucleosomes as the gene is transcribed, for example. And some are needed for certain forms of repression. But all attempts to show that such modifications are “copied along with the DNA,” as the article states, have, to my knowledge, failed. Just as transcription per se is not “remembered” without continual recruitment, so nucleosome modifications decay as enzymes remove them (the way phosphatases remove phosphates put in place on proteins by kinases), or as nucleosomes, which turn over rapidly compared with the duration of a cell cycle, are replaced. For example, it is simply not true that once put in place such modifications can, as stated in the Core Concepts article, “lock down forever” expression of a gene.

Now it does happen, Ptashne points out, that some developmental genes, once activated by a transcription factor, do seem to stay on for longer periods of time. But this takes place via feedback loops - the first gene, once activated, produces a transcription factor that causes a second gene to be read off, and one of that second gene's products is the transcription factor for the first gene, which then causes the second to be read off again, and so on, pinging back and forth. But "epigenetic" has been used in the past to imply memory, and modifying histones is not a process with enough memory in it, he says, to warrant the term. They are ". . .parts of a response, not a cause, and there is no convincing evidence they are self-perpetuating".
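
Just to make the "memory via feedback" point concrete, here's a toy simulation - my own sketch, not Ptashne's model, with the two-gene loop collapsed into direct autoactivation and every rate constant invented purely for illustration. A gene whose product helps recruit its own activator stays on after a transient signal; the same gene without the loop decays right back down once the signal is gone.

def simulate(feedback_strength, pulse_end=10.0, t_max=100.0, dt=0.01):
    """Euler integration of dx/dt = basal + pulse(t) + autoactivation(x) - decay * x."""
    basal, decay, K, n = 0.02, 0.1, 1.0, 4      # made-up rate constants
    x = 0.0
    for step in range(int(t_max / dt)):
        t = step * dt
        pulse = 0.5 if t < pulse_end else 0.0                # transient activating signal
        auto = feedback_strength * x**n / (K**n + x**n)      # product recruits its own activator
        x += dt * (basal + pulse + auto - decay * x)
    return x                                                  # expression level at t_max

with_loop = simulate(feedback_strength=0.3)   # positive feedback loop in place
no_loop   = simulate(feedback_strength=0.0)   # no loop: the signal fades and so does expression

print(f"level long after the signal, with the feedback loop:    {with_loop:.2f}")
print(f"level long after the signal, without the feedback loop: {no_loop:.2f}")

The loop version settles at a high steady state even though the stimulus ended long ago; the loop-free version drifts back to its basal level, which is the distinction Ptashne is drawing between real heritable memory and decaying modifications.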

What we have here, as Strother Martin told us many years ago, is a failure to communicate. The biologists who have been using the word "epigenetic" in its original sense (which Ptashne and others would tell you is not only the original sense, but the accurate and true one), have seen its meaning abruptly hijacked. (The Wikipedia entry on epigenetics is actually quite good on this point, or at least it was this morning). A large crowd that previously paid little attention to these matters now uses "epigenetic" to mean "something that affects transcription by messing with histone proteins". And as if that weren't bad enough, articles like the one that set off this response have completed the circle of confusion by claiming that these changes are somehow equivalent to genetics itself, a parallel universe of permanent changes separate from the DNA sequence.

I sympathize with him. But I think that this battle is better fought on the second point than the first, because the first one may already be lost. There may already be too many people who think of "epigenetic" as meaning something to do with changes in expression via histones, nucleosomes, and general DNA unwinding/presentation factors. There really does need to be a word to describe that suite of effects, and this (for better or worse) now seems as if it might be it. But the second part, the assumption that these are necessarily permanent, instead of mostly being another layer of temporary transcriptional control, that does need to be straightened out, and I think that it might still be possible.

Comments (17) + TrackBacks (0) | Category: Biological News

April 23, 2013

IBM And The Limits of Transferable Tech Expertise

Email This Entry

Posted by Derek

Here's a fine piece from Matthew Herper over at Forbes on an IBM/Roche collaboration in gene sequencing. IBM had an interesting technology platform in the area, which they modestly called the "DNA transistor". For a while, it was going to be the Next Big Thing in the field (and the material at that last link was apparently written during that period). But sequencing is a very competitive area, with a lot of action in it these days, and, well. . .things haven't worked out.

Today Roche announced that they're pulling out of the collaboration, and Herper has some thoughts about what that tells us. His thoughts on the sequencing business are well worth a look, but I was particularly struck by this one:

Biotech is not tech. You’d think that when a company like IBM moves into a new field in biology, its fast technical expertise and innovativeness would give it an advantage. Sometimes, maybe, it does: with its supercomputer Watson, IBM actually does seem to be developing a technology that could change the way medicine is practiced, someday. But more often than not the opposite is true. Tech companies like IBM, Microsoft, and Google actually have dismal records of moving into medicine. Biology is simply not like semiconductors or software engineering, even when it involves semiconductors or software engineering.

And I'm not sure how much of the Watson business is hype, either, when it comes to biomedicine (a nonzero amount, at any rate). But Herper's point is an important one, and it's one that's been discussed many times on this site as well. This post is a good catch-all for those discussions - it links back to the locus classicus of such thinking, the famous "Can A Biologist Fix a Radio?" article, as well as to more recent forays like Andy Grove (ex-Intel) and his call for drug discovery to be more like chip design. (Here's another post on these points).

One of the big mistakes that people make is in thinking that "technology" is a single category of transferrable expertise. That's closely tied to another big (and common) mistake, that of thinking that the progress in computing power and electronics in general is the way that all technological progress works. (That, to me, sums up my problems with Ray Kurzweil). The evolution of microprocessing has indeed been amazing. Every field that can be improved by having more and faster computational power has been touched by it, and will continue to be. But if computation is not your rate-limiting step, then there's a limit to how much work Moore's Law can do for you.

And computational power is not the rate-limiting step in drug discovery or in biomedical research in general. We do not have polynomial-time algorithms for predictive toxicology, or for models of human drug efficacy. We hardly have any algorithms at all. Anyone who feels like remedying this lack (and making a few billion dollars doing so) is welcome to step right up.

Note: it's been pointed out in the comments that the cost-per-base of DNA sequencing has been dropping at an even faster rate than Moore's Law would predict. So there is technological innovation going on in the biomedical field, outside of sheer computational power, but I'd still say that understanding is the real rate limiter. . .
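
For a sense of how far apart those two curves get, here's a trivial calculation. The halving times are assumptions for illustration, not measured figures: roughly two years for transistor cost, and something like six months for sequencing cost-per-base during its fastest stretch.

def fold_cheaper(years, halving_time_years):
    """How many-fold cheaper something gets if its cost halves every halving_time_years."""
    return 2 ** (years / halving_time_years)

for years in (2, 5, 10):
    moore = fold_cheaper(years, halving_time_years=2.0)  # assumed Moore's-Law-style pace
    seq = fold_cheaper(years, halving_time_years=0.5)    # assumed fastest sequencing pace
    print(f"after {years:2d} years: computing ~{moore:,.0f}x cheaper, "
          f"sequencing ~{seq:,.0f}x cheaper")

Under those assumptions, a decade gives you a thirty-odd-fold improvement from Moore's Law and something like a million-fold from sequencing - which is why cheap sequence data alone hasn't been the bottleneck.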

Comments (17) + TrackBacks (0) | Category: Analytical Chemistry | Biological News | Drug Industry History

Pseudoenzymes: Back From the Dead as Targets?

Email This Entry

Posted by Derek

There's a possible new area for drug discovery that's coming from a very unexpected source: enzymes that don't do anything. About ten years ago, when the human genome was getting its first good combing-through, one of the first enzyme categories to get the full treatment was the kinases. But about ten per cent of them, on closer inspection, seemed to lack one or more key catalytic residues, leaving them with no known way to be active. They were dubbed (with much puzzlement) "pseudokinases", with their functions, if any, unknown.

As time went on and sequences piled up, the same situation was found for a number of other enzyme categories. One family in particular, the sulfotransferases, seems to have at least half of its putative members inactivated, which doesn't make a lot of sense, because these things also seem to be under selection pressure. So they're doing something, but what?

Answers are starting to be filled in. Here's a paper from last year, on some of the possibilities, and this article from Science is an excellent survey of the field. It turns out that many of these seem to have a regulatory function, often on their enzymatically active relatives. Some of these pseudoenzymes retain the ability to bind their original substrates, and those binding events may also have a regulatory function in their downstream protein interactions. So these things may be a whole class of drug targets that we haven't screened for - and in fact may be a set of proteins that we're already hitting with some of our ligands, but with no idea that we're doing so. I doubt if anyone in drug discovery has ever bothered counterscreening against any of them, but it looks like that should change. Update: I stand corrected. See the comment thread for more.

This illustrates a few principles worth keeping in mind: first, that if something is under selection pressure, it surely has a function, even if you can't figure out how or why. (A corollary is that if some sequence doesn't seem to be under such constraints, it probably doesn't have much of a function at all, but as those links show, this is a contentious topic). Next, we should always keep in mind that we don't really know as much about cell biology as we think we do; there are lots of surprises and overlooked things waiting for us. And finally, any of those that appear to have (or retain) small-molecule binding sites are very much worth the attention of medicinal chemists, because so many other possible targets have nothing of the kind, and are a lot harder to deal with.

Comments (8) + TrackBacks (0) | Category: Biological News

April 18, 2013

Super-Enhancers in Cell Biology: ENCODE's Revenge?

Email This Entry

Posted by Derek

I've linked to some very skeptical takes on the ENCODE project, the effort that supposedly identified 80% of our DNA sequence as functional to some degree. I should present some evidence for the other side, though, as it comes up, and some may have come up.

Two recent papers in Cell tell the story. The first proposes "super-enhancers" as regulators of gene transcription. (Here's a brief summary of both). These are clusters of known enhancer sequences, which seem to recruit piles of transcription factors, and act differently from the single-enhancer model. The authors show evidence that these are involved in cell differentiation, and could well provide one of the key systems for determining eventual cellular identity from pluripotent stem cells.

Interest in further understanding the importance of Mediator in ESCs led us to further investigate enhancers bound by the master transcription factors and Mediator in these cells. We found that much of enhancer-associated Mediator occupies exceptionally large enhancer domains and that these domains are associated with genes that play prominent roles in ESC biology. These large domains, or super-enhancers, were found to contain high levels of the key ESC transcription factors Oct4, Sox2, Nanog, Klf4, and Esrrb to stimulate higher transcriptional activity than typical enhancers and to be exceptionally sensitive to reduced levels of Mediator. Super-enhancers were found in a wide variety of differentiated cell types, again associated with key cell-type-specific genes known to play prominent roles in control of their gene expression program

On one level, this is quite interesting, because cellular differentiation is a process that we really need to know a lot more about (the medical applications are enormous). But as a medicinal chemist, this sort of news sort of makes me purse my lips, because we have enough trouble dealing with the good old fashioned transcription factors (whose complexes of proteins were already large enough, thank you). What role there might be for therapeutic intervention in these super-complexes, I couldn't say.

The second paper has more on this concept. They find that these "super-enhancers" are also important in tumor cells (which would make perfect sense), and that they tie into two other big stories in the field, the epigenetic regulator BRD4 and the multifunctional protein cMyc:

Here, we investigate how inhibition of the widely expressed transcriptional coactivator BRD4 leads to selective inhibition of the MYC oncogene in multiple myeloma (MM). BRD4 and Mediator were found to co-occupy thousands of enhancers associated with active genes. They also co-occupied a small set of exceptionally large super-enhancers associated with genes that feature prominently in MM biology, including the MYC oncogene. Treatment of MM tumor cells with the BET-bromodomain inhibitor JQ1 led to preferential loss of BRD4 at super-enhancers and consequent transcription elongation defects that preferentially impacted genes with super-enhancers, including MYC. Super-enhancers were found at key oncogenic drivers in many other tumor cells.

About 3% of the enhancers found in the multiple myeloma cell line turned out to be tenfold-larger super-enhancer complexes, which bring in about ten times as much BRD4. It's been recently discovered that small-molecule ligands for BRD4 have a large effect on the cMyc pathway, and now we may know one of the ways that happens. So that might be part of the answer to the question I posed above: how do you target these things with drugs? Find one of the proteins that it has to recruit in large numbers, and mess up its activity at a small-molecule binding site. And if these giant complexes are even more sensitive to disruptions in these key proteins than usual (as the paper hypothesizes), then so much the better.

It's fortunate that chromatin-reading proteins such as BRD4 are (at least in some cases) filling that role, because they have pretty well-defined binding pockets that we can target. Direct targeting of cMyc, by contrast, has been quite difficult indeed (here's a new paper with some background on what's been accomplished so far).

Now, as far as my cell biology expertise can take me, the evidence in these papers looks reasonably good. I'm certainly willing to believe that there are levels of transcriptional control beyond those that we've realized so far, weary sighs of a chemist aside. But I'll be interested to see the arguments over this concept play out. For example, if these very long stretches of DNA do indeed turn out to be so important, how sensitive are they to mutation? One of the key objections to the ENCODE consortium's interpretation of their data is that much of what they're calling "functional" DNA seems to have little trouble drifting along and picking up random mutations. It will be worth applying this analysis to these super-enhancers, but I haven't seen that done yet.
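
As an aside, the enhancer-ranking idea behind these "super-enhancer" calls is simple enough to sketch in a few lines. This is my own toy version on simulated numbers, not the authors' pipeline: rank the enhancer regions by coactivator ChIP-seq signal (Mediator or BRD4, in these papers) and draw the cutoff where the ranked curve starts climbing faster than the diagonal.

import numpy as np

rng = np.random.default_rng(0)
# Fake signal values: many ordinary enhancers plus a small, very heavily loaded tail.
typical = rng.lognormal(mean=1.0, sigma=0.5, size=8000)
huge    = rng.lognormal(mean=4.0, sigma=0.5, size=250)
signal  = np.sort(np.concatenate([typical, huge]))    # ascending by rank

# Scale rank and signal to [0, 1]; the cutoff is where the curve's slope passes 1,
# i.e. where (scaled signal - scaled rank) reaches its minimum.
x = np.linspace(0.0, 1.0, signal.size)
y = (signal - signal.min()) / (signal.max() - signal.min())
cutoff_idx = int(np.argmin(y - x))

n_super = signal.size - cutoff_idx
share = signal[cutoff_idx:].sum() / signal.sum()
print(f"{n_super} of {signal.size} enhancers ({100 * n_super / signal.size:.1f}%) "
      f"fall above the cutoff and carry {share:.0%} of the total signal")

The point of the exercise is just that a few per cent of the regions end up soaking up a disproportionate share of the coactivator signal, which is what the papers mean by "super-enhancer".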

Comments (5) + TrackBacks (0) | Category: Biological News | Cancer

March 22, 2013

Good News in Oncology: More Immune Therapy for Leukemia

Email This Entry

Posted by Derek

I've written a couple of times about the work at the University of Pennsylvania on modified T-cell therapy for leukemia (CLL). Now comes word that a different version of this approach seems to be working at Sloan-Kettering. Recurrent B-cell acute lymphoblastic leukemia (B-ALL) has been targeted there, and it's generally a more aggressive disease than CLL.

As with the Penn CLL studies, when this technique works, it can be dramatic:

One of the sickest patients in the study was David Aponte, 58, who works on a sound crew for ABC News. In November 2011, what he thought was a bad case of tennis elbow turned out to be leukemia. He braced himself for a long, grueling regimen of chemotherapy.

Brentjens suggested that before starting the drugs, Aponte might want to have some of his T-cells stored (chemotherapy would deplete them). That way, if he relapsed, he might be able to enter a study using the cells. Aponte agreed.

At first, the chemo worked, but by summer 2012, while he was still being treated, tests showed the disease was back.

“After everything I had gone through, the chemo, losing hair, the sickness, it was absolutely devastating,’’ Aponte recalled.

He joined the T-cell study. For a few days, nothing seemed to be happening. But then his temperature began to rise. He has no memory of what happened for the next week or so, but the journal article — where he is patient 5 — reports that his fever spiked to 105 degrees.

He was in the throes of a ‘‘cytokine storm,’’ meaning that the T-cells, in a furious battle with the cancer, were churning out enormous amounts of hormones called cytokines. Besides fever, the hormonal rush can make a patient’s blood pressure plummet and his heart rate shoot up. Aponte was taken to intensive care and treated with steroids to quell the reaction.

Eight days later, his leukemia was gone.

He and the other patients in the study all received bone marrow transplantations after the treatment, and are considered cured - which is remarkable, since they were all relapsed/refractory, and thus basically at death's door. These stories sound like the ones from the early days of antibiotics, with the important difference that resistance to drug therapy doesn't spread through the world's population of cancer cells. The modified T-cell approach has already gotten a lot of attention, and this is surely going to speed things up even more. I look forward to the first use of it for a non-blood-cell tumor (which appears to be in the works) and to further refinements in generating the cells themselves.

Comments (11) + TrackBacks (0) | Category: Biological News | Cancer | Clinical Trials

March 21, 2013

AstraZeneca Makes a Deal With Moderna. Wait, Who?

Email This Entry

Posted by Derek

AstraZeneca has announced another 2300 job cuts, this time in sales and administration. That's not too much of a surprise, as the cuts announced recently in R&D make it clear that the company is determined to get smaller. But their overall R&D strategy is still unclear, other than "We can't go on like this", which is clear enough.

One interesting item has just come out, though. The company has done a deal with Moderna Therapeutics of Cambridge (US), a relatively new outfit that's trying something that (as far as I know) no one else has had the nerve to try. Moderna is trying to use messenger RNAs as therapies, to stimulate the body's own cells to produce more of some desired protein product. This is the flip side of antisense and RNA interference, where you throw a wrench into the transcription/translation machinery to cut down on some protein. Moderna's trying to make the wheels spin in the other direction.

This is the sort of idea that makes me feel as if there are two people inhabiting my head. One side of me is very excited and interested to see if this approach will work, and the other side is very glad that I'm not one of the people being asked to do it. I've always thought that messing up or blocking some process was an easier task than making it do the right thing (only more so), and in this case, we haven't even reliably shown that blocking such RNA pathways is a good way to a therapy.

I also wonder about the disease areas that such a therapy would treat, and how amenable they are to the approach. The first one that occurs to a person is "Allow Type I diabetics to produce their own insulin", but if your islet cells have been disrupted or killed off, how is that going to work? Will other cell types recognize the mRNA-type molecules you're giving, and make some insulin themselves? If they do, what sort of physiological control will they be under? Beta-cells, after all, are involved in a lot of complicated signaling to tell them when to make insulin and when to lay off. I can also imagine this technique being used for a number of genetic disorders, where we know what the defective protein is and what it's supposed to be. But again, how does the mRNA get to the right tissues at the right time? Protein expression is under so many constraints and controls that it seems almost foolhardy to think that you could step in, dump some mRNA on the process, and get things to work the way that you want them to.

But all that said, there's no substitute for trying it out. And the people behind Moderna are not fools, either, so you can be sure that these questions (and many more) have crossed their minds already. (The company's press materials claim that they've addressed the cellular-specificity problem, for example). They've gotten a very favorable deal from AstraZeneca - admittedly a rather desperate company - but good enough that they must have a rather convincing story to tell with their internal data. This is the very picture of a high-risk, high-reward approach, and I wish them success with it. A lot of people will be watching very closely.

Comments (37) + TrackBacks (0) | Category: Biological News | Business and Markets | Drug Development

March 15, 2013

More ENCODE Skepticism

Email This Entry

Posted by Derek

There's another paper out expressing worries about the interpretation of the ENCODE data. (For the last round, see here). The wave of such publications seems to be largely a function of how quickly the various authors could assemble their manuscripts, and how quickly the review process has worked at the various journals. You get the impression that a lot of people opened up new word processor windows and started typing furiously right after all the press releases last fall.

This one, from W. Ford Doolittle at Dalhousie, explicitly raises a thought experiment that I think has occurred to many critics of the ENCODE effort. (In fact, it's the very one that showed up in a comment here to the last post I did on the subject). Here's how it goes: The expensive, toxic, only-from-licensed-sushi-chefs pufferfish (Takifugu rubripes) has about 365 million base pairs, with famously little of it looking like junk. By contrast, the marbled lungfish (Protopterus aethiopicus) has a humungous genome, 133 billion base pairs, which is apparently enough to code for three hundred different pufferfish with room to spare. Needless to say, the lungfish sequence features vast stretches of apparent junk DNA. Or does it need saying? If an ENCODE-style effort had used the marbled lungfish instead of humans as its template, would it have told us that 80% of its genome was functional? If it had done the pufferfish simultaneously, what would it have said about the difference between the two?

I'm glad that the new PNAS paper lays this out, because to my mind, that's a damned good question. One ENCODE-friendly answer is that the marbled lungfish has been under evolutionary pressure that the fugu pufferfish hasn't, and that it needs many more regulatory elements, spacers, and so on. But that, while not impossible, seems to be assuming the conclusion a bit too much. We can't look at a genome, decide that whatever we see is good and useful just because it's there, and then work out what its function must be. That seems a bit too Panglossian: all is for the best in the best of all possible genomes, and if a lungfish needs one three hundred times larger than the fugu fish, well, it must be three hundred times harder to be a lungfish? Such a disparity between the genomes of two organisms, both of them (to a first approximation) running the "fish program", could also be explained by there being little evolutionary pressure against filling your DNA sequence with old phone books.

Here's an editorial at Nature about this new paper:

There is a valuable and genuine debate here. To define what, if anything, the billions of non-protein-coding base pairs in the human genome do, and how they affect cellular and system-level processes, remains an important, open and debatable question. Ironically, it is a question that the language of the current debate may detract from. As Ewan Birney, co-director of the ENCODE project, noted on his blog: “Hindsight is a cruel and wonderful thing, and probably we could have achieved the same thing without generating this unneeded, confusing discussion on what we meant and how we said it”

He's right - the ENCODE team could have presented their results differently, but doing that would not have made a gigantic splash in the world press. There wouldn't have been dozens of headlines proclaiming the "end of junk DNA" and the news that 80% of the genome is functional. "Scientists unload huge pile of genomic data analysis" doesn't have the same zing. And there wouldn't have been the response inside the industry that has, in fact, occurred. This comment from my first blog post on the subject is still very much worth keeping in mind:

With my science hat on I love this stuff, stepping into the unknown, finding stuff out. With my pragmatic, applied science, hard-nosed Drug Discovery hat on, I know that it is not going to deliver over the time frame of any investment we can afford to make, so we should stay away.

However, in my big Pharma, senior leaders are already jumping up and down, fighting over who is going to lead the new initiative in this exciting new area, who is going to set up a new group, get new resources, set up collaborations, get promoted etc. Oh, and deliver candidates within 3 years.

Our response to new basic science is dumb and we are failing our investors and patients. And we don't learn.

Comments (16) + TrackBacks (0) | Category: Biological News

March 7, 2013

Probing A Binding Tunnel With AFM

Email This Entry

Posted by Derek

Every so often I've mentioned some of the work being done with atomic force microscopy (AFM), and how it might apply to medicinal chemistry. It's been used to confirm a natural product structural assignment, and then there are images like these. Now comes a report of probing a binding site with the technique. The experimental setup is shown at left. The group (a mixed team from Linz, Vienna, and Berlin) reconstituted functional uncoupling protein 1 (UCP1) in a lipid bilayer on a mica surface. Then they ran two different kinds of AFM tips across them - one with an ATP molecule attached, and another with an anti-UCP1 antibody, and with different tether lengths on them as well.

What they found was that ATP seems to be able to bind to either side of the protein (some of the UCPs in the bilayer were upside down). There also appears to be only one nucleotide binding site per UCP (in accordance with the sequence). That site is about 1.27 nm down into the central pore, which could well correspond to a particular residue (R182) that is thought to protrude into the pore space. Interestingly, although ATP can bind while coming in from either direction, it has to go in deeper from one side than the other (which shows up in the measurements with different tether lengths). And that leads to the hypothesis that the deeper-binding mode sets off conformational changes in the protein that the shallow-binding mode doesn't - which could explain how the protein is able to function while its cytosolic side is being exposed to high concentrations of ATP.

For some reason, these sorts of direct physical measurements weird me out more than spectroscopic studies. Shining light or X-rays into something (or putting it into a magnetic field) just seems more removed. But a single molecule on an AFM tip seems, when a person's hand is on the dial, to somehow be the equivalent of a long, thin stick that we're using to poke the atomic-level structure. What can I say; a vivid imagination is no particular handicap in this business!

Comments (6) + TrackBacks (0) | Category: Analytical Chemistry | Biological News

February 25, 2013

ENCODE: The Nastiest Dissent I've Seen in Quite Some Time

Email This Entry

Posted by Derek

Last fall we had the landslide of data from the ENCODE project, along with a similar landslide of headlines proclaiming that 80% of the human genome was functional. That link shows that many people (myself included) were skeptical of this conclusion at the time, and since then others have weighed in with their own doubts.

A new paper, from Dan Graur at Houston (and co-authors from Houston and Johns Hopkins), is really stirring things up. And whether you agree with its authors or not, it's well worth reading - you just don't see thunderous dissents like this one in the scientific literature very often. Here, try this out:

Thus, according to the ENCODE Consortium, a biological function can be maintained indefinitely without selection, which implies that (at least 70%) of the genome is perfectly invulnerable to deleterious mutations, either because no mutation can ever occur in these “functional” regions, or because no mutation in these regions can ever be deleterious. This absurd conclusion was reached through various means, chiefly (1) by employing the seldom used “causal role” definition of biological function and then applying it inconsistently to different biochemical properties, (2) by committing a logical fallacy known as “affirming the consequent,” (3) by failing to appreciate the crucial difference between “junk DNA” and “garbage DNA,” (4) by using analytical methods that yield biased errors and inflate estimates of functionality, (5) by favoring statistical sensitivity over specificity, and (6) by emphasizing statistical significance rather than the magnitude of the effect.

Other than that, things are fine. The paper goes on to detailed objections in each of those categories, and the tone does not moderate. One of the biggest objections is around the use of the word "function". The authors are at pains to distinguish selected effect functions from causal role functions, and claim that one of the biggest shortcomings of the ENCODE claims is that they blur this boundary. "Selected effects" are what most of us think about as well-proven functions: a TATAAA sequence in the genome binds a transcription factor, with effects on the gene(s) downstream of it. If there is a mutation in this sequence, there will almost certainly be functional consequences (and these will almost certainly be bad). Imagine, however, a random sequence of nucleotides that's close enough to TATAAA to bind a transcription factor. But in this case, there are no functional consequences - genes aren't transcribed differently, and nothing really happens other than the transcription factor parking there once in a while. That's a "causal role" function, and the whopping majority of the ENCODE functions appear to be in this class. "It looks sort of like something that has a function, therefore it has one". And while this can lead to discoveries, you have to be careful:

The causal role concept of function can lead to bizarre outcomes in the biological sciences. For example, while the selected effect function of the heart can be stated unambiguously to be the pumping of blood, the heart may be assigned many additional causal role functions, such as adding 300 grams to body weight, producing sounds, and preventing the pericardium from deflating onto itself. As a result, most biologists use the selected effect concept of function. . .

A mutation in that random TATAAA-like sequence would be expected to be silent compared to what would happen in a real binding motif. So one would want to know what percent of the genome is under selection pressure - that is, what part of it is unlikely to be mutatable without something happening. Those studies are where we get the figures of perhaps 10% of the DNA sequence being functional. Almost all of what ENCODE has declared to be functional, though, can show mutations with relative impunity:

From an evolutionary viewpoint, a function can be assigned to a DNA sequence if and only if it is possible to destroy it. All functional entities in the universe can be rendered nonfunctional by the ravages of time, entropy, mutation, and what have you. Unless a genomic functionality is actively protected by selection, it will accumulate deleterious mutations and will cease to be functional. The absurd alternative, which unfortunately was adopted by ENCODE, is to assume that no deleterious mutations can ever occur in the regions they have deemed to be functional. Such an assumption is akin to claiming that a television set left on and unattended will still be in working condition after a million years because no natural events, such as rust, erosion, static electricity, and earthquakes can affect it. The convoluted rationale for the decision to discard evolutionary conservation and constraint as the arbiters of functionality put forward by a lead ENCODE author (Stamatoyannopoulos 2012) is groundless and self-serving.

Basically, if you can't destroy a function by mutation, then there is no function to destroy. Even the most liberal definitions take this principle to apply to about 15% of the genome at most, so the 80%-or-more figure really does stand out. But this paper has more than philosophical objections to the ENCODE work. They point out that the consortium used tumor cell lines for its work, and that these are notoriously permissive in their transcription. One of the principles behind the 80% figure is that "if it gets transcribed, it must have a function", but you can't say that about HeLa cells and the like, which read off all sorts of pseudogenes and such (introns, mobile DNA elements, etc.)

One of the other criteria the ENCODE studies used for assigning function was histone modification. Now, this bears on a lot of hot topics in drug discovery these days, because an awful lot of time and effort is going into such epigenetic mechanisms. But (as this paper notes), this recent study illustrated that all histone modifications are not equal - there may, in fact, be a large number of silent ones. Another ENCODE criterion had to do with open (accessible) regions of chromatin, but there's a potential problem here, too:

They also found that more than 80% of the transcription start sites were contained within open chromatin regions. In yet another breathtaking example of affirming the consequent, ENCODE makes the reverse claim, and adds all open chromatin regions to the “functional” pile, turning the mostly true statement “most transcription start sites are found within open chromatin regions” into the entirely false statement “most open chromatin regions are functional transcription start sites.”

Similar arguments apply to the 8.5% of the genome that ENCODE assigns to transcription factor binding sites. When you actually try to experimentally verify function for such things, the huge majority of them fall out. (It's also noted that there are some oddities in ENCODE's definitions here - for example, they seem to be annotating 500-base stretches as transcription factor binding sites, when most of the verified ones are below 15 bases in length).
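
That 500-versus-15 discrepancy matters more than it might sound, and the arithmetic is worth doing explicitly. The genome size and window widths below are the figures mentioned above; the site count that falls out is just an implication of those numbers, not a figure from the paper.

genome_bp = 3.2e9            # approximate human genome size, base pairs
encode_fraction = 0.085      # fraction ENCODE assigns to transcription factor binding sites
annotated_window = 500       # bases per annotated site, per the criticism above
verified_motif = 15          # bases for a typical verified binding motif

# How many sites does the 8.5% figure imply, and what fraction of the genome
# would those same sites cover if counted at realistic motif width?
implied_sites = encode_fraction * genome_bp / annotated_window
motif_fraction = implied_sites * verified_motif / genome_bp
print(f"implied number of annotated sites: ~{implied_sites:,.0f}")
print(f"genome fraction at motif width:    {motif_fraction:.2%}")

Counting the very same sites at motif width instead of 500-base windows shrinks the "functional" fraction from 8.5% to roughly a quarter of a percent, which gives you a feel for how much of the headline number is annotation width rather than biology.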

Now, it's true that the ENCODE studies did try to address the idea of selection on all these functional sequences. But this new paper has a lot of very caustic things to say about the way this was done, and I'll refer you to it for the full picture. To give you some idea, though:

By choosing primate specific regions only, ENCODE effectively removed everything that is of interest functionally (e.g., protein coding and RNA-specifying genes as well as evolutionarily conserved regulatory regions). What was left consisted among others of dead transposable and retrotransposable elements. . .

. . .Because polymorphic sites were defined by using all three human samples, the removal of two samples had the unfortunate effect of turning some polymorphic sites into monomorphic ones. As a consequence, the ENCODE data includes 2,136 alleles each with a frequency of exactly 0. In a miraculous feat of “next generation” science, the ENCODE authors were able to determine the frequencies of nonexistent derived alleles.

That last part brings up one of the objections that many people may have to this paper - it does take on a rather bitter tone. I actually don't mind it - who am I to object, given some of the things I've said on this blog? But it could be counterproductive, leading to arguments over the insults rather than arguments over the things being insulted (and over whether they're worthy of the scorn). People could end up waving their hands and running around shouting in all the smoke, rather than figuring out how much fire there is and where it's burning. The last paragraph of the paper is a good illustration:

The ENCODE results were predicted by one of its authors to necessitate the rewriting of textbooks. We agree, many textbooks dealing with marketing, mass-media hype, and public relations may well have to be rewritten.

Well, maybe that was necessary. The amount of media hype was huge, and the only way to counter it might be to try to generate a similar amount of noise. It might be working, or starting to work - normally, a paper like this would get no popular press coverage at all. But will it make CNN? The Science section of the New York Times? ENCODE's results certainly did.

But what the general public thinks about this controversy is secondary. The real fight is going to be here in the sciences, and some of it is going to spill out of academia and into the drug industry. As mentioned above, a lot of companies are looking at epigenetic targets, and a lot of companies would (in general) very much like to hear that there are a lot more potential drug targets than we know about. That was what drove the genomics frenzy back in 1999-2000, an era that was not without its consequences. The coming of the ENCODE data was (for some people) the long-delayed vindication of the idea that gene sequencing was going to lead to a vast landscape of new disease targets. There was already a comment on my entry at the time suggesting that some industrial researchers were jumping on the ENCODE work as a new area to work in, and it wouldn't surprise me to see many others thinking similarly.

But we're going to have to be careful. Transcription factors and epigenetic mechanisms are hard enough to work on, even when they're carefully validated. Chasing after ephemeral ones would truly be a waste of time. . .

More reactions around the science blogging world: Wavefunction, Pharyngula, SciLogs, Openhelix. And there are (and will be) many more.

Comments (24) + TrackBacks (0) | Category: Biological News

February 13, 2013

Mouse Models of Inflammation Are Basically Worthless. Now We Know.

Email This Entry

Posted by Derek

We go through a lot of mice in this business. They're generally the first animal that a potential drug runs up against: in almost every case, you dose mice to check pharmacokinetics (blood levels and duration), and many areas have key disease models that run in mice as well. That's because we know a lot about mouse genetics (compared to other animals), and we have a wide range of natural mutants, engineered gene-knockout animals (difficult or impossible to do with most other species), and chimeric strains with all sorts of human proteins substituted back in. I would not wish to hazard a guess as to how many types of mice have been developed in biomedical labs over the years; it is a large number representing a huge amount of effort.

But are mice always telling us the right thing? I've written about this problem before, and it certainly hasn't gone away. The key things to remember about any animal model are that (1) it's a model, and (2) it's in an animal. Not a human. But it can be surprisingly hard to keep these in mind, because there's no way for a compound to become a drug other than going through the mice, rats, etc. No regulatory agency on Earth (OK, with the possible exception of North Korea) will let a compound through unless it's been through numerous well-controlled animal studies, for short- and long-term toxicity at the very least.

These thoughts are prompted by an interesting and alarming paper that's come out in PNAS: "Genomic responses in mouse models poorly mimic human inflammatory diseases". And that's the take-away right there, which is demonstrated comprehensively and with attention to detail.

Murine models have been extensively used in recent decades to identify and test drug candidates for subsequent human trials. However, few of these human trials have shown success. The success rate is even worse for those trials in the field of inflammation, a condition present in many human diseases. To date, there have been nearly 150 clinical trials testing candidate agents intended to block the inflammatory response in critically ill patients, and every one of these trials failed. Despite commentaries that question the merit of an overreliance of animal systems to model human immunology, in the absence of systematic evidence, investigators and public regulators assume that results from animal research reflect human disease. To date, there have been no studies to systematically evaluate, on a molecular basis, how well the murine clinical models mimic human inflammatory diseases in patients.

What this large multicenter team has found is that while various inflammation stresses (trauma, burns, endotoxins) in humans tend to go through pretty much the same pathways, the same is not true for mice. Not only do they show very different responses from humans (as measured by gene up- and down-regulation, among other things), they show different responses to each sort of stress. Humans and mice differ in what genes are called on, in their timing and duration of expression, and in what general pathways these gene products are found. Mice are completely inappropriate models for any study of human inflammation.

And there are a lot of potential reasons why this turns out to be so:

There are multiple considerations to our finding that transcriptional response in mouse models reflects human diseases so poorly, including the evolutional distance between mice and humans, the complexity of the human disease, the inbred nature of the mouse model, and often, the use of single mechanistic models. In addition, differences in cellular composition between mouse and human tissues can contribute to the differences seen in the molecular response. Additionally, the different temporal spans of recovery from disease between patients and mouse models are an inherent problem in the use of mouse models. Late events related to the clinical care of the patients (such as fluids, drugs, surgery, and life support) likely alter genomic responses that are not captured in murine models.

But even with all the variables inherent in the human data, our inflammation response seems to be remarkably coherent. It's just not what you see in mice. Mice have had different evolutionary pressures over the years than we have; their heterogeneous response to various sorts of stress is what's served them well, for whatever reasons.
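
When the paper says the murine responses "poorly mimic" the human ones, the underlying comparison is essentially this sort of thing: line up the fold-changes of the responding genes in the two species and ask how well they track. Here's a toy version with simulated numbers, not the study's actual data or code:

import numpy as np

rng = np.random.default_rng(1)
n_genes = 5000

# Simulated log2 fold-changes for genes responding to an inflammatory insult.
human_burns  = rng.normal(0, 2, n_genes)
human_trauma = 0.9 * human_burns + rng.normal(0, 0.6, n_genes)  # humans: different insults, similar program
mouse_burns  = rng.normal(0, 2, n_genes)                         # mice: an essentially unrelated response

def r2(a, b):
    """Squared Pearson correlation between two fold-change vectors."""
    return float(np.corrcoef(a, b)[0, 1] ** 2)

print(f"human burns vs human trauma: R^2 = {r2(human_burns, human_trauma):.2f}")
print(f"human burns vs mouse burns:  R^2 = {r2(human_burns, mouse_burns):.2f}")

The made-up human-versus-human comparison comes out highly correlated while the human-versus-mouse one hovers near zero, which is the qualitative pattern the paper reports: coherent across human insults, essentially uncorrelated with the mouse models.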

There are several very large and ugly questions raised by this work. All of us who do biomedical research know that mice are not humans (nor are rats, nor are dogs, etc.) But, as mentioned above, it's easy to take this as a truism - sure, sure, knew that - because all our paths to human go through mice and the like. The New York Times article on this paper illustrates the sort of habits that you get into (emphasis below added):

The new study, which took 10 years and involved 39 researchers from across the country, began by studying white blood cells from hundreds of patients with severe burns, trauma or sepsis to see what genes are being used by white blood cells when responding to these danger signals.

The researchers found some interesting patterns and accumulated a large, rigorously collected data set that should help move the field forward, said Ronald W. Davis, a genomics expert at Stanford University and a lead author of the new paper. Some patterns seemed to predict who would survive and who would end up in intensive care, clinging to life and, often, dying.

The group had tried to publish its findings in several papers. One objection, Dr. Davis said, was that the researchers had not shown the same gene response had happened in mice.

“They were so used to doing mouse studies that they thought that was how you validate things,” he said. “They are so ingrained in trying to cure mice that they forget we are trying to cure humans.”

“That started us thinking,” he continued. “Is it the same in the mouse or not?”

What's more, the article says that this paper was rejected from Science and Nature, among other venues. And one of the lead authors says that the reviewers mostly seemed to be saying that the paper had to be wrong. They weren't sure where things had gone wrong, but a paper saying that murine models were just totally inappropriate had to be wrong somehow.

We need to stop being afraid of the obvious, if we can. "Mice aren't humans" is about as obvious a statement as you can get, but the limitations of animal models are taken so much for granted that we actually dislike being told that they're even worse than we thought. We aren't trying to cure mice. We aren't trying to make perfect diseases models and beautiful screening cascades. We aren't trying to perfectly match molecular targets with diseases, and targets with compounds. Not all the time, we aren't. We're trying to find therapies that work, and that goal doesn't always line up with those others. As painful as it is to admit.

Comments (50) + TrackBacks (0) | Category: Animal Testing | Biological News | Drug Assays | Infectious Diseases

February 12, 2013

Do We Really Know the Cause for Over 4500 Diseases?

Email This Entry

Posted by Derek

Since I mentioned the NIH in the context of the Molecular Libraries business, I wanted to bring up something else that a reader sent along to me. There's a persistent figure that's floated whenever the agency talks about translational medicine: 4500 diseases. Here's an example:

Therapeutic development is a costly, complex and time-consuming process. In recent years, researchers have succeeded in identifying the causes of more than 4,500 diseases. But it has proven difficult to turn such knowledge into new therapies; effective treatments exist for only about 250 of these conditions.

It shows up again in this paper, just out, and elsewhere. But is it true?

Do we really know the causes of 4,500 diseases? Outside of different cancer cellular types and various infectious agents, are there even 4,500 diseases, total? And if not, how many are there, then? I ask because that figure seems rather high. There are a lot of single-point-mutation genetic disorders to which we can pretty confidently assign a cause, but some of them (cystic fibrosis, for example) are considered one disease even though they can be arrived at through a variety of mutations. Beyond that, do we really know the absolute molecular-level cause of, say, type II diabetes? (We know a lot of very strong candidates, but the interplay between them, now, there's the rub). Alzheimer's? Arthritis? Osteoporosis? Even in the cases where we have a good knowledge of what the proximate cause of the trouble is (thyroid insufficiency, say, or Type I diabetes), do we really know what brought on that state, or how to prevent it? Sometimes, but not very often, is my impression. So where does this figure come from?

The best guess is here, GeneMap. But read the fine print: "Phenotypes include single-gene mendelian disorders, traits, some susceptibilities to complex disease . . . and some somatic cell genetic disease. . ." My guess is that a lot of what's under that banner does not rise to "knowing the cause", but I'd welcome being corrected on that point.

Comments (22) + TrackBacks (0) | Category: Biological News

January 30, 2013

Farewell to Bioinformatics

Email This Entry

Posted by Derek

Here are some angry views that I don't necessarily endorse, but I can't say that they're completely wrong, either. A programmer bids an angry farewell to the bioinformatics world:

Bioinformatics is an attempt to make molecular biology relevant to reality. All the molecular biologists, devoid of skills beyond those of a laboratory technician, cried out for the mathematicians and programmers to magically extract science from their mountain of shitty results.

And so the programmers descended and built giant databases where huge numbers of shitty results could be searched quickly. They wrote algorithms to organize shitty results into trees and make pretty graphs of them, and the molecular biologists carefully avoided telling the programmers the actual quality of the results. When it became obvious to everyone involved that a class of results was worthless, such as microarray data, there was a rush of handwaving about “not really quantitative, but we can draw qualitative conclusions” followed by a hasty switch to a new technique that had not yet been proved worthless.

And the databases grew, and everyone annotated their data by searching the databases, then submitted in turn. No one seems to have pointed out that this makes your database a reflection of your database, not a reflection of reality. Pull out an annotation in GenBank today and it’s not very long odds that it’s completely wrong.

That's unfair to molecular biologists, but is it unfair to the state of bioinformatic databases? Comments welcome. . .

Update: more comments on this at Ycombinator.

Comments (62) + TrackBacks (0) | Category: Biological News | In Silico

January 15, 2013

Is Obesity An Infectious Disease?

Email This Entry

Posted by Derek

Like many people, I have a weakness for "We've had it all wrong!" explanations. Here's another one, or part of one: is obesity an infectious disease?

During our clinical studies, we found that Enterobacter, a genus of opportunistic, endotoxin-producing pathogens, made up 35% of the gut bacteria in a morbidly obese volunteer (weight 174.8 kg, body mass index 58.8 kg m−2) suffering from diabetes, hypertension and other serious metabolic deteriorations. . .

. . .After 9 weeks on (a special diet), this Enterobacter population in the volunteer's gut reduced to 1.8%, and became undetectable by the end of the 23-week trial, as shown in the clone library analysis. The serum–endotoxin load, measured as LPS-binding protein, dropped markedly during weight loss, along with substantial improvement of inflammation, decreased level of interleukin-6 and increased adiponectin. Metagenomic sequencing of the volunteer's fecal samples at 0, 9 and 23 weeks on the WTP diet confirmed that during weight loss, the Enterobacteriaceae family was the most significantly reduced population. . .

They went on to do the full Koch workup, by taking an isolated Enterobacter strain from the human patient and introducing it into gnotobiotic (germ-free) mice. These mice are usually somewhat resistant to becoming obese on a high-fat diet, but after being inoculated with the bacterial sample, they put on substantial weight, became insulin resistant, and showed numerous (consistent) alterations in their lipid and glucose handling pathways. Interestingly, the germ-free mice that were inoculated with bacteria and fed normal chow did not show these effects.

The hypothesis is that the endotoxin-producing bacteria are causing a low-grade chronic inflammation in the gut, which is exacerbated to a more systemic form by the handling of excess lipids and fatty acids. The endotoxin itself may be swept up in the chylomicrons and translocated through the gut wall. The summary:

. . .This work suggests that the overgrowth of an endotoxin-producing gut bacterium is a contributing factor to, rather than a consequence of, the metabolic deteriorations in its human host. In fact, this strain B29 is probably not the only contributor to human obesity in vivo, and its relative contribution needs to be assessed. Nevertheless, by following the protocol established in this study, we hope to identify more such obesity-inducing bacteria from various human populations, gain a better understanding of the molecular mechanisms of their interactions with other members of the gut microbiota, diet and host for obesity, and develop new strategies for reducing the devastating epidemic of metabolic diseases.

Considering the bacterial origin of ulcers, I think this is a theory that needs to be taken seriously, and I'm glad to see it getting checked out. We've been hearing a lot the last few years about the interaction between human physiology and our associated bacterial population, but the attention is deserved. The problem is, we're only beginning to understand what these ecosystems are like, how they can be disordered, and what the consequences are. Anyone telling you that they have it figured out at this point is probably trying to sell you something. It's worth the time to figure out, though. . .

Comments (32) + TrackBacks (0) | Category: Biological News | Diabetes and Obesity | Infectious Diseases

January 14, 2013

Another Reactive Oxygen Paper

Email This Entry

Posted by Derek

Picking up on that reactive oxygen species (ROS) business from the other day (James Watson's paper suggesting that it could be a key anticancer pathway), I wanted to mention this new paper, called to my attention this morning by a reader. It's from a group at Manchester studying regeneration of tissue in Xenopus tadpoles, and they note high levels of intracellular hydrogen peroxide in the regenerating tissue. Moreover, antioxidant treatment impaired the regeneration, as did genetic manipulation of ROS generation.

Now, inflammatory cells are known to produce plenty of ROS, and they're also involved in tissue injury. But that doesn't seem to be quite the connection here, because the tissue ROS levels peaked before the recruitment of such cells did. (This is consistent with previous work in zebrafish, which also showed hydrogen peroxide as an essential signal in wound healing). The Manchester group was able to genetically impair ROS generation by knocking down a protein in the NOX enzyme complex, a major source of ROS production. This also impaired regeneration, an effect that could be reversed by a rescue competition experiment.

Further experiments implicated Wnt/beta-catenin signaling in this process, which is certainly plausible, given the position of that cascade in cellular processes. That also ties in with a 2006 report of hydrogen peroxide signaling through this pathway (via a protein called nucleoredoxin).

You can see where this work is going, and so can the authors:

. . .our work suggests that increased production of ROS plays a critical role in facilitating Wnt signalling following injury, and therefore allows the regeneration program to commence. Given the ubiquitous role of Wnt signalling in regenerative events, this finding is intriguing as it might provide a general mechanism for injury-induced Wnt signalling activation across all regeneration systems, and furthermore, manipulating ROS may provide a means to induce the activation of a regenerative program in those cases where regeneration is normally limited.

Most of us reading this site belong to one of those regeneration-limited species, but perhaps it doesn't always have to be this way? Taken together, it does indeed look like (1) ROS (hydrogen peroxide among others) are important intracellular signaling molecules (which conclusion has been clear for some time now), and (2) the pathways involved are crucial growth and regulatory ones, relating to apoptosis, wound healing, cancer, the effects of exercise, all very nontrivial things indeed, and (3) these pathways would appear to be very high-value ones for pharmaceutical intervention (stay tuned).

As a side note, Paracelsus has once again been reaffirmed: the dose does indeed make the poison, as does its timing and location. Water can drown you, oxygen can help burn you, but both of them keep you alive.

Comments (5) + TrackBacks (0) | Category: Biological News

January 11, 2013

Reactive Oxygen Species Are Your Friends!

Email This Entry

Posted by Derek

The line under James Watson's name reads, of course, "Co-discoverer of DNA's structure. Nobel Prize". But it could also read "Provocateur", since he's been pretty good at that over the years. He seems to have the right personality for it - both The Double Helix (fancy new edition there) and its notorious follow-up volume Avoid Boring People illustrate the point. There are any number of people who've interacted with him over the years who can't stand the guy.

But it would be a simpler world if everyone that we found hard to take was wrong about everything, wouldn't it? I bring this up because Watson has published an article, again deliberately provocative, called "Oxidants, Antioxidants, and the Current Incurability of Metastatic Cancers". Here's the thesis:

The vast majority of all agents used to directly kill cancer cells (ionizing radiation, most chemotherapeutic agents and some targeted therapies) work through either directly or indirectly generating reactive oxygen species that block key steps in the cell cycle. As mesenchymal cancers evolve from their epithelial cell progenitors, they almost inevitably possess much-heightened amounts of antioxidants that effectively block otherwise highly effective oxidant therapies.

The article is interesting throughout, but can fairly be described as "rambling". He starts with details of the complexity of cancerous mutations, which is a topic that's come up around here several times (as it does wherever potential cancer therapies are discussed, at least by people with some idea of what they're talking about). Watson is paying particular attention here to mesenchymal tumors:

Resistance to gene-targeted anti-cancer drugs also comes about as a consequence of the radical changes in underlying patterns of gene expression that accompany the epithelial-to-mesenchymal cell transitions (EMTs) that cancer cells undergo when their surrounding environments become hypoxic [4]. EMTs generate free-floating mesenchymal cells whose flexible shapes and still high ATP-generating potential give them the capacity for amoeboid cell-like movements that let them metastasize to other body locations (brain, liver, lungs). Only when they have so moved do most cancers become truly life-threatening. . .

. . .Unfortunately, the inherently very large number of proteins whose expression goes either up or down as the mesenchymal cancer cells move out of quiescent states into the cell cycle makes it still very tricky to know, beyond the cytokines, what other driver proteins to focus on for drug development.

That it does. He makes the case (as have others) that Myc could be one of the most important protein targets - and notes (as have others!) that drug discovery efforts against the Myc pathway have run into many difficulties. There's a good amount of discussion about BRD4 compounds as a way to target Myc. Then he gets down to the title of the paper and starts talking about reactive oxygen species (ROS). Links in the section below added by me:

That elesclomol promotes apoptosis through ROS generation raises the question whether much more, if not most, programmed cell death caused by anti-cancer therapies is also ROS-induced. Long puzzling has been why the highly oxygen sensitive ‘hypoxia-inducible transcription factor’ HIF1α is inactivated by both the, until now thought very differently acting, ‘microtubule binding’ anti-cancer taxanes such as paclitaxel and the anti-cancer DNA intercalating topoisomerases such as topotecan or doxorubicin, as well as by frame-shifting mutagens such as acriflavine. All these seemingly unrelated facts finally make sense by postulating that not only does ionizing radiation produce apoptosis through ROS but also today's most effective anti-cancer chemotherapeutic agents as well as the most efficient frame-shifting mutagens induce apoptosis through generating the synthesis of ROS. That the taxane paclitaxel generates ROS through its binding to DNA became known from experiments showing that its relative effectiveness against cancer cell lines of widely different sensitivity is inversely correlated with their respective antioxidant capacity. A common ROS-mediated way through which almost all anti-cancer agents induce apoptosis explains why cancers that become resistant to chemotherapeutic control become equally resistant to ionizing radiotherapy. . .

. . .The fact that cancer cells largely driven by RAS and Myc are among the most difficult to treat may thus often be due to their high levels of ROS-destroying antioxidants. Whether their high antioxidative level totally explains the effective incurability of pancreatic cancer remains to be shown. The fact that late-stage cancers frequently have multiple copies of RAS and MYC oncogenes strongly hints that their general incurability more than occasionally arises from high antioxidant levels.

He adduces several other lines of supporting evidence for this idea, and then he gets to the take-home message:

For as long as I have been focused on the understanding and curing of cancer (I taught a course on Cancer at Harvard in the autumn of 1959), well-intentioned individuals have been consuming antioxidative nutritional supplements as cancer preventatives if not actual therapies. The past, most prominent scientific proponent of their value was the great Caltech chemist, Linus Pauling, who near the end of his illustrious career wrote a book with Ewan Cameron in 1979, Cancer and Vitamin C, about vitamin C's great potential as an anti-cancer agent [52]. At the time of his death from prostate cancer in 1994, at the age of 93, Linus was taking 12 g of vitamin C every day. In light of the recent data strongly hinting that much of late-stage cancer's untreatability may arise from its possession of too many antioxidants, the time has come to seriously ask whether antioxidant use much more likely causes than prevents cancer.

All in all, the by now vast number of nutritional intervention trials using the antioxidants β-carotene, vitamin A, vitamin C, vitamin E and selenium have shown no obvious effectiveness in preventing gastrointestinal cancer nor in lengthening mortality [53]. In fact, they seem to slightly shorten the lives of those who take them. Future data may, in fact, show that antioxidant use, particularly that of vitamin E, leads to a small number of cancers that would not have come into existence but for antioxidant supplementation. Blueberries best be eaten because they taste good, not because their consumption will lead to less cancer.

Now this is quite interesting. The first thing I thought of when I read this was the work on ROS in exercise. This showed that taking antioxidants appeared to cancel out the benefits of exercise, probably because reactive oxygen species are the intracellular signal that sets those benefits off. Taken together, I think we need to seriously consider whether efforts to control ROS are, in fact, completely misguided. They are, perhaps, "essential poisons", without which our cellular metabolism loses its way.

Update: I should also note the work of Joan Brugge's lab in this area, blogged about here. Taken together, you'd really have to advise against cancer patients taking antioxidants, wouldn't you?

Watson ends the article by suggesting, none too diplomatically, that much current cancer research is misguided:

The now much-touted genome-based personal cancer therapies may turn out to be much less important tools for future medicine than the newspapers of today lead us to hope [54]. Sending more government cancer monies towards innovative, anti-metastatic drug development to appropriate high-quality academic institutions would better use National Cancer Institute's (NCI) monies than the large sums spent now testing drugs for which we have little hope of true breakthroughs. The biggest obstacle today to moving forward effectively towards a true war against cancer may, in fact, come from the inherently conservative nature of today's cancer research establishments. They still are too closely wedded to moving forward with cocktails of drugs targeted against the growth promoting molecules (such as HER2, RAS, RAF, MEK, ERK, PI3K, AKT and mTOR) of signal transduction pathways instead of against Myc molecules that specifically promote the cell cycle.

He singles out the Cancer Genome Atlas project as an example of this sort of thing, saying that while he initially supported it, he no longer does. It will, he maintains, tend to find mostly cancer cell "drivers" as opposed to "vulnerabilities". He's more optimistic about a big RNAi screening effort that's underway at his own Cold Spring Harbor, although he admits that this enthusiasm is "far from universally shared".

We'll find out which is the more productive approach - I'm glad that they're all running, personally, because I don't think I know enough to bet it all on one color. If Watson is right, Pfizer might be the biggest beneficiary in the drug industry - if, and it's a big if, the RNAi screening unearths druggable targets. This is going to be a long-running story - I'm sure that we'll be coming back to it again and again. . .

Comments (21) + TrackBacks (0) | Category: Biological News | Cancer

December 21, 2012

The Last Thing a Professor Wants to Hear

Email This Entry

Posted by Derek

This can't be good. A retraction in PNAS on some RNA-driven cell death research from a lab at Caltech:

Anomalous experimental results observed by multiple members of the Pierce lab during follow-on studies raised concerns of possible research misconduct. An investigation committee of faculty at the California Institute of Technology indicated in its final report on this matter that the preponderance of the evidence and the reasons detailed in the report established that the first author falsified and misrepresented data published in this paper. An investigation at the United States Office of Research Integrity is ongoing.

As that link from Retraction Watch notes, the first author himself was not one of the signees of that retraction statement - as one might well think - and he now appears to be living in London. He appears to have left quite a mess behind in Pasadena.

Comments (14) + TrackBacks (0) | Category: Biological News | The Dark Side | The Scientific Literature

December 12, 2012

Sue the Nobel Committee. Yeah, That'll Work.

Email This Entry

Posted by Derek

Rongxiang Xu is upset with this year's Nobel Prize award for stem cell research. He believes that work he did is so closely related to the subject of the prize that. . .he wants his name on it? No, apparently not. That he wants some of the prize money? Nope, not that either. That he thinks the prize was wrongly awarded? No, he's not claiming that.

What he's claiming is that the Nobel Committee has defamed his reputation as a stem cell pioneer by leaving him off, and he wants damages. Now, this is a new one, as far as I know. The closest example comes from 2003, when there was an ugly controversy over the award for NMR imaging (here's a post from the early days of this blog about it). Dr. Raymond Damadian took out strongly worded (read "hopping mad") advertisements in major newspapers claiming that the Nobel Committee had gotten the award wrong, and that he should have been on it. In vain. The Nobel Committee(s) have never backed down in such a case - although there have been some where you could make a pretty good argument - and they never will, as far as I can see.

Xu, who works in Los Angeles, is founder and chairman of the Chinese regenerative medicine company MEBO International Group. The company sells a proprietary moist-exposed burn ointment (MEBO) that induces "physiological repair and regeneration of extensively wounded skin," according to the company's website. Application of the wound ointment, along with other treatments, reportedly induces embryonic epidermal stem cells to grow in adult human skin cells. . .

. . .Xu's team allegedly awakened intact mature somatic cells to turn to pluripotent stem cells without engineering in 2000. Therefore, Xu claims, the Nobel statement undermines his accomplishments, defaming his reputation.

Now, I realize that I'm helping, in my small way, to give this guy publicity, which is one of the things he most wants out of this effort. But let me make myself clear - I'm giving him publicity in order to roll my eyes at him. I look forward to following Xu's progress through the legal system, and I'll bet his legal team looks forward to it as well, as long as things are kept on a steady payment basis.

Comments (21) + TrackBacks (0) | Category: Biological News

November 8, 2012

Picosecond Protein Watching

Email This Entry

Posted by Derek

We're getting closer to real-time X-ray structures of protein function, and I think I speak for a lot of chemists and biologists when I say that this has been a longstanding dream. X-ray structures, when they work well, can give you atomic-level structural data, but they've been limited to static time scales. In the old, old days, structures of small molecules were a lot of work, and the structure of a protein took years of hard labor and was obvious Nobel Prize material. As time went on, brighter X-ray sources and much better detectors sped things up (since a lot of the X-rays diffracted from a large compound are of very low intensity), and computing power came along to crunch through the piles of data thus generated. These days, X-ray structures are generated for systems of huge complexity and importance. Working at that level is no stroll through the garden, but more tractable protein structures are generated almost routinely (although growing good protein crystals is still something of a dark art, and is accomplished through what can accurately be called enlightened brute force).

But even with synchrotron X-ray sources blasting your crystals, you're still getting a static picture. And proteins are not static objects; the whole point of them is how they move (and for enzymes, how they get other molecules to move in their active sites). I've heard Barry Sharpless quoted to the effect that understanding an enzyme by studying its X-ray structures is like trying to get to know a person by visiting their corpse. I haven't heard him say that (although it sounds like him!), but whoever said it was correct.

Comes now this paper in PNAS, a multinational effort with the latest on the attempts to change that situation. The team is looking at photoactive yellow protein (PYP), a blue-light receptor protein from a purple sulfur bacterium. Those guys vigorously swim away from blue light, which they find harmful, and this seems to be the receptor that alerts them to its presence. And the inner workings of the protein are known, to some extent. There's a p-coumaric acid in there, bound to a Cys residue, and when blue light hits it, the double bond switches from trans to cis. The resulting conformational change is the signaling event.

But while knowing things at that level is fine (and took no small amount of work), there are still a lot of questions left unanswered. The actual isomerization is a single-photon event and happens in a picosecond or two. But the protein changes that happen after that, well, those are a mess. A lot of work has gone into trying to unravel what moves where, and when, and how that translates into a cellular signal. And although this is a mere purple sulfur bacterium (What's so mere? They've been on this planet a lot longer than we have), these questions are exactly the ones that get asked about protein conformational signaling all through living systems. The rods and cones in your eyes are doing something very similar as you read this blog post, as are the neurotransmitter receptors in your optic nerves, and so on.
This technique, variations of which have been coming on for some years now, uses multiple wavelengths of X-rays simultaneously, and scans them across large protein crystals. Adjusting the timing of the X-ray pulse compared to the light pulse that sets off the protein motion gives you time-resolved spectra - that is, if you have extremely good equipment, world-class technique, and vast amounts of patience. (For one thing, this has to be done over and over again from many different angles).
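For readers who like to see the bookkeeping, here's a schematic sketch of how a pump-probe scan of this sort gets organized - a grid of time delays crossed with crystal orientations. The delay values, angles, and acquisition functions are all placeholders of mine, not anything from the actual beamline setup:

import numpy as np

def fire_pump_pulse():
    # Hypothetical stand-in: trigger the blue-light laser pulse.
    pass

def record_laue_image(delay_s, angle_deg):
    # Hypothetical stand-in: the pump-probe delay is set in hardware;
    # it's passed here only for bookkeeping. Returns a placeholder frame.
    return np.zeros((2048, 2048))

# Logarithmically spaced delays from ~100 ps out to milliseconds, since the
# interesting chemistry spans many decades of time.
delays = np.logspace(-10, -3, num=8)     # seconds
angles = np.arange(0, 180, 15)           # crystal orientations, degrees

frames = {}
for delay in delays:
    for angle in angles:
        fire_pump_pulse()
        frames[(delay, angle)] = record_laue_image(delay, angle)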

And here's what's happening: first off, the cis structure is quite weird. The carbonyl is 90 degrees out of the plane, making (among other things) a very transient hydrogen bond with a backbone nitrogen. Several dihedral angles have to be distorted to accommodate this, and it's a testament to the weirdness of protein active sites that it exists at all. It then twangs back to a planar conformation, but at the cost of breaking another hydrogen bond back at the phenolate end of things. That leaves another kind of strain in the system, which is relieved by a shift to yet another intermediate structure through a dihedral rotation, and that one in turn goes through a truly messy transition to a blue-shifted intermediate. That involves four hydrogen bonds and a 180-degree rotation in a dihedral angle, and seems to be the weak link in the whole process - about half the transitions fail and flop back to the ground state at that point. That also lets a crucial water molecule into the mix, which sets up the transition to the actual signaling state of the protein.

If you want more details, the paper is open-access, and includes movie files of these transitions and much more detail on what's going on. What we're seeing is light energy being converted (and channeled) into structural strain energy. I find this sort of thing fascinating, and I hope that the technique can be extended in the way the authors describe:

The time-resolved methodology developed for this study of PYP is, in principle, applicable to any other crystallizable protein whose function can be directly or indirectly triggered with a pulse of light. Indeed, it may prove possible to extend this capability to the study of enzymes, and literally watch an enzyme as it functions in real time with near-atomic spatial resolution. By capturing the structure and temporal evolution of key reaction intermediates, picosecond time-resolved Laue crystallography can provide an unprecedented view into the relations between protein structure, dynamics, and function. Such detailed information is crucial to properly assess the validity of theoretical and computational approaches in biophysics. By combining incisive experiments and theory, we move closer to resolving reaction pathways that are at the heart of biological functions.

Speed the day. That's the sort of thing we chemists need to really understand what's going on at the molecular level, and to start making our own enzymes to do things that Nature never dreamed of.

Comments (13) + TrackBacks (0) | Category: Analytical Chemistry | Biological News | Chemical Biology | Chemical News

October 10, 2012

The 2012 Nobel In Chemistry. Yes, Chemistry.

Email This Entry

Posted by Derek

A deserved Nobel? Absolutely. But the grousing has already started. The 2012 Nobel Prize for Chemistry has gone to Bob Lefkowitz (Duke) and Brian Kobilka (Stanford) for GPCRs, G-protein coupled receptors.

Update: here's an excellent overview of Kobilka's career and research.

Everyone who's done drug discovery knows what GPCRs are, and most of us have worked on molecules to target them at one point or another. At least a third of marketed drugs, after all, are GPCR ligands, so their importance is hard to overstate. That's why I say that this Nobel is completely deserved (and has been anticipated for some time now). I've written about them numerous times here over the years, and I'm going to forgo the chance to explain them in detail again. For more information I can recommend the Nobel site's popular background and their more detailed scientific background - they've already done the explanatory work.

I will say a bit about where GPCRs fit into the world of drug targets, though, since they've been so important to pharma R&D. Everyone had realized, for decades (more like centuries), that cells had to be able to send signals to each other somehow. But how was this done? No matter what, there had to be some sort of transducer mechanism, because any signal would arrive on the outside of the cell membrane and then (somehow) be carried across and set off activity inside the cell. As it became clear that small molecules (both the body's own and artificial ones from outside) could have signaling effects, the idea of a "receptor" became inescapable. But it's worth remembering that up until the mid-1970s you could find people - in print, no less - warning readers that the idea of a receptor as a distinct physical object was unproven and could be an unwarranted assumption. Everyone knew that molecular signals were being handled somehow, but it was very unclear what (or how many) pieces there were to the process. This year's award recognizes the lifting of that fog.

It also recognizes something else very important, and here I want to rally my fellow chemists. As I mentioned above, the complaints are already starting that this is yet another chemistry prize that's been given to the biologists. But this is looking at things the wrong way around. Biology isn't invading chemistry - biology is turning into chemistry. Giving the prize this year to Lefkowitz and Kobilka takes us from the first cloning of a GPCR (biology, biology all the way) to a detailed understanding of their molecular structure (chemistry!) And that's the story of molecular biology for you, right there. As it lives up to its name, its practitioners have had to start thinking of their tools and targets as real, distinct molecules. They have shapes, they have functional groups, they have stereochemistry and localized charges and conformations. They're chemicals. That's what kept occurring to me at the recent chemical biology conference I attended: anyone who's serious about understanding this stuff has to understand it in terms of chemistry, not in terms of "this square interacts with this circle, which has an arrow to this box over here, which cycles to this oval over here with a name in the middle of it. . ." Those old schematics will only take you so far.

So, my fellow chemists, cheer the hell up already. Vast new territories are opening up to our expertise and our ways of looking at the world, and we're going to be needed to understand what to do next. Too many people are making me think of those who objected to the Louisiana Purchase or the annexation of California, who wondered what we could possibly ever want with those trackless wastelands to the West and how they could ever be part of the country. Looking at molecular biology and sighing "But it's not chemistry. . ." misses the point. I've had to come around to this view myself, but more and more I'm thinking it's the right one.

Comments (53) + TrackBacks (0) | Category: Biological News | Chemical News

September 13, 2012

ENCODE And What It All Means

Email This Entry

Posted by Derek

You'll have heard about the massive data wave that hit (30 papers!) courtesy of the ENCODE project. That stands for Encyclopedia of DNA Elements, and it's been a multiyear effort to go beyond the bare sequence of human DNA and look for functional elements. We already know that only around 1% of the human sequence is made up of what we can recognize as real, traditional genes: stretches that code for proteins, have start and stop codons, and so on. And it's not like that's so straightforward, either, what with all the introns and whatnot. But that leaves an awful lot of DNA that's traditionally been known by the disparaging name of "junk", and surely it can't just be that - can it?

Some of it does its best to make you think that way, for sure. Transposable elements like Alu sequences, which are repeated relentlessly hundreds of thousands of times throughout the human DNA sequence, must either be junk, inert spacer, or so wildly important that we just can't have too many copies of them. But DNA is three-dimensional (and how), and its winding and unwinding is crucial to gene expression. Surely a good amount of that apparently useless stuff is involved in these processes and other epigenetic phenomena.

And the ENCODE group has indeed discovered a lot of this sort of thing. But as this excellent overview from Brendan Maher at Nature shows, it hasn't discovered quite as many as the headlines might lead you to think. (And neither has it demolished the idea that the noncoding 99% of the genome is all junk, because you can't find anyone who believed that one, either). The figure that's in all the press writeups is that this work has assigned functions for 80% of the human genome, which would be an astonishing figure on several levels. For one thing, it would mean that we'd certainly missed an awful lot before, and for another, it would mean that the genome is a heck of a lot more information-rich than we ever thought it might be.

But neither of those quite seems to be the case. It all depends on what you mean by "functional", and opinions most definitely vary. See this post by Ed Yong for some of the categories, which range out to some pretty broad, inclusive definitions of "function". A better estimate is that maybe 20% of the genome can directly influence gene expression, which is very interesting and useful, but ain't no 80%, either. That Nature post provides a clear summary of the arguments about these figures.

But even that more-solid 20% figure is going to keep us all busy for a long time. Learning how to affect these gene transcription mechanisms should be a very important route to new therapies. If you remember all the hype about how the genome was going to unlock cures to everything - well, this is the level we're actually going to have to work at to make anything in that line come true. There's a lot of work to be done, though. Somehow, different genes are expressed at different times, in different people, in response to a huge variety of environmental cues. It's quite a tangle, but in theory, it's a tangle that can be unraveled, and as that happens, it's going to provide a lot of potential targets for therapy. Not easy targets, mind you - those are probably gone - but targets nonetheless.

One of the best ways to get a handle on all this work is this very interesting literature experiment at Nature - a portal into the ENCODE project data, organized thematically, and with access to all the papers involved across the different journals. If you're interested in epigenetics at all, this is a fine place to read up on the results of this work. And if you're not, it's still worth exploring to see how the scientific literature might be presented and curated. This approach, it seems to me, potentially adds a great deal of value. Eventually, the PDF-driven looks-like-a-page approach to the literature will go extinct, and something else will replace it. Some of it might look a bit like this.

Note, just for housekeeping purposes - I wrote this post for last Friday, but only realized today that it didn't publish, thus the lack of an entry that day. So here it is, better late, I hope, than never. There's more to say about epigenetics, too, naturally. . .

Comments (16) + TrackBacks (0) | Category: Biological News | The Scientific Literature

September 6, 2012

Databases and Money

Email This Entry

Posted by Derek

The NIH has been cutting back on its funding (via the National Library of Medicine) for a number of external projects. One of those on the chopping block is the Biological Magnetic Resonance Bank (BMRB), at Wisconsin:

The BMRB mission statement is to “collect, annotate, archive and disseminate (worldwide in the public domain)” NMR data on biological macromolecules and metabolites, to “empower scientists” and to “support further development of the field.” Despite its indisputable success in achieving these goals, the BMRB is facing serious funding challenges.

Since 1990, the BMRB has received continuous support from the National Library of Medicine (NLM), at the US National Institutes of Health, in the form of five-year grants. However, the BMRB obtained its latest grant renewal in 2009, accompanied by a sharp reduction in the funding level. It was also to be the last renewal, as the NLM announced that funding for all external centers would be phased out as their grants expire. Thus, as of today, the BMRB has no means of financial support after September 2014.

That editorial link above, from Nature Structural and Molecular Biology, also lists several other database projects formerly supported by the NLM. These are far enough outside my own field that I've never had call to use any of them as a medicinal chemist, but (as that last link shows) they are indeed used, and by plenty of researchers.

This problem won't be going away, since the volume of data produced these days shows no sign of any inflection points. Molecular genetics, protein biology, and structural biology in general are producing vast piles of material. Having as much of it as possible brought together and curated is clearly in the best interest of scientific research - but again, who pays?

Comments (19) + TrackBacks (0) | Category: Biological News

August 16, 2012

Just How Do Enzymes Work?

Email This Entry

Posted by Derek

How do enzymes work? People have been trying to answer that, in detail, for decades. There's no point in trying to do it without running down all those details, either, because we already know the broad picture: enzymes work by bringing reactive groups together under extremely favorable conditions so that reaction rates speed up tremendously. Great! But how do they bring those things together, how does their reactivity change, and what kinds of favorable conditions are we talking about here?

And some of this we know, too. You can see, in many enzyme active sites, that the protein is stabilizing the transition state of the reaction, lowering its energy so it's easier to jump over the hump to product. It wouldn't surprise me to see the energies of some starting materials being raised to effect that same barrier-lowering, although I don't know of any examples of that off the top of my head. But even this level of detail raises still more questions: what interactions are these that lower and raise these energies? How much of a price is paid, thermodynamically, to do these things, and how does that break out into entropic and enthalpic terms?

Some of those answers are known, to some degree, in some systems. But still more questions remain. One of the big ones has been the degree to which protein motion contributes to enzyme action. Now, we can see some big conformational changes taking place with some proteins, but what about the normal background motions? Intellectually, it makes sense that enzymes would have learned, over the millennia, to take advantage of this, since it's for sure that their structures are always vibrating. But proving that is another thing entirely.

Modern spectroscopy may have done the trick. This new paper from groups at Manchester and Oxford reports painstaking studies on B-12 dependent ethanolamine ammonia lyase. Not an enzyme I'd ever heard of, that one, but "enzymes I've never heard of" is a rather roomy category. It's an interesting one, though, partly because it goes through a free radical mechanism, and partly because it manages to speed things up by about a trillion-fold over the plain solution rate.
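For a sense of what a trillion-fold rate enhancement means in energy terms, here's the back-of-the-envelope version, assuming garden-variety transition-state theory and room temperature (my own illustrative numbers, not anything from the paper):

import math

# Rate ratio = exp(ddG / RT), so the barrier lowering implied by a given
# rate enhancement is ddG = RT * ln(enhancement).
R = 1.987e-3              # gas constant, kcal/(mol*K)
T = 298.0                 # temperature, K
rate_enhancement = 1e12   # ~trillion-fold, as quoted for this enzyme

ddG = R * T * math.log(rate_enhancement)
print(f"Equivalent barrier lowering: {ddG:.1f} kcal/mol")   # ~16.4 kcal/mol

That's the size of the catalytic effect that has to be accounted for, one way or another, by the active-site interactions (and perhaps the protein motions) under discussion.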

Just how it does that has been a mystery. There's no sign of any major enzyme conformational change as the substrate binds, for one thing. But using stopped-flow techniques with IR spectroscopy, as well as ultrafast time-resolved IR, there seem to be structural changes going on at the time scale of the actual reaction. It's hard to see this stuff, but it appears to be there - so what is it? Isotopic labeling experiments seem to say that these IR peaks represent a change in the protein, not the B12 cofactor. (There are plenty of cofactor changes going on, too, and teasing these new peaks out of all that signal was no small feat).

So this could be evidence for protein motion being important right at the enzymatic reaction itself. But I should point out that not everyone's buying that. Nature Chemistry had two back-to-back articles earlier this year, the first advocating this idea, and the second shooting it down. The case against this proposal - which would modify transition-state theory as it's usually understood - is that there can be a number of conformations with different reactivities, some of which take advantage of quantum-mechanical tunneling effects, but all of which perform "traditional" transition-state chemistry, each in their own way. Invoking fast motions (on the femtosecond time scale) to explain things is, in this view, a layer of complexity too far.

I realize that all this can sound pretty esoteric - it does even to full-time chemists, and if you're not a chemist, you probably stopped reading quite a while ago. But we really do need to figure out exactly how enzymes do their jobs, because we'd like to be able to do the same thing. Enzymatic reactions are, in most cases, so vastly superior to our own ways of doing chemistry that learning to make them to order would revolutionize things in several fields at once. We know this chemistry can be done - we see it happen, and the fact that we're alive and walking around depends on it - but we can't do it ourselves. Yet.

Comments (23) + TrackBacks (0) | Category: Biological News | Chemical News

August 2, 2012

Public Domain Databases in Medicinal Chemistry

Email This Entry

Posted by Derek

Here's a useful overview of the public-domain medicinal chemistry databases out there. It covers the big three databases in detail:

BindingDB (quantitative binding data to protein targets).

ChEMBL (wide range of med-chem data, overlaps a bit with PubChem).

PubChem (data from NIH Roadmap screen and many others).

And these others:
Binding MOAD (literature-annotated PDB data).

ChemSpider (26 million compounds from hundreds of data sources).

DrugBank (data on 6700 known drugs).

GRAC and IUPHAR-DB (data on GPCRs, ion channels, and nuclear receptors, and ligands for all of these).

PDBbind (more annotated PDB data).

PDSP Ki (data from UNC's psychoactive drug screening program).

SuperTarget (target-compound interaction database).

Therapeutic Targets Database (database of known and possible drug targets).

ZINC (21 million commercially available compounds, organized by class, downloadable in various formats).

There is the irony of a detailed article on public-domain databases appearing behind the ACS paywall, but the literature is full of such moments as that. . .
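Several of these resources (PubChem and ChEMBL, for instance) also offer programmatic access. As a quick illustration, here's a minimal sketch of pulling a few computed properties from PubChem's PUG REST service - the endpoint layout and JSON structure are from my memory of the documentation, so treat this as a sketch rather than gospel:

import requests

def pubchem_properties(name):
    """Look up a compound by name and return a few computed properties."""
    url = (
        "https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/"
        f"{name}/property/MolecularFormula,MolecularWeight,CanonicalSMILES/JSON"
    )
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    # PUG REST wraps results in a PropertyTable; take the first (best) match.
    return resp.json()["PropertyTable"]["Properties"][0]

if __name__ == "__main__":
    print(pubchem_properties("imatinib"))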

Comments (11) + TrackBacks (0) | Category: Biological News | Chemical News | Drug Assays

April 10, 2012

Biomarker Caution

Email This Entry

Posted by Derek

After that news of the Stanford professor who underwent just about every "omics" test known, I wrote that I didn't expect this sort of full-body monitoring to become routine in my own lifetime:

It's a safe bet, though, that as this sort of thing is repeated, that we'll find all sorts of unsuspected connections. Some of these connections, I should add, will turn out to be spurious nonsense, noise and artifacts, but we won't know which are which until a lot of people have been studied for a long time. By "lot" I really mean "many, many thousands" - think of how many people we need to establish significance in a clinical trial for something subtle. Now, what if you're looking at a thousand subtle things all at once? The statistics on this stuff will eat you (and your budget) alive.

I can now adduce some evidence for that point of view. The Institute of Medicine has warned that a lot of biomarker work is spurious. The recent Duke University scandal has brought these problems into higher relief, but there are plenty of less egregious (and not even deliberate) examples that are still a problem:

The request for the IOM report stemmed in part from a series of events at Duke University in which researchers claimed that their genomics-based tests were reliable predictors of which chemotherapy would be most effective for specific cancer patients. Failure by many parties to detect or act on problems with key data and computational methods underlying the tests led to the inappropriate enrollment of patients in clinical trials, premature launch of companies, and retraction of dozens of research papers. Five years after they were first made public, the tests were acknowledged to be invalid.

Lack of clearly defined development and evaluation processes has caused several problems, noted the committee that wrote the report. Omics-based tests involve large data sets and complex algorithms, and investigators do not routinely make their data and computational procedures accessible to others who could independently verify them. The regulatory steps that investigators and research institutions should follow may be ignored or misunderstood. As a result, flaws and missteps can go unchecked.

So (Duke aside) the problem isn't fraud, so much as it is wishful thinking. And that's what statistical analysis is supposed to keep in check, but we've got to make sure that that's really happening. But to keep everyone honest, we also have to keep everything out there where multiple sets of eyes can check things over, and this isn't always happening:

Investigators should be required to make the data, computer codes, and computational procedures used to develop their tests publicly accessible for independent review and ensure that their data and steps are presented comprehensibly, the report says. Agencies and companies that fund omics research should require this disclosure and support the cost of independently managed databases to hold the information. Journals also should require researchers to disclose their data and codes at the time of a paper's submission. The computational procedures of candidate tests should be recorded and "locked down" before the start of analytical validation studies designed to assess their accuracy, the report adds.

This is (and has been for some years) a potentially huge field of medical research, with huge implications. But it hasn't been moving forward as quickly as everyone thought it would. We have to resist the temptation to speed things up by cutting corners, consciously or unconsciously.
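On the "locked down" point from the IOM quote above, one low-tech way to do it is simply to checksum the analysis code and data before validation starts, so that any later change is detectable. A minimal sketch, with hypothetical file names:

import hashlib
import json
from pathlib import Path

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical artifact names; the point is just to freeze a manifest of
# what the "locked down" pipeline looked like on the day validation began.
artifacts = ["classifier.py", "training_set.csv"]
manifest = {name: sha256_of(name) for name in artifacts if Path(name).exists()}
Path("lockdown_manifest.json").write_text(json.dumps(manifest, indent=2))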

Comments (14) + TrackBacks (0) | Category: Biological News | Clinical Trials

April 6, 2012

Europe Wants Some of That Molecular Library Action

Email This Entry

Posted by Derek

We've talked about the NIH's Molecular Libraries Initiative here a few times, mostly in the context of whether it reached its goals, and what might happen now that it looks as if it might go away completely. Doesn't that make this item a little surprising?

Almost a decade ago, the US National Institutes of Health kicked off its Molecular Libraries Initiative to provide academic researchers with access to the high-throughput screening tools needed to identify new therapeutic compounds. Europe now seems keen on catching up.

Last month, the Innovative Medicines Initiative (IMI), a €2 billion ($2.6 billion) Brussels-based partnership between the European Commission and the European Federation of Pharmaceutical Industries and Associations (EFPIA), invited proposals to build a molecular screening facility for drug discovery in Europe that will combine the inquisitiveness of academic scientists with industry know-how. The IMI's call for tenders says the facility will counter “fragmentation” between these sectors.

I can definitely see the worth in that part of the initiative. Done properly, Screening Is Good. But they'll have to work carefully to make sure that their compound collection is worth screening, and to format the assays so that the results are worth looking at. Both those processes (library generation and high-throughput screening) are susceptible (are they ever) to "garbage in, garbage out" factors, and it's easy to kid yourself into thinking that you're doing something worthwhile just because you're staying so busy and you have so many compounds.

There's another part of this announcement that worries me a bit, though. Try this on for size:

Major pharmaceutical companies have more experience with high-throughput screening than do most academic institutes. Yet companies often limit tests of their closely held candidate chemicals to a fraction of potential disease targets. By pooling chemical libraries and screening against a more diverse set of targets—and identifying more molecular interactions—both academics and pharmaceutical companies stand to gain, says Hugh Laverty, an IMI project manager.

Well, sure, as I said above, Screening Is Good, when it's done right, and we do indeed stand to learn things we didn't know before. But is it really true that we in the industry only look at a "fraction of potential disease targets"? This sounds like someone who's keen to go after a lot of the tough ones; the protein-protein interactions, protein-nucleic acid interactions, and even further afield. Actually, I'd encourage these people to go for it - but with eyes open and brain engaged. The reason that we don't screen against such things as often is that hit rates tend to be very, very low, and even those are full of false positives and noise. In fact, for many of these things, "very, very low" is not distinguishable from "zero". Of course, in theory you just need one good hit, which is why I'm still encouraging people to take a crack. But you should do so knowing the odds, and be ready to give your results some serious scrutiny. If you think that there must be thousands of great things out there that the drug companies are just too lazy (or blinded by the thought of quick profits elsewhere) to pursue, you're not thinking this through well enough.
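To put some illustrative numbers behind that "full of false positives and noise" warning, here's a toy Bayes'-rule calculation - the hit rate and assay error rates are assumptions of mine, not data from any actual screen:

# With a very low true hit rate, even a small false-positive rate means
# that most "hits" from the primary screen are not real.
true_hit_rate = 1e-4          # 0.01% of the library genuinely active
sensitivity   = 0.80          # chance a real active scores as a hit
false_pos     = 0.01          # chance an inactive scores as a hit anyway

library_size = 1_000_000
true_actives = library_size * true_hit_rate
true_hits    = true_actives * sensitivity
false_hits   = (library_size - true_actives) * false_pos

ppv = true_hits / (true_hits + false_hits)
print(f"Hits: {true_hits + false_hits:.0f}, of which real: {ppv:.1%}")
# With these assumptions, well under 1% of the "hits" are genuine actives.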

You might say that what these efforts are looking for are tool compounds, not drug candidates. And I think that's fine; tool compounds are valuable. But if you read that news link in the first paragraph, you'll see that they're already talking about how to manage milestone payments and the like. That makes me think that someone, at any rate, is imagining finding valuable drug candidates from this effort. The problem with that is that if you're screening all the thousands of drug targets that the companies are ignoring, you're by definition working with targets that aren't very validated. So any hits that you do find (and there may not be many, as said above) will still be against something that has a lot of work yet to be done on it. It's a bit early to be wondering how to distribute the cash rewards.

And if you're screening against validated targets, the set of those that don't have any good chemical matter against them already is smaller (and it's smaller for a reason). It's not that there aren't any, though: I'd nominate PTP1B as a well-defined enzymatic target that's just waiting for a good inhibitor to come along to see if it performs as well in humans as it does in, say, knockout mice. (It's both a metabolic target and a potential cancer target as well). Various compounds have been advanced over the years, but it's safe to say that they've been (for the most part) quite ugly and not as selective as they could have been. People are still whacking away at the target.

So any insight into decent-looking selective phosphatase inhibitors would be most welcome. And most unlikely, damn it all, but all great drug ideas are most unlikely. The people putting this initiative together will have a lot to balance.

Comments (20) + TrackBacks (0) | Category: Academia (vs. Industry) | Biological News | Drug Assays

March 30, 2012

Ciliobrevins: Digging Into Cell Biology

Email This Entry

Posted by Derek

Back in 2009 I wrote about a paper that found a number of small (and ugly) molecules which affected the Hedgehog signaling pathway. At the time, I asked if anyone had done any selectivity studies with them, or looked for any SAR around them, because they didn't look very promising to me.

I'm glad to report that there's a follow-up from the same lab, and it's a good one. They've spent the last two years chasing these things down, and it appears that one series (the HPI-4 compound in that first link, which is open-access) really does have a specific molecular target (dynein).

There are a number of good experiments in the paper showing how they narrowed that down, and the whole thing is a good example of just how granular cellular biology can get: this pathway out of thousands, that particular part of the process, which turns out to be this protein because of the way it interacts in defined ways with a dozen others, and moreover, this particular binding site on that one protein. It's worth reading to see how they chased all this down, but I'll take you right to the ending and say that it's the ATP-binding site on dynein that looks like the target.

Collectively, these results indicate that ciliobrevins are specific, reversible inhibitors of disparate cytoplasmic dynein-dependent processes. Ciliobrevins do not perturb cellular mechanisms that are independent of dynein function, including actin cytoskeleton organization and the mitogen-activated protein kinase and phosphoinositol-3-kinase signalling pathways. . .The compounds do not broadly target members of the AAA+ ATPase family either, as they have no effect on p97-dependent degradation of endoplasmic-reticulum-associated proteins or Mcm2–7-mediated DNA unwinding. . .Our studies establish ciliobrevins as the first small molecules known specifically to inhibit cytoplasmic dynein in vitro and in live cells.

So congratulations to everyone involved, at Stanford, Rockefeller, and Northwestern. These ciliobrevins are perfect examples of tool compounds. This is how academic science is supposed to work, and now we can perhaps learn things about dynein that no one has been able to learn yet, and that will be knowledge that no one can take away once we've learned it.

Comments (15) + TrackBacks (0) | Category: Biological News

March 23, 2012

The Ultimate in Personalized Medicine

Email This Entry

Posted by Derek

I wanted to mention this news, since it's really the most wildly advanced form of "personalized medicine" that the world has yet seen. As detailed in this paper, Stanford professor Michael Snyder spent months taking multiple, powerful, wide-ranging looks at his own biochemistry: genomic sequences, metabolite levels, microRNAs, gene transcripts, pretty much the whole expensive high-tech kitchen sink. No one's ever done this to one person over an extended period - heck, until the last few years, no one's ever been able to do this - so Snyder and the team were interested to see what might come up. A number of odd things did:

Snyder had a cold at the first blood draw, which allowed the researchers to track how a rhinovirus infection alters the human body in perhaps more detail than ever before. The initial sequencing of his genome had also showed that he had an increased risk for type 2 diabetes, but he initially paid that little heed because he did not know anyone in his family who had had the disease and he himself was not overweight. Still he and his team decided to closely monitor biomarkers associated with the diabetes, including insulin and glucose pathways. The scientist later became infected with respiratory syncytial virus, and his group saw that a sharp rise in glucose levels followed almost immediately. "We weren't expecting that," Snyder says. "I went to get a very fancy glucose metabolism test at Stanford and the woman looked at me and said, 'There's no way you have diabetes.' I said, 'I know that's true, but my genome says something funny here.' "

A physician later diagnosed Snyder with type 2 diabetes, leading him to change his diet and increase his exercise. It took 6 months for his glucose levels to return to normal. "My interpretation of this, which is not unreasonable, is that my genome has me predisposed to diabetes and the viral infection triggered it," says Snyder, who acknowledges that no known link currently exists between type 2 diabetes and infection.

There may well be a link, but it may well also only be in Michael Snyder. Or perhaps in him and the (x) per cent of the population that share certain particular metabolic and genomic alignments with him. Since this is an N of 1 experiment if ever there was one, we really have no idea. It's a safe bet, though, that as this sort of thing is repeated, that we'll find all sorts of unsuspected connections. Some of these connections, I should add, will turn out to be spurious nonsense, noise and artifacts, but we won't know which are which until a lot of people have been studied for a long time. By "lot" I really mean "many, many thousands" - think of how many people we need to establish significance in a clinical trial for something subtle. Now, what if you're looking at a thousand subtle things all at once? The statistics on this stuff will eat you (and your budget) alive.
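To make the "statistics will eat you alive" point concrete, here's the simplest version of the arithmetic, with assumed numbers rather than anything from the Snyder study:

# Monitor a thousand independent markers in one person and some of them
# will "move significantly" by chance alone.
n_markers = 1000
alpha = 0.05

expected_false_positives = n_markers * alpha
print(f"Expected spurious 'significant' markers: {expected_false_positives:.0f}")  # ~50

# A Bonferroni-style correction shrinks the per-marker threshold accordingly,
# at the cost of statistical power for any single marker.
bonferroni_alpha = alpha / n_markers
print(f"Per-marker threshold after correction: {bonferroni_alpha:.5f}")  # 0.00005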

But all of these technologies are getting cheaper. It's not around the corner, but I can imagine a day when people have continuous blood monitoring of this sort, a constant metabolic/genomic watchdog application that lets you know how things are going in there. Keep in mind, though, that I have a very lively imagination. I don't expect this (for better or worse) in my own lifetime. The very first explorers are just hacking their way into thickets of biochemistry larger and more tangled than the Amazon jungle - it's going to be a while before the shuttle vans start running.

Comments (26) + TrackBacks (0) | Category: Biological News

January 27, 2012

Roche Goes Hostile for Illumina

Email This Entry

Posted by Derek

Roche is not only a big drug company, it's a big diagnostics company. And that's what's driving their unsolicited bid for Illumina, a gene-sequencing company from San Diego. Illumina has been one of the big players in the "How quickly and cheaply can we sequence a person's entire genome" game, and apparently Roche believes that there's something in it for them.

But as that Reuters link above shows, a lot of other people don't agree, and would rather partner than acquire (Chris Viehbacher, CEO of Sanofi, seems to have been waiting for the opportunity to unburden himself of thoughts to that effect). He may well be right. Sequencing has been a can-you-top-this field for some time, and I don't think that the process is finished yet. What if you buy a technology that's superseded before it has the time to pay off? What if the market for sequencing doesn't get as large, as quickly, as you're hoping? Those were Illumina's worries, and now they're going to be Roche's; you can't buy the promise without buying those, too.

Matthew Herper at Forbes is having very similar thoughts, and points out that Roche has done this sort of thing before. For now, we'll see what Illumina might be able to come up with to avoid being Roched.

Comments (12) + TrackBacks (0) | Category: Biological News | Business and Markets

January 18, 2012

Fun With Epigenetics

Email This Entry

Posted by Derek

If you've been looking around the literature over the last couple of years, you'll have seen an awful lot of excitement about epigenetic mechanisms. (Here's a whole book on that very subject, for the hard core). Just do a Google search with "epigenetic" and "drug discovery" in it, any combination you like, and then stand back. Articles, reviews, conferences, vendors, journals, startups - it's all there.

Epigenetics refers to the various paths - and there are a bunch of them - to modify gene expression downstream of just the plain ol' DNA sequence. A lot of these are, as you'd imagine, involved in the way that the DNA itself is wound (and unwound) for expression. So you see enzymes that add and remove various switches to the outside of various histone proteins. You have histone acetyltransferases (HATs) and histone deacetylases (HDACs), methyltransferases and demethylases, and so on. Then there are bromodomains (the binding sites for those acetylated histones) and several other mechanisms, all of which add up to plenty o' drug targets.

Or do they? There are HDAC compounds out there in oncology, to be sure, and oncology is where a lot of these other mechanisms are being looked at most intensively. You've got a good chance of finding aberrant protein expression levels in cancer cells, you have a lot of unmet medical need, a lot of potential different patient populations, and a greater tolerance for side effects. All of that argues for cancer as a proving ground, although it's certainly not the last word. But in any therapeutic area, people are going to have to wrestle with a lot of other issues.

Just looking over the literature can make you both enthusiastic and wary. There's an awful lot of regulatory machinery in this area, and it's for sure that it isn't there for jollies. (You'd imagine that selection pressure would operate pretty ruthlessly at the level of gene expression). And there are, of course, an awful lot of different genes whose expression has to be regulated, at different levels, in different cell types, at different phases of their development, and in response to different environmental signals. We don't understand a whole heck of a lot of the details.

So I think that there will be epigenetic drugs coming out of this burst of effort, but I don't think that they're going to exactly be the most rationally designed things we've ever seen. That's fine - we'll take drug candidates where we can get them. But as for when we're actually going to understand all these gene regulation pathways, well. . .

Comments (15) + TrackBacks (0) | Category: Biological News | Cancer | Drug Development

January 17, 2012

Warp Drive Bio: Best Name or Worst?

Email This Entry

Posted by Derek

There are small drug firms and there are small drug firms - if you know what I mean. Which category is Warp Drive Bio going to fall into?

If you've never heard of them - and that name is rather memorable - then don't worry, they're new. Its founders are big names on the industry/academic drug discovery border: Greg Verdine, Jim Wells, and George Church. Here's the rundown:

Warp Drive Bio is driving the reemergence of natural products in the era of genomics to create breakthrough treatments that make an important difference in the lives of patients. Built upon the belief that nature is the world's most powerful medicinal chemist, Warp Drive Bio is deploying a battery of state-of-the-art technologies to access powerful drugs that are now hidden within microbes. Key to the Warp Drive Bio approach is the company's proprietary "genomic search engine" and customized search queries that enable hidden natural products to be revealed on the basis of their distinctive genomic signature.

Interestingly, they launched with a deal with Sanofi already in place. I've been hearing about cryptic natural products for a while, and while I haven't seen anything that's knocked me over, it's not prima facie a crazy idea. But it is going to be a tricky one to get to work, I'd think. After all, if these natural products were so active and useful, might they not have a bit higher profile, genomically and metabolically? I'm willing to be convinced otherwise by some data; perhaps we'll see some as the Sanofi collaboration goes on. Anyone with more knowledge in this area, please add it in the comments - maybe we can all learn something.
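For those wondering what "searching a genome for hidden natural products" might look like at the most basic level, here's a toy sketch - the motif and file name are made up for illustration, and this bears no relation to Warp Drive Bio's actual (proprietary) methods:

import re

# Hypothetical stand-in pattern, not a validated biosynthetic signature.
MOTIF = re.compile(r"GP.{4}ACSS")

def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, seq = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(seq)
                header, seq = line[1:], []
            else:
                seq.append(line)
        if header is not None:
            yield header, "".join(seq)

# Hypothetical file of predicted proteins from a microbial genome.
for name, protein in read_fasta("predicted_proteins.faa"):
    if MOTIF.search(protein):
        print(f"Candidate biosynthetic protein: {name}")

The real versions of this idea involve far more sophisticated models than a single sequence motif, of course, but the shape of the search is the same: look for the genomic fingerprints of biosynthetic machinery, then figure out what it makes.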

One other question: with Verdine founding another high-profile company, does this say something about how his last one, Aileron, is doing in the "stapled peptide" business? Or not?

Comments (20) + TrackBacks (0) | Category: Biological News | Business and Markets

January 4, 2012

Osiris And Their Stem Cells

Email This Entry

Posted by Derek

The topic of whether stem-cell therapies are overhyped - OK, let me show my cards, the topic of just how overhyped they are - last came up around here in November, when Geron announced that they were getting out of the business. And yesterday brought a good example of why people tend to hold their noses and fan away the fumes whenever a company press-releases something in this area.

I'm talking about Osiris Therapeutics, who have been working for some time on a possible stem cell therapy (called Prochymal) for Type I diabetes. That's certainly not a crazy idea, although it is an ambitious one - after all, you get Type I when your insulin-producing cells die off, so why not replace them? Mind you, we're not quite sure why your insulin-producing cells die off in the first place, so there's room to wonder if the newly grown replacements, if they could be induced to exist, might not suffer a similar fate. But that's medical research, and we're not going to figure these things out without trying them.

This latest work, though, does not look fit to advance anyone's understanding of diabetes or of stem cells, although it might help advance one's understanding of human nature and of the less attractive parts of the stock market. Osiris, you see, issued a press release yesterday (courtesy of FierceBiotech) on the one-year interim analysis of their trial. The short form: they have nothing so far. The release goes on for a bit about how well-tolerated the stem-cell therapy is, but unfortunately, one reason for that clean profile might be that nothing is happening at all. No disease markers for diabetes have improved, although they say that there is a trend towards fewer hypoglycemic events. (I think it's irresponsible to talk about "trends" of this sort in a press release, but such a policy would leave many companies without much to talk about at all).

It's only when you look at Osiris and their history that you really start to understand what's going on. You see, this isn't Prochymal's first spin around the track. As Adam Feuerstein has been chronicling, the company has tried this stem cell preparation against a number of other conditions, and it's basically shown the same thing every time: no adverse effects, and no real positive ones, either. Graft-versus-host disease, cardiac events, cartilage repair, Crohn's disease - nothing happens, except press releases. You'd never know anything about this history if you just came across the latest one, though. The company's web site isn't a lot of help, either: you'd think that Prochymal is advancing on all fronts, when (from what I can see) it's not going much of anywhere.

So if you're looking for a reason to hold on to your wallet when the phrase "stem cell therapy" comes up, look no further. The thing is, some stem cell ideas are eventually going to work - you'd think - and when they do, they're going to be very interesting indeed. You'd think. But are any of the real successes going to come out of fishing expeditions like this? You don't want your clinical research program to be so hard to distinguish from a dose-and-hope-and-sell-some-stock strategy - do you?

Comments (13) + TrackBacks (0) | Category: Biological News | Business and Markets

November 17, 2011

Brain Cells: Different From Each Other, But Similar to Something Else?

Email This Entry

Posted by Derek

Just how different is one brain cell from another? I mean, every cell in our body has the same genome, so the differences in type (various neurons, glial cells) must be due to expression during development. And the differences between individual members of a class must be all due to local environment and growth - right?

Maybe not. I wasn't aware of this myself, but there's a growing body of evidence that suggests that neurons might actually differ more at the genomic level than you'd imagine. A lot of this work has come from the McConnell lab at the Salk Institute, where they've been showing that mouse precursor cells can develop into neurons with various chromosomal changes along the way. And rather than treating this as a defect (or an experimental artifact), McConnell has hypothesized that it's a normal feature that helps to form the huge neuronal diversity seen in brain tissue.

His latest work used induced pluripotent cells transformed into neurons. Taking these cells from two different people, he found that the resulting neurons had highly variable sequences, with all sorts of insertions, deletions, and transpositions. (The precursor cells had some, too, but different ones, suggesting that the neural cell changes happened along the way). And this recent paper suggests that neurons have an unusual number of transposons in their DNA, which fits right in with McConnell's results.

The implication is that human brains are mosaics of mosaics, at the cell and sequence levels. And that immediately makes you wonder if these processes are involved in disease states (hard to imagine how they wouldn't be). The problem is, it's not too easy to get ahold of well-matched and well-controlled human brain tissue samples to check these ideas. But that's the obvious next step - take several similar-looking neurons and sequence them all the way. Obvious, but very difficult: single-cell sequencing is not so easy, to start with, and how exactly do you grab those single neurons out of the tangle of nerve tissue to sequence them? Someone's going to do this, but it's going to be a chore. (Note: McConnell's group was able to do the pluripotent-cell-derived stuff a bit more easily, since those come out clonal and give you more to work with).

Now, the idea that neurons are taking advantage of chromosomal instability to this degree is a little unnerving. That's because when you think of chromosomal instability, you think of cancer cells. (See also the link in that last paragraph. It's interesting, as an aside, to see that those last two links are to posts from this blog in 2002 - next year will mark ten years of this stuff! And I also enjoy seeing my remark from back then about "With headlines like this, I can't think why I'm not pulling in thousands of hits a day", since these days I'm running close to 20K/day as it is.)

So, on some level, are our brains akin to tumor tissue? You really wonder why brain cancer isn't more common than it is, if these theories are correct. There may well be ways to get "controlled chromosomal instability", though, as opposed to the wild-and-woolly kind, but even the controlled kind is a bit scary. And all this makes me think of a passage from an old science fiction story by James Blish, "This Earth of Hours". The Earthmen have encountered a bizarre civilization that seems to involve many of the star systems toward the interior of the galaxy, and a captured human has informed them that these aliens apparently have no brains per se:

"No brains," the man from the Assam Dragon insisted. "Just lots of ganglia. I gather that's the way all of the races of the Central Empire are organized, regardless of other physical differences. That's what they mean when they say we're all sick - hadn't you realized that?"

"No," 12-Upjohn said in slowly dawning horror. "You had better spell it out."

"Why, they say that's why we get cancer. They say that the brain is the ultimate source of all tumors, and is itself a tumor. They call it 'hostile symbiosis.' "

"Malignant?"

"In the long run. Races that develop them kill themselves off. Something to do with solar radiation; animals on planets of Population II stars develop them, Population I planets don't."

The things you pick up reading 1950s science fiction. Blish, by the way, was an odd sort. He had a biology degree, and a liking for James Joyce, Oswald Spengler, and Richard Strauss. All of these things worked their ways into his stories, which were often much better and more complex than they strictly needed to be. Here's a PDF of "This Earth of Hours", if you're interested - it's not a perfect transcription, though; you'll have to take my word for it that the original has no grammatical errors. It's a good illustration of Blish's style - what appears at first to be a pulpy space-war story turns out to have a lot of odd background dropped into it, along with speculations like the above. And for someone who didn't always write a lot of descriptive prose, preferring to let philosophical points drive his plots, I find Blish's stories strangely vivid, particularly the relatively actionless ones like "Beep" or "Common Time". He's pretty thoroughly out of print these days, but you can find the paperbacks used, and in many cases as e-books. Now if you're looking for someone who always lets philosophical points drive his stories, then you'll be wanting some Borges. (As it happens, I've had occasion to discuss that particular translation with an Argentine co-worker. But this is not a literary blog, not for the most part, so I'll stop there!)

Comments (28) + TrackBacks (0) | Category: Biological News | Book Recommendations | Cancer | The Central Nervous System

November 16, 2011

Proteins in a Living Cell

Email This Entry

Posted by Derek

It's messy inside a cell. The closer we look, the more seems to be going on. And now there's a closer look than ever at the state of proteins inside a common human cell line, and it does nothing but increase your appreciation for the whole process.

The authors have run one of these experiments that (in the days before automated mass spec techniques and huge computational power) would have been written off as a proposal from an unbalanced mind. They took cultured human U2OS cells, lysed them to release their contents, and digested those with trypsin. This gave, naturally, an extremely complex mass of smaller peptides, but these, the lot of them, were fractionated out and run through the mass spec machines, with use of ion-trapping techniques and mass-label spiking to get quantification. The whole process is reminiscent of solving a huge jigsaw puzzle by first running it through a food processor. The techniques for dealing with such massive piles of mass spec/protein sequence data, though, have improved to the point where this sort of experiment can now be carried out, although that's not to say that it isn't still a ferocious amount of work.

What did they find? These cells are expressing on the order of at least ten thousand different proteins (well above the numbers found in previous attempts at such quantification). Even with that, the authors have surely undercounted membrane-bound proteins, which weren't as available to their experimental technique, but they believe that they've gotten a pretty good read of the soluble parts. And these proteins turn out to be expressed over a huge dynamic range, from a few dozen copies (or less) per cell up to tens of millions of copies.
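For a sense of scale, here's the back-of-the-envelope version of that dynamic range (a minimal Python sketch; the copy numbers are round figures pulled from the ranges above, not the paper's actual values):

```python
import math

# Rough copy-number range quoted above: "a few dozen" copies per cell
# up to "tens of millions" of copies per cell. Both are round figures.
low_abundance = 50            # copies per cell, low end (assumed round number)
high_abundance = 20_000_000   # copies per cell, high end (assumed round number)

dynamic_range = high_abundance / low_abundance
print(f"Dynamic range: {dynamic_range:,.0f}-fold")
print(f"Orders of magnitude: {math.log10(dynamic_range):.1f}")
# Roughly 400,000-fold, or between five and six orders of magnitude -
# a very wide window for any single analytical method to cover.
```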

As you'd figure, those copy numbers represent very different sorts of proteins. It appears, broadly, that signaling and regulatory functions are carried out by a host of low-expression proteins, while the basic machinery of the cell is made of hugely well-populated classes. Transcription, translation, metabolism, and transport are where most of the effort seems to be going - in fact, the most abundant proteins are there to deal with the synthesis and processing of proteins. There's a lot of overhead, in other words - it's like a rocket, in which a good part of the fuel has to be there in order to lift the fuel.

So that means that most of our favored drug targets are actually of quite low abundance - kinases, proteases, hydrolases of all sorts, receptors (most likely), and so on. We like to aim for regulatory choke points and bottlenecks, and these are just not common proteins - they don't need to be. In general (and this also makes sense) the proteins that have a large number of homologs and family members tend to show low copy numbers per variant. Ribosomal machinery, on the other hand - boy, is there a lot of ribosomal stuff. But unless it's bacterial ribosomes, that's not exactly a productive drug target, is it?

It's hard to picture what it's like inside a cell, and these numbers just make it look even stranger. What's strangest of all, perhaps, is that we can get small-molecule drugs to work under these conditions. . .

Comments (22) + TrackBacks (0) | Category: Analytical Chemistry | Biological News

November 15, 2011

Geron, Stem-Cell Pioneers, Drop Stem Cells

Email This Entry

Posted by Derek

Are stem cells overhyped? That topic has come up around here several times. But there have been headlines and more headlines, and breathless reports of advances, some of which might be working out, and many of which are never heard from again. (This review, just out today, attempts to separate reality from hype).

Today brings a bit of disturbing news. Geron, a company long associated with stem cell research, the company that started the first US trial of embryonic stem cell therapy, has announced that they're exiting the field. Now, a lot of this is sheer finances. They have a couple of oncology drugs in the clinic, and they need all the cash they have to try to get them through. But still, you wonder - if their stem cell trial had been going really well, wouldn't the company have gotten a lot more favorable publicity and opportunities for financing by announcing that? As things stand, we don't know anything about the results at all; Geron is looking for someone to take over the whole program.

As it happens, there's another stem-cell report today, from a study in the Lancet of work that was just presented at the AHA. This one involves injecting heart attack patients with cultured doses of their own cardiac stem cells, and it does seem to have helped. It's a good result, done in a well-controlled study, and could lead to something very useful. But we still have to see if the gains continue, what the side effects might be, whether there's any advantage to doing this over other cell-based therapies, and so on. That'll take a while, although this looks to be on the right track. But the headlines, as usual, are way out in front of what's really happening.

No, I continue to think that stem cells are a very worthy subject of research. But years, quite a few years, are going to be needed before treatments using them can become a reality. Oh, and billions of dollars, too - let's not forget that. . .

Comments (12) + TrackBacks (0) | Category: Biological News | Business and Markets | Cancer | Cardiovascular Disease | Press Coverage

October 18, 2011

Cyclodextrin's Day in the Sun

Email This Entry

Posted by Derek

Under the "Who'da thought?" category, put this news about cyclodextrin. For those outside the field, that's a ring of glucose molecules, strung end to end like a necklace. (Three-dimensionally, it's a lot more like a thick-cut onion ring - see that link for a picture). The most common form, beta-cyclodextrin, has seven glucoses. That structure gives it some interesting properties - the polar hydroxy groups are mostly around the edges and outside surface, while the inside is more friendly to less water-soluble molecules. It's a longtime additive in drug formulations for just that purpose - there are many, many examples known of molecules that fit into the middle of a cyclodextrin in aqueous solution.

But as this story at the Wall Street Journal shows, it's not inert. A group studying possible therapies for Niemann-Pick C disease (a defect in cholesterol storage and handling) was going about this the usual way - one group of animals was getting the proposed therapy, while the other was just getting the drug vehicle. But this time, the vehicle group showed equivalent improvement to the drug-treatment group.

Now, most of the time that happens when neither of them worked; that'll give you equivalence all right. But in this case, both groups showed real improvement. Further study showed that the cyclodextrin derivative used in the dosing vehicle was the active agent. And that's doubly surprising, since one of the big effects seen was on cholesterol accumulation in the central neurons of the rodents. It's hard to imagine that a molecule as big (and as polar-surfaced) as cyclodextrin could cross into the brain, but it's also hard to see how you could have these effects without that happening. It's still an open question - see that PLoS One paper link for a series of hypotheses. One way or another, this will provide a lot of leads and new understanding in this field:

Although the means by which CD exerts its beneficial effects in NPC disease are not understood, the outcome of CD treatment is clearly remarkable. It leads to delay in onset of clinical signs, a significant increase in lifespan, a reduction in cholesterol and ganglioside accumulation in neurons, reduced neurodegeneration, and normalization of markers for both autophagy and neuro-inflammation. Understanding the mechanism of action for CD will not only provide key insights into the cholesterol and GSL dysregulatory events in NPC disease and related disorders, but may also lead to a better understanding of homeostatic regulation of these molecules within normal neurons. Furthermore, elucidating the role of CD in amelioration of NPC disease will likely assist in development of new therapeutic options for this and other fatal lysosomal disorders.

Meanwhile, the key role of cholesterol in the envelope of HIV has led to the use of cyclodextrin as a possible antiretroviral. This looks like a very fortunate intersection of a wide-ranging, important biomolecule (cholesterol) with a widely studied, well-tolerated complexing agent for it (cyclodextrin). It'll be fun to watch how all this plays out. . .

Comments (16) + TrackBacks (0) | Category: Biological News | Infectious Diseases | The Central Nervous System | Toxicology

September 22, 2011

The Latest Sirtuin Controversy

Email This Entry

Posted by Derek

As promised, today we have a look at a possible bombshell in longevity research and sirtuins. Again. This field is going to make a pretty interesting book at some point, but it's one that I'd wait a while to start writing, because the dust is hanging around pretty thickly.

Some background: in 1999, the Guarente lab at MIT reported that Sir2 was a longevity gene in yeast. In 2001, they extended these results to C. elegans nematodes, lengthening their lifespan by 15 to 50% through overexpression of the gene. And in 2004, Stephen Helfand's lab at Brown reported similar results in Drosophila fruit flies. Since then, the sirtuin field has been the subject of more publications than anyone would care to count. The sirtuins are involved, it turns out, in regulating histone acetylation, which regulates gene expression, so there aren't many possible effects they might have that you can rule out. Like many longevity-associated pathways, they seem to be tied up somehow with energy homeostasis and response to nutrients, and one of the main hypotheses has been that they're somehow involved in the (by now irrefutable) life-extending effects of caloric restriction.

As an aside, you may have noticed that almost every news story about something that extends life gets tied to caloric restriction somehow. There are two good reasons for that - one is, as stated, that a lot of longevity seems - reasonably enough - to be linked to metabolism, and the other one is that caloric restriction is by far the most solid of all the longevity effects that can be shown in animal models.

I'd say that the whole sirtuin story has split into two huge arguments: (1) arguments about the sirtuin genes and enzymes themselves, and (2) arguments about the compounds used to investigate them, starting with resveratrol and going through the various sirtuin activators reported by Sirtris, both before and after their (costly) acquisition by GlaxoSmithKline. That division gets a bit blurry, since it's often those compounds that have been used to try to unravel the roles of the sirtuin enzymes, but there are ways to separate the controversies.

I've followed the twists and turns of argument #2, and it has had plenty of those. It's not safe to summarize, but if I had to, I'd say that the closest thing to a current consensus is that (1) resveratrol is a completely unsuitable molecule as an example of a clean sirtuin activator, (2) the earlier literature on sirtuin activation assays is now superseded, because of some fundamental problems with the assay techniques, and (3) agreement has not been reached on what compounds are suitable sirtuin activators, and what their effects are in vivo. It's a mess, in other words.

But what about argument #1, the more fundamental one about what sirtuins are in the first place? That's what these latest results address, and boy, do they ever not clear things up. There has been persistent talk in the field that the original model-organism life extension effects were difficult to reproduce, and now two groups (those of David Gems and Linda Partridge) at University College, London (whose labs I most likely walked past last week) have re-examined these. They find, on close inspection, that they cannot reproduce them. The effects in the LG100 strain of C. elegans appear to be due to another background mutation in the dyf family, which is also known to have effects on lifespan. Another mutant strain, NL3909, shows a similar problem: its lifespan decreases on outcrossing, although the Sir2 levels remain high. A third long-lived strain, DR1786, has a duplicated section of its genome that includes Sir2, but knocking that down with RNA interference has no effect on its lifespan. Taken together, the authors say, the correlation of Sir2 with lifespan in nematodes appears to be an artifact.

How about the fruit flies? This latest paper reproduces the lifespan effects, but finds that they seem to be due to the expression system that was used to increase dSir2 levels. When the same system is used to overexpress other genes, lifespan is also increased. They then used another expression vector to crank up the fly Sir2 by over 300%, but those flies did not show an extension in lifespan, even under a range of different feeding conditions. They also went the other way, examining mutants with their sirtuin expression knocked down by a deletion in the gene. Those flies show no different response to caloric restriction, indicating that Sir2 isn't part of that effect, either - in direct contrast to the effects reported in 2004 by Helfand.

It's important to keep in mind that these aren't the first results of this kind. Others had reported problems with sirtuin effects on lifespan (or sirtuin ties to caloric restriction effects) in yeast, and as mentioned, this had been the stuff of talk in the field for some time. But now it's all out on the table, a direct challenge.

So how are the original authors taking it? Guarente, who to his credit has been right out in the spotlight throughout the whole story, has a new paper of his own, published alongside the UCL results. They partially agree, saying that there does indeed appear to be an unlinked mutation in the LG100 strain that's affecting lifespan. But they disagree that sirtuin overexpression has no effect. Instead of their earlier figure of 15 to 50%, they're now claiming a 10 to 14% extension - not as dramatic, for sure, but the key part for the argument is that it's not zero.

And as for the fruit flies, Helfand at Brown is pointing out that in 2009, his group reported a totally different expression system to increase dSir2, which also showed longevity effects (see their Figure 2 in that link). This work, he's noting, is not cited in the new UCL paper, and from his tone in interviews, he's not too happy about that. That's leading to coverage from the "scientific feud!" angle - and it's not that I think that's inaccurate, but it's not the most important part of the story. (Another story with follow-up quotes is here).

So what are the most important parts? I'd nominate these:

1. Are sirtuins involved in lifespan extension, or not? And by that, I mean not only in model organisms, but are they subject to pharmacological intervention in the field of human aging?

2. What are the other effects of sirtuins, outside of aging? Diabetes, cancer, several other important areas touch on this whole metabolic regulation question: what are the effects of sirtuins in these?

3. What is the state of our suite of tools to answer these questions? Resveratrol may or may not do interesting things in humans or other organisms, but it's not a suitable tool compound to unravel the basic mechanisms. Do we have such compounds, from the reported Sirtris chemical matter or from other sources? And on the biology side, how useful are the reported overexpression and deletion strains of the various model organisms, and how confident are we about drawing conclusions from their behavior?

4. Getting more specific to drug discovery, are sirtuin regulator compounds drug candidates or not? Given the disarray in the basic biology, they're at the very least quite speculative. GlaxoSmithKline is the company most immediately concerned with this question, since they spent over $700 million to buy Sirtris, and have been spending money in the clinic ever since evaluating their more advanced chemical matter. And that brings up the last question. . .

5. What does GSK think of that deal now? Did they jump into an area of speculative biology too quickly? Or did they make a bold deal that put them out ahead in an important field?

I do not, of course, have answers to any of these. But the fact that we're still asking these questions ten years after the sirtuin story started tells you that this is both an important and interesting area, and a tricky one to understand.

Comments (32) + TrackBacks (0) | Category: Aging and Lifespan | Biological News

September 21, 2011

Big Sirtuin News

Email This Entry

Posted by Derek

This will be the subject of a longer post tomorrow, but I wanted to alert people to some breaking news in the sirtuin/longevity saga. It now appears that the original 2001 report of longevity effects of Sir2 in the C. elegans model, which was the starting gun of the whole story, is largely incorrect. That would help to explain the conflicting results in this area, wouldn't it? Topics for discussion in tomorrow's post will include, but not be limited to: what else do sirtuins do? Are those results reproducible? What can we now expect to come out of pharma research in the field? And what does GSK now think about its investment in Sirtris?

Comments (14) + TrackBacks (0) | Category: Aging and Lifespan | Biological News

August 2, 2011

Merck, RNAi, Alnylam, And So On

Email This Entry

Posted by Derek

And while we're on the topic of Merck, I note that they're closing their RNAi facility in Mission Bay, the former Sirna. That was a pretty big deal when it took place, wasn't it? The piece linked to in that earlier post also talks about the investment that Merck was making in the very facility that they're now closing down, but if I got paid every time that sort of thing happened in this industry, I wouldn't have to work.

This isn't going to help the Bay Area biotech/pharma environment, nor the atmosphere around RNA interference as a drug platform. Merck says that they're not getting out of the field, and that they've integrated the technology for use in their drug discovery efforts. But they paid a billion dollars for Sirna, which is not the sort of up-front price you generally see for add-on technologies that can help you discover other drugs. At the time, it looked like Merck was hoping directly for some new therapeutics, and we still don't know when (or if) those will emerge.

There's another player in the field right next door to me here in Cambridge, Alnylam. Not long after I last wrote about the state of the RNAi area, they actually invited me over to talk about what they're up to - a bit unusual, since I'm not just a blogger, but a scientist working at another company, which is a combo that's caused some confusion more than once. But they gave me a nice overview of what they're working on, and it was clear that they understand the risks involved and are doing whatever they can to get something that works out the door. They have several approaches to the drug-delivery problem that besets the RNA world, and are taking good shots in several different disease areas.

But they (and the other RNAi shops) need more money to go on, which in this environment means partnering with a larger company. Merck, Roche, and Novartis have (in various ways) shown that they feel as if they have pretty much all the RNAi that they need for now, so it'll have to be someone else. Maybe AZ or Lilly, the companies with the biggest patent-expiration problems?

Comments (16) + TrackBacks (0) | Category: Biological News | Business and Markets

July 27, 2011

Bait And Switch For Type B GPCRs

Email This Entry

Posted by Derek

You often hear about how many marketed drugs target G-protein coupled receptors (GPCRs). And it's true, but not all GPCRs are created equal. There's a family of them (the Class B receptors) that has a number of important drug targets in it, but getting small-molecule drugs to hit them has been a real chore. There's glucagon, CRF, GHRH, GLP-1, PACAP and plenty more, but they all recognize good-sized peptides as ligands, not friendly little small molecules. Drug-sized things have been found that affect a few of these receptors, but it has not been easy, and pretty much all of them have been antagonists. (That makes sense, because it's almost always easier to block some binding event than to hit the switch in just the right way to turn a receptor on).

That peptide-to-receptor binding also means that we don't know nearly as much about what's going on in the receptor as we do for the small-molecule GPCRs, either (and there are still plenty of mysteries around even those). The generally accepted model is a two-step process: there's an extra section of the receptor protein that sticks out and recognizes the C-terminal end of the peptide ligand first. Once that's bound, the N-terminal part of the peptide ligand binds into the seven-transmembrane-domain part of the receptor. The first part of that process is much better worked out than the second.

Now a German team has reported an interesting approach that might help to clear some things up. They synthesized a C-terminal peptide that was expected to bind to the extracellular domain of the CRF receptor, and made it with an azide coming off its N-terminal end. (Many of you will now have guessed where this is going!) Then they took a weak peptide agonist piece and decorated its end with an acetylene. Doing the triazole-forming "click" reaction between the two gave a nanomolar agonist for the receptor, revving up the activity of the second peptide by at least 10,000x.

This confirms the general feeling that the middle parts of the peptide ligands in this class are just spacers to hold the two business ends together in the right places. But it's a lot easier to run the "click" reaction than it is to make long peptides, so you can mix and match pieces more quickly. That's what this group did next, settling on a 12-amino-acid sequence as their starting point for the agonist peptide and running variations on it.
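A minimal sketch of what that mix-and-match enumeration looks like in code - the starting sequence and substitution alphabet below are made-up placeholders, not the actual CRF agonist sequence or the (partly non-natural) modifications used in the paper:

```python
# Enumerate every single-point variant of a peptide agonist sequence.
# START and SUBSTITUTIONS are hypothetical stand-ins for illustration only.
START = "AAAAAAAAAAAA"                   # stand-in 12-mer agonist sequence
SUBSTITUTIONS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 proteinogenic amino acids

def single_point_variants(seq, alphabet):
    """Yield every sequence differing from seq at exactly one position."""
    for i, original in enumerate(seq):
        for aa in alphabet:
            if aa != original:
                yield seq[:i] + aa + seq[i + 1:]

variants = list(single_point_variants(START, SUBSTITUTIONS))
print(len(variants))   # 12 positions x 19 alternatives = 228 candidate peptides
```

Each candidate then only needs a single "click" coupling onto the carrier fragment, rather than a full-length peptide synthesis - which is the whole point of the approach.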

Out of 89 successful couplings to the carrier protein, 70 of the new combinations lowered the activity (or got rid of it completely). 15 were about the same as the original sequence, but 11 of them were actually more potent. Combining those single-point changes into "greatest-hit" sequences led to some really potent compounds, down to picomolar levels. And by that time, they found that they could get rid of the tethered carrier protein part, ending up with a nanomolar agonist peptide that only does the GPCR-binding part and bypasses the extracellular domain completely. (Interestingly, this one had five non-natural amino acid substitutions).

Now that's a surprise. Part of the generally accepted model for binding had the receptor changing shape during that first extracellular binding event, but in the case of these new peptides, that's clearly not happening. These things are acting more like the small-molecule GPCR agonists and just going directly into the receptor to do their thing. The authors suggest that this "carrier-conjugate" approach should speed up screening of new ligands for the other receptors in this category, and should be adaptable to molecules that aren't peptides at all. That would be quite interesting indeed: leave the carrier on until you have enough potency to get rid of it.

Comments (3) + TrackBacks (0) | Category: Biological News | Chemical News | Drug Assays

July 6, 2011

A First Step Toward A New Form of Life

Email This Entry

Posted by Derek

There's been a real advance in the field of engineered "unnatural life", but it hasn't produced one-hundredth the headlines that the arsenic bacteria story did. This work is a lot more solid, although it's hard to summarize in a snappy way.

Everyone knows about the four bases of DNA (A, T, C, G). What this team has done is force bacteria to use a substitute for the T, thymine - 5-chlorouracil, which has a chlorine atom where thymine's methyl group is. From a med-chem perspective, that's a good switch. The two groups are about the same size, but they're different enough that the resulting compounds can have varying properties. And thymine is a good candidate for a swap, since it's not used in RNA, thus limiting the number of systems that have to change to accommodate the new base. (RNA, of course, uses uracil instead, the unsubstituted parent compound of both thymine and the 5-chloro derivative used here).

Over the years, chlorouracil has been studied in DNA for just that reason, and it's been found to make the proper base-pair hydrogen bonds, among other things. So incorporating it into living bacteria looks like an experiment in just the right spot - different enough to be a real challenge, but similar enough to be (probably) doable. People have taken a crack at similar experiments before, with mixed success. In the 1970s, mutant hamster cells were grown in the presence of the bromo analog, and apparently generated DNA which was strongly enriched with that unnatural base. But there were a number of other variables that complicated the experiment, and molecular biology techniques were in their infancy at the time. Then in 1992, a group tried replacing the thymine in E. coli with uracil, with multiple mutations that shut down the T-handling pathways. They got up to about 90% uracil in the DNA, but this stopped the bacteria from growing - they just seemed to be hanging on under those T-deprived conditions, but couldn't do much else. (In general, withholding thymine from bacterial cultures and other cells is a good way to kill them off).

This time, things were done in a more controlled manner. The feat was accomplished by good old evolutionary selection pressure, using an ingenious automated system. An E. coli strain was produced with several mutations in its thymine pathways to allow it to survive under near-thymine-starvation conditions. These bacteria were then grown in a chamber where their population density was being constantly measured (by turbidity). Every ten minutes a nutrient pulse went in: if the population density was above a set limit, the cells were given a fixed amount of chlorouracil solution to use. If the population had fallen below a set level, the cells received a dose of thymine-containing solution to keep them alive. A key feature of the device was the use of two culture chambers, with the bacteria being periodically swapped from one to the other (at which point the first chamber undergoes sterilization with 5M sodium hydroxide!) That's to keep biofilm formation from giving the bacteria an escape route from the selection pressure, which is apparently just what they'll do, given the chance. One "culture machine" was set for a generation time of about two hours, and another for a 4-hour cycle (by cutting in half the nutrient amounts). This cycle selected for mutations that allowed the use of chlorouracil throughout the bacteria's biochemistry.
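Here's a minimal sketch of that feedback loop, just to make the selection logic explicit - the turbidity cutoff, swap schedule, and sensor readout below are invented placeholders, not the real device's parameters:

```python
import random

# All numbers here are invented for illustration; the real thresholds, dose
# volumes, and chamber-swap schedule are described in the paper.
TURBIDITY_THRESHOLD = 0.4   # hypothetical population-density cutoff
PULSES_PER_SWAP = 144       # hypothetical swap interval (one pulse every 10 min)

def read_turbidity():
    """Stand-in for the optical-density sensor in the culture chamber."""
    return random.uniform(0.1, 0.8)

def run_selection(n_pulses):
    """Simulate the every-ten-minutes dosing decision described above."""
    chlorouracil_pulses = thymine_pulses = 0
    for pulse in range(1, n_pulses + 1):
        if read_turbidity() >= TURBIDITY_THRESHOLD:
            chlorouracil_pulses += 1   # dense culture earns the challenge nutrient
        else:
            thymine_pulses += 1        # struggling culture gets the thymine rescue
        if pulse % PULSES_PER_SWAP == 0:
            # Swap the culture to the clean chamber; the old one is sterilized
            # (5M NaOH in the real apparatus) to close off the biofilm escape route.
            pass
    # The readout that climbed over the ~140-day run: the fraction of pulses
    # for which the culture earned chlorouracil rather than thymine.
    return chlorouracil_pulses / n_pulses

print(run_selection(n_pulses=1000))
```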

And that's what happened - the proportion of the chlorouracil solution that went in went up with time. The bacterial population had plenty of dramatic rises and dips, but the trend was clear. After 23 days, the experimenters cranked up the pressure - now the "rescue" solution was a lower concentration of thymine, mixed 1:1 with chlorouracil, and the other solution was a lower concentration of chlorouracil only. The proportion of the latter solution used still kept going up under these conditions as well. Both groups (the 2-hour cycle and the 4-hour cycle ones) were consuming only chlorouracil solution by the time the experiment went past 140 days or so.

Analysis of their DNA showed that it had incorporated about 90% chlorouracil in the place of thymine. The group identified a previously unknown pathway (U54 tRNA methyltransferase) that was bringing thymine back into the pathway, and disrupting this gene knocked the thymine content down to just above detection level (1.5%). Mass spec analysis of the DNA from these strains clearly showed the chlorouracil present in DNA fractions.

The resulting bacteria from each group, it turned out, could still grow on thymine, albeit with a lag time in their culture. If they were switched to thymine media and grown there, though, they could immediately make the transition back to growing on chlorouracil, which shows that their ability to do so was now coded in their genomes. (The re-thymined bacteria, by the way, could be assayed by mass spec as well for the disappearance of their chlorouracil).

These re-thymined bacteria were sequenced (since the chlorouracil mutants wouldn't have matched up too well with sequencing technology!) and they showed over 1500 base substitutions. Interestingly, there were twice as many in the A-T to G-C direction as the opposite, which suggests that chlorouracil tends to mispair a bit with guanine. The four-hour-cycle strain had not only these sorts of base swaps, but also some whole chromosome rearrangements. As the authors put it, and boy are they right, "It would have been impossible to predict the genetic alterations underlying these adaptations from current biological knowledge. . ."

These bacteria are already way over to the side of all the life on Earth. But the next step would be to produce bacteria that have to live on chlorouracil and just ignore thymine. If that can be realized, the resulting organisms will be the first representatives of a new biology - no cellular life form has ever been discovered that completely switches out one of the DNA bases. These sorts of experiments open the door to organisms with expanded genetic codes, new and unnatural proteins and enzymes, and who knows what else besides. And they'll be essentially firewalled from all other living creatures.

Postscript: and yes, it's occurred to me as well that this sort of system would be a good way to evolve arsenate-using bacteria, if they do really exist. The problem (as it is with the current work) is getting truly phosphate-free media. But if you had such, and ran the experiment, I'd suggest isolating small samples along the way and starting them fresh in new apparatus, in order to keep the culture from living off the phosphate from previous generations. Trying to get rid of one organic molecule is hard enough; trying to clear out a whole element is a much harder proposition.

Comments (17) + TrackBacks (0) | Category: Biological News | Chemical Biology | Life As We (Don't) Know It

July 1, 2011

The Histamine Code, You Say?

Email This Entry

Posted by Derek

I've been meaning to link to John LaMattina's blog for some time now. He's a former R&D guy (and author of Drug Truths: Dispelling the Myths About Pharma R & D, which I reviewed here for Nature Chemistry), and he knows what he's talking about when it comes to med-chem and drug development.

Here he takes on the recent "Scientists Crack the Histamine Code" headlines that you may have seen this week. Do we have room, he wonders, for a third-generation antihistamine, or not?

Comments (17) + TrackBacks (0) | Category: Biological News | Drug Industry History

June 14, 2011

The Uses of Disorder

Email This Entry

Posted by Derek

We spend a lot of time thinking about proteins in this business - after all, they're the targets for almost every known drug. One of the puzzling things about them, though, is the question of just how orderly they are.

That's "order" as in "ordered structure". If you're used to seeing proteins in X-ray crystal structures, they appear quite orderly indeed, but that's an illusion. (In fact, to me, that's one of the biggest things to look out for when dealing with X-ray information - the need to remember that you're not seeing something that's built out of solid resin or metal bars. Those nice graphics are, even when they're right, just snapshots of something that can move around). Even in many X-ray studies, you can see some loops of proteins that just don't return useful electron density. They're "disordered". Sometimes, in the pictures, a structure will be put up in that region as a placeholder (and the crystallographers will tell you not to put much stock in it), and sometimes there will just be a blank region or some dotted lines. Either way, "disordered" means what it says - the protein in that region adopts and/or switches between a number of different conformations, with no clear preference for any of them.

And that makes sense for a big, floppy, loop that makes an excursion out from the ordered core of a protein. But how far can disorder extend? We have a tendency to think that the intrinsic state of a protein is a more or less orderly one, which we just refer to (if we do at all) as "folded". (You can divide that into two further classes - "properly folded" when the protein does what we want it to do, and "improperly folded" when it doesn't. There are a number of less polite synonyms for that latter state as well). Are all proteins so well folded, though?

It's becoming increasingly clear that the answer is no, they aren't. Here's a new paper in JACS that examines the crystallographic data and concludes that proteins cover the entire range, from almost completely ordered to almost completely disordered. When you consider that the more disordered ones are surely less likely to be represented in that data set, you have to conclude that there are probably a lot of them out there. Even the ones with relatively orderly regions can turn out to have important functions for their disordered parts. The study of these "intrinsically disordered proteins" (IDPs) has really taken off in the last few years. (Here's another paper on the subject that's also just out in JACS, to prove the point!)

So what's a disordered protein for? (Here's one of the key papers in the field that addresses this question). One such would have a number of conformations available to it inside a pretty small energy window, and this might permit it to have different functions, binding to rather different partners without having to do much energetically costly refolding. They could be useful for broad selectivity/low affinity situations and have faster on (or off) rates with their binding partners. (That second new JACS paper linked to above suggests that it's selection pressure on those rates that has given us so many disordered proteins in the first place). Interestingly, several of these IDPs have shown up with links to human disease, so we're going to have to deal with them somehow. Here's a recent attempt to come to grips with what structure they have; it's not an easy task. And it's not like figuring all this stuff out even for the ordered proteins is all that easy, either, but this is the world as we find it.
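As a reminder of why those on- and off-rates translate directly into affinity, here's the standard two-state binding relation, Kd = koff/kon, run with made-up rate constants (not measurements on any actual IDP):

```python
# Simple 1:1 binding: Kd = koff / kon. The rate constants below are invented
# round numbers for illustration, not data on any real protein.
kon = 1e6               # association rate constant, M^-1 s^-1
koff_ordered = 1e-2     # s^-1: slow dissociation, tight binder
koff_idp_like = 1e1     # s^-1: fast dissociation, as suggested for some IDP complexes

for label, koff in [("ordered-like", koff_ordered), ("IDP-like", koff_idp_like)]:
    kd = koff / kon
    print(f"{label}: Kd = {kd:.0e} M, residence time ~ {1 / koff:.0e} s")
# Same kon, 1000-fold faster koff: 1000-fold weaker affinity and a much shorter
# residence time - fast-exchanging, broad/low-affinity interactions.
```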

Comments (16) + TrackBacks (0) | Category: Biological News

June 8, 2011

Garage Biotech: The Book

Email This Entry

Posted by Derek

I haven't read it yet, but there's a new book on the whole "garage biotech" field, which I've blogged about here and here. Biopunk looks to be a survey of the whole movement; I hope to go through it shortly.

I'm still on the "let a thousand flowers bloom" side of this issue, myself, but it's certainly not without its worries. But this is the world we've got - where these things are possible, and getting more possible all the time - and we're going to have to make the best of it. Trying to stuff it back down will, I think, only increase the proportion of harmful lunatics who try it.

By the way, since that's an Amazon link, I should note that I do get a cut from them whenever someone buys through a link on the site, and not just from the particular item ordered. I've never had a tip jar on the site, and I never plan to, but the Amazon affiliate program does provide some useful book-buying money around here at no cost to the readership.

Comments (10) + TrackBacks (0) | Category: Biological News | Book Recommendations

June 2, 2011

Biomarkers, Revisited. Unfortunately.

Email This Entry

Posted by Derek

Your genome - destiny, right? That's what some of us thought - every disease was going to have one or more associated genes, those genes would code for new drug targets, and we'd all have a great time picking them off one by one. It didn't work out that way, of course, but there are still all these papers out there in the literature, linking Gene A with the chances of getting Disease B. So how much are those worth?

While we're at it, everyone also wanted (and still wants) biomarkers of all kinds. Not just genes, but protein and metabolite levels in the blood or other tissue to predict disease risk or progression. I can't begin to estimate how much work has been going into biomarker research in this business - a good biomarker can clarify your clinical trial design, regulatory picture, and eventual marketing enormously - if you can find one. Plenty of them have been reported in the literature. How much are those worth, too?

Not a whole heck of a lot, honestly, according to a new paper in JAMA by John Ioannidis and Orestes Panagiotou. They looked at the disease marker highlights from the last 20 years or so, the 35 papers that had been cited at least 400 times. How good do the biomarkers in those papers have to be to be useful? An increase of 35% in the chance of getting the targeted condition? Sorry - only one-fifth of them rise to that level, when you go back and see how they've held up in the real world.

Subsequent studies, in fact, very rarely show anything as strong as the original results - 29 of the 35 biomarkers show a less robust association after meta-analysis of all the follow-up reports, as compared to what was claimed at first. And those later studies tend to be larger and better powered - in only 3 cases was the highly cited study the largest one that had been run, and only twice did the largest study show a higher effect measure than the original highly cited one. Only 15 of the 35 biomarkers were nominally statistically significant in the largest studies of them.
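For reference, here's where a number like "a 35% increase in risk" comes from - a minimal sketch with invented counts, computing a relative risk from a simple 2x2 table:

```python
# Hypothetical cohort: biomarker-positive vs biomarker-negative subjects,
# and whether they developed the disease. All counts are invented.
exposed_cases, exposed_noncases = 60, 440       # biomarker-positive group
unexposed_cases, unexposed_noncases = 40, 460   # biomarker-negative group

risk_exposed = exposed_cases / (exposed_cases + exposed_noncases)          # 0.12
risk_unexposed = unexposed_cases / (unexposed_cases + unexposed_noncases)  # 0.08
relative_risk = risk_exposed / risk_unexposed

print(f"Relative risk: {relative_risk:.2f}")                  # 1.50
print("Clears a 35%-increase bar:", relative_risk >= 1.35)    # True
# A relative risk of 1.5 means a 50% higher chance of the condition in the
# biomarker-positive group - comfortably past the kind of threshold above.
```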

Ioannidis has been hitting the literature's unreliability for some time now, and I think that it's hard to dispute his points. The first thought that any scientist should have when an interesting result is reported is "Great! Wonder if it's true?" There are a lot of reasons for things not to be (see that earlier post for a discussion of them), and we need to be aware of how often they operate.

Comments (25) + TrackBacks (0) | Category: Biological News | The Scientific Literature

May 19, 2011

Get Yer Telomeres Measured, Step Right Up

Email This Entry

Posted by Derek

Hmm. Remember when the Nobel Prize came out for telomere research? Now there are competing companies offering telomere-length screening, and one of them (Telome Sciences) was partly founded by Elizabeth Blackburn, one of the Nobel awardees. That isn't going down well with. . .one of the other awardees:

But among the critics of such tests is Carol Greider, a molecular biologist at Johns Hopkins University, who was a co-winner of the Nobel Prize with Dr. Blackburn.

Dr. Greider acknowledged that solid evidence showed that the 1 percent of people with the shortest telomeres were at an increased risk of certain diseases, particularly bone marrow failure and pulmonary fibrosis, a fatal scarring of the lungs. But outside of that 1 percent, she said, “The science really isn’t there to tell us what the consequences are of your telomere length.”

Dr. Greider said that there was great variability in telomere length. “A given telomere length can be from a 20-year-old or a 70-year-old,” she said. “You could send me a DNA sample and I couldn’t tell you how old that person is.”

Greider is also a former student of Blackburn's, which makes things even messier. I can see why she's uneasy. Looking over the news accounts, there's an awful lot of noise and hype - all kinds of stuff about "Test Predicts How Long You'll Live!" and so on. The hype has been building for some time, though, and I'll bet that we're nowhere near the crest. As for me, I'm not rushing out to check my telomeres until I know what that means (and until I know if there's anything I can do about it).

Comments (19) + TrackBacks (0) | Category: Biological News | Business and Markets

February 18, 2011

Smell The Vibrations? Fruit Flies Might Be Able To. . .

Email This Entry

Posted by Derek

A few years ago, I wrote here about Luca Turin and his theory that our sense of smell is at least partly responsive to vibrational spectra. (Turin himself was the subject of this book, author of this one (which is quite interesting and entertaining for organic chemists), and co-author of Perfumes: The A-Z Guide, perhaps the first attempt to comprehensively review and categorize perfumes).

Turin's theory is not meant to overturn the usual theories of smell (which depend on shape and polarity as the molecules bind into olfactory receptors), but to extend them. He believes that there are anomalies in scent that can't be explained by the current model, and has been proposing experiments to test them. Now he and his collaborators have a new paper in PNAS with some very interesting data.

They're checking to see if Drosophila (fruit flies) can tell the difference between deuterated and non-deuterated compounds. The idea here is that the size and shape of the two forms are identical; there should be no way to smell the difference. But it appears that the flies can: they discriminate, in varying ways, between deuterated forms of acetophenone, octanol, and benzaldehyde. Deuterated acetophenone, for example, turns out to be aversive to fruit flies (whereas the normal form is attractive), and the aversive quality goes up as you move from d-3 to d-5 and d-8 forms of the isotopically labeled compound.

The flies could also be trained, by a conditioned avoidance protocol, to discriminate between all of the isotopic pairs. Most interestingly, if trained to avoid a particular normal or deutero form of one compound, they responded similarly when presented with a novel pair, which seems to indicate that they pick up a "deuterated" scent effect that overlays several chemical classes.

There's more to the paper; definitely read it if you're interested in this sort of thing. Reactions to it have been all over the place, from people who sound convinced to people who aren't buying any of it. If Turin is right, though, it may indeed be true that we're smelling the differences between C-H stretching vibrations, possibly through an electron tunneling mechanism, which is a rather weird thought. But then, it's a weird world.
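For the curious, the size of the vibrational difference involved falls straight out of the harmonic-oscillator isotope shift - a minimal sketch that assumes the C-H force constant is unchanged on deuteration (the usual approximation):

```python
import math

# Harmonic oscillator: stretching frequency scales as 1/sqrt(reduced mass),
# with the bond force constant taken as identical for C-H and C-D.
m_C, m_H, m_D = 12.0, 1.0, 2.0   # atomic masses in amu (rounded)

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

nu_CH = 2900.0   # cm^-1, a typical C-H stretch (round figure)
nu_CD = nu_CH * math.sqrt(reduced_mass(m_C, m_H) / reduced_mass(m_C, m_D))
print(f"Predicted C-D stretch: ~{nu_CD:.0f} cm^-1")
# Comes out around 2100 cm^-1 - a shift of several hundred wavenumbers, which
# is the sort of difference a vibration-sensing receptor would have to detect.
```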

Comments (35) + TrackBacks (0) | Category: Biological News | Chemical News

January 14, 2011

Fishing Around for Biomarkers

Email This Entry

Posted by Derek

Everyone in this industry wants to have good, predictive biomarkers for human diseases. We've wanted that for a very long time, though, and in most cases, we're still waiting. [For those outside the field, a biomarker is some sort of easy-to-run test for a factor that correlates with the course of the real disease. Viral titer for an infection or cholesterol levels for atherosclerosis are two examples. The hope is to find a simple blood test that will give you advance news of how a slow-progressing disease is responding to treatment]. Sometimes the problem is that we have markers, but that no one can quite agree on how relevant they are (and for which patients), and other times we have nothing to work with at all.

A patient's antibodies might, in theory, be a good place to look for markers in many disease states, but that's some haystack to go rooting around in. Any given person is estimated, very roughly, to produce maybe ten billion different antibodies. And in many cases, we have no idea of what ones to look for since we don't really know what abnormal molecules they've been raised to recognize. (It's a chicken-and-egg problem: if we knew what those antigens were, we'd probably just look for them directly with reagents of our own).

So if you don't have a good starting point, what to do? One approach has been to go straight into tissue samples from patients and look for unusual molecules, in the belief that these might well be associated with the disease. (You can then do just as above to try to use them as a biomarker - look for the molecules themselves, if they're easy to assay, or look for circulating antibodies that bind to them). This direct route has only become feasible in recent years, with advanced mass spec and data handling techniques, but it's still a pretty formidable challenge. (Here's a review of the field).

A new paper in Cell takes another approach. The authors figured that antigen molecules would probably look like rather weirdly modified peptides, so they generated a library of several thousand weirdo "peptoids". (These are basically poly-glycines with anomalous N-substituents). They put these together as a microarray and used them as probes against serum from animal models of disease.

Rather surprisingly, the idea seems to have worked. In a rodent model of multiple sclerosis (the EAE, or experimental autoimmune encephalitis model), they found several peptoids that pulled down antibodies from the model animals and not from the controls. A time course showed that these antibodies came on at just the speed expected for an immune response in the animal model. As a control, another set of mice were immunized with a different (non-disease-causing) protein, and a different set of peptoids pulled down those resulting antibodies, with little or no cross-reactivity.

Finally, the authors turned to a real-world case: Alzheimer's disease. They tried out their array on serum from six Alzheimer's patients, versus six age-matched controls, and six Parkinson's patients as another control, and found three peptoids that seem to have about a 3-fold window for antibodies in the AD group. Further experimentation (passing serum repeatedly over these peptoids before assaying) showed that two of them seem to react with the same antibody, while one of them has a completely different partner. These experiments also showed that they are indeed pulling down the same antibodies in each of the patients, which is an important thing to make sure of.

Using those three peptoids by themselves, they tried a further 16 AD patient samples, 16 negative controls, and 6 samples from patients with lupus, all blinded, and did pretty well: the lupus patients were clearly distinguished as weak binders, the AD patients all showed strong binding, and 14 out of the 16 control patients showed weak binding. Two of the controls, though, showed raised levels of antibody detection, up to the lowest of the AD patients.
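Just to make the kind of call being made here concrete, here's a minimal sketch of a simple threshold classifier on array signals - the signal values and the cutoff are invented for illustration and are not the paper's data:

```python
# Hypothetical normalized binding signals for the three informative peptoids,
# one entry per blinded serum sample. All numbers are invented.
samples = {
    "sample_01": [3.1, 2.8, 3.4],   # all probes high -> called a strong binder
    "sample_02": [1.0, 1.1, 0.9],   # all probes low -> called a weak binder
    "sample_03": [2.2, 1.9, 2.5],   # one probe misses the cutoff -> weak binder
}
CUTOFF = 2.0   # hypothetical "strong binding" threshold

def call_sample(signals, cutoff=CUTOFF):
    """Call a sample strong only if all three peptoid signals clear the cutoff."""
    return "strong binder (AD-like)" if all(s >= cutoff for s in signals) else "weak binder"

for name, signals in samples.items():
    print(name, "->", call_sample(signals))
```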

So while this isn't good enough for a diagnostic yet, for a blind shot into the wild blue immunological yonder, it's pretty impressive. Although. . .there's always the possibility that this is already good enough, and that the test picked up presymptomatic Alzheimer's in those two control patients. I suppose we're going to have to wait to find that out. As you'd imagine, the authors are extending these studies to wider patient populations, trying to make the assay easier to run, and trying to find out what native antigens these antibodies might be recognizing. I wish them luck, and I hope that it turns out that the technique can be applied to other diseases as well. This should keep a lot of people usefully occupied for quite some time!

Comments (18) + TrackBacks (0) | Category: Analytical Chemistry | Biological News | The Central Nervous System

December 7, 2010

Arsenic Bacteria: Does The Evidence Hold Up?

Email This Entry

Posted by Derek

It's time to revisit the arsenic-using bacteria paper. I wrote about it on the day it came out, mainly to try to correct a lot of the poorly done reporting in the general press. These bacteria weren't another form of life, they weren't from another planet, they weren't (as found) living on arsenic (and they weren't "eating" it), and so on.

Now it's time to dig into the technical details, because it looks like the arguing over this work is coming down to analytical chemistry. Not everyone is buying the conclusion that these bacteria have incorporated arsenate into their biomolecules, with the most focused objections being found here, from Rosie Redfield at UBC.

So, what's the problem? Let's look at the actual claims of the paper and see how strong the evidence is for each of them:

Claim 1: the bacteria (GFAJ-1) grow on an arsenate-containing medium with no added phosphate. The authors say that after several transfers into higher-arsenic media, they're maintaining the bacteria in the presence of 40 mM arsenate, 10 mM glucose, and no added phosphate. But that last phrase is not quite correct, since they also say that there's about 3 micromolar phosphate present from impurities in the other salts.

So is that enough? Well, the main evidence (as shown in their figure 1) is that if you move the bacteria to a medium that doesn't have the added arsenate (but still has the background level of phosphate), they don't grow. With added arsenate they do, but slowly. And with added phosphate, as mentioned before, they grow more robustly. It looks to me as if the biggest variable here might be the amount of phosphate that could be contaminating the arsenate source that they use. But their table S1 shows that the low level of phosphate in the media is the same both ways, whether they've added arsenate or not. Unless something's gone wrong with that measurement, that's not the answer.

One way or another, the fact that these bacteria seem to use arsenate to grow seems hard to escape. And they're not the kind of weirdo chemotroph to be able to run off arsenate/arsenite redox chemistry (if indeed there are any bacteria that use that system at all). (The paper does get one look at arsenic oxidation states in the near-edge X-ray data, and they don't see anything that corresponds to the plus-3 species). That would appear to leave the idea that they're using arsenate per se as an ingredient in their biochemistry - otherwise, why would they start to grow in its presence? (The Redfield link above takes up this question, wondering if the bacteria are scavenging phosphorus from dead neighbor cells, and points out that the cells may actually still be growing slowly without either added arsenic or phosphate).

Claim 2: the bacteria take up arsenate from the growth medium. To check this, the authors measured intracellular arsenic by ICP mass spec. This was done several ways, and I'll look at the total dry weight values first.

Those arsenic levels were rather variable, but they always ran high. Looking at the supplementary data, there are some large differences between two batches of bacteria, one from June and one from July. And there's also some variability in the assay itself: the June cells show between 0.114 and 0.624% arsenic (as the assay is repeated), while the July cells show much lower (and tighter) values, between 0.009% and 0.011%. Meanwhile, the corresponding amount of phosphorus is 0.023% to 0.036% in June (As/P of 5 up to 27), and 0.011% to 0.014% in July (As/P of 0.76 to 0.97).

The paper averages these two batches of cells, but it certainly looks like the June bunch were much more robust in their uptake of arsenate. You might look at the July set and think, man, those didn't work out at all, since they actually have more phosphorus than arsenic in them. But the background state should be way lower than that. When you look at the corresponding no-arsenate cell batches, the differences are dramatic in both June and July. The June no-arsenate batch showed at least ten times as much phosphorus and about a thousandth as much arsenic as its arsenate-grown counterpart, and the July run of no-arsenate cells showed (compared to the July arsenate bunch) 60 times as much phosphorus and a tenth the arsenic. The As/P ratio for both sets hovers around 0.001 to 0.002.
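
(If you want to check those As/P figures yourself, they fall right out of the dry-weight percentages. I'm treating the reported ratios as simple quotients of those percentages, which is what the numbers above suggest; the exact replicate pairing is the paper's, so the bounds below are cruder than the published 5-27 and 0.76-0.97 ranges.)

    # Crude As/P bounds from the dry-weight percentages quoted above
    june = {"As": (0.114, 0.624), "P": (0.023, 0.036)}   # percent of dry weight
    july = {"As": (0.009, 0.011), "P": (0.011, 0.014)}

    def ratio_bounds(batch):
        as_lo, as_hi = batch["As"]
        p_lo, p_hi = batch["P"]
        return as_lo / p_hi, as_hi / p_lo   # lowest-over-highest, highest-over-lowest

    for name, batch in [("June", june), ("July", july)]:
        lo, hi = ratio_bounds(batch)
        print(f"{name} As/P roughly {lo:.1f} to {hi:.1f}")   # ~3-27 and ~0.6-1.0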

I'll still bet the authors were very disappointed that the July batch didn't come back as dramatic as the June ones. (And I have to give them some credit for including both batches in the paper, and not trying just to make it through with the June-bugs). One big question is what happens when you run the forced-arsenate-growth experiment more times: are the June cells typical, or some sort of weird anomaly? And do they still have both groups growing even now?

One of the points the authors make is that the arsenate-grown cells don't have enough phosphorus to survive. Rosie Redfield doesn't buy this one, and I'll defer to her expertise as a microbiologist. I'd like to hear some more views on this, because it's a potentially important point. There are several possibilities - from most exciting to least:

1. The bacteria prefer phosphorus, but are able to take up and incorporate substantial amounts of arsenate, to the point that they can live even below the level of phosphorus needed to normally keep them alive. They probably still need a certain core amount of phosphate, though. This is the position of the paper's authors.

2. The bacteria prefer phosphorus, but are able to take up and incorporate substantial amounts of arsenate. But they still have an amount of phosphate present that would keep them going, so the arsenate must be in "non-critical" biochemical spots - basically, the ones that can stand having it. (This sounds believable, but we still have to explain the growth in the presence of arsenate).

3. The bacteria prefer phosphorus, but are able to take up and incorporate substantial amounts of arsenate. This arsenate, though, is sequestered somehow and is not substituting for phosphate in the organisms' biochemistry. (In this case, you'd wonder why the bacteria are taking up arsenate at all, if they're just having to ditch it. Perhaps they can't pump it out efficiently enough?) And again, we'd have to explain the growth in the presence of arsenate - for a situation like this, you'd think that it would hurt, rather than help, by imposing an extra metabolic burden. I'm assuming here, for the sake of argument, that the whole grows-in-the-presence-of-arsenate story is correct.

Claim 3: the bacteria incorporate arsenate into their DNA as a replacement for phosphate. This is an attempt to distinguish between the possibilities just listed. I think the authors chose bacterial DNA because DNA has plenty of phosphate, is present in large quantities and can be isolated by known procedures (as opposed to lots of squirrely little phosphorylated small molecules), and would be a dramatic example of arsenate incorporation. These experiments were done by giving the bacteria radiolabeled arsenate, and looking for its distribution.

Rosie Redfield has a number of criticisms of the way the authors isolated the DNA in these experiments, and again, since I'm not a microbiologist, I'll stand back and let that argument take place without getting involved. It's worth noting, though, that most (80%) of the label was in the phenol fraction of the initial extraction, which should have proteins and smaller-molecular-weight stuff in it. Very little showed up in the chloroform fraction (where the lipids would be), and most of the rest (11%) was in the final aqueous layer, where the nucleic acids should accumulate. Of course, if (water-soluble) arsenate was just hanging around, and not being incorporated into biomolecules, the distribution of the label might be pretty similar.

I think a very interesting experiment would be to take non-arsenate-grown GFAJ-1 bacteria, make pellets out of them as was done in this procedure, and then add straight radioactive arsenate to that mixture, in roughly the amounts seen in the arsenate-grown bacteria. How does the label distribute then, as the extractions go on?

Here we come to one of my biggest problems with the paper, after a close reading. When you look at the Supplementary Material, Table S1, you see that the phenol extract (where most of the label was) hardly shows any difference in total arsenic amounts, whether the cells were grown in high arsenate/no phosphate or high phosphate/no arsenate. The first group is just barely higher than the second, and probably within error bars, anyway.

That makes me wonder what's going on - if these cells are taking up arsenate (and especially if they grow on it), why don't we see more of it in the phenol fraction, compared to bacteria that aren't exposed to it at all? Recall that when arsenic was measured by dry weight, there was a real difference. Somewhere there has to be a fraction that shows a shift, and if it's not in the place where 80% of the radiolabel goes, then where could that be?

I think that the authors would like to say "It's in the DNA", but I don't see that data as supporting enough of a change in the arsenic levels. In fact, although they do show some arsenate in purified DNA, the initial DNA/RNA extract from the two groups (high As/no P and no As/high P) shows more arsenic in the bacteria that weren't getting arsenic at all. (These are the top two lines in Table S1 continued, top of page 11 in the Supplementary Information). The arsenate-in-the-DNA conclusion of this paper is, to my mind, absolutely the weakest part of the whole thing.

Conclusion: All in all, I'm very interested in these experiments, but I'm now only partly convinced. So what do the authors need to shore things up? As a chemist, I'm going to ask for more chemical evidence. I'd like to see some mass spec work done on cellular extracts, comparing the high-arsenic and no-arsenic groups. Can we see evidence of arsenate-for-phosphate in the molecular weights? If DNA was good enough to purify with arsenate still on it, how about the proteome? There are a number of ways to look that over by mass-spec techniques, and this really needs to be done.

Can any of the putative arsenate-containing species be purified by LC? LC/mass spec data would be very strong evidence indeed. I'd recommend that the authors look into this as soon as possible, since this could address biomolecules of all sizes. I would assume that X-ray crystallography data on any of these would be a long shot, but if the LC purification works, it might be possible to get enough to try. It would certainly shut everyone up!

Update: this seems like the backlash day. Nature News has a piece up, which partially quotes from this article by Carl Zimmer over at Slate.

Comments (35) + TrackBacks (0) | Category: Biological News | General Scientific News

November 17, 2010

Roche Has Problems - But RNA Interference Has More

Email This Entry

Posted by Derek

So Roche is (as long rumored) going through with a 6% headcount reduction, worldwide. That's bad news, but not unexpected bad news, and it certainly doesn't make them stand out from the rest of big pharma. This sort of headline has been relentlessly applicable for several years now.

What surprised me was their announcement that they're giving up on RNA interference as a drug mechanism. That's the biggest vote of no-confidence yet for RNAi, which has been a subject of great interest (and a lot of breathless hype) for some years now. (There's been a lot of discussion around here about the balance between those two).

That's not the sort of news that the smaller companies in this space needed. Alnylam, considered the leader in the field, already had over $300 million from Roche (back in 2007), but so much for anything more. The company is already putting on a brave face. It has not been a good fall season: they were already having to cut back after Novartis recently thanked them for their five-year deal, shook their hand, and left. To be sure, Novartis said that they're going to continue to develop the targets from the collaboration, and would pay milestones to Alnylam as any of them progress - but they apparently didn't feel as if they needed Alnylam around while they did so.

Then there's Tekmira, who had a deal with Roche for nanoparticle RNAi delivery. They're out with a statement this morning, too, saying (correctly) that they have other deals which are still alive. But there's no way around the fact that this is bad news.

What we don't know is what's going on in the other large companies (the Mercks, Pfizers, and so on) who have been helping to fund a lot of this work. Are they wondering what in the world Roche is up to? Looking at it as a market opportunity, and glad to see less competition? Or wishing that they could do the same thing?

Comments (23) + TrackBacks (0) | Category: Biological News | Business and Markets

November 12, 2010

And Now, the Retractome

Email This Entry

Posted by Derek

Back in January, I wrote about the controversial "Reactome" paper that had appeared in Science. This is the one that claimed to have immobilized over 1600 different kinds of biomolecules onto nanoparticles, and then used chemical means to set off a fluorescence assay when any protein recognized them. When actual organic chemists got a look at their scheme - something that apparently never happened during the review process - flags went up. As shown in that January post (and all over the chemical blogging world), the actual reactions looked, well, otherworldly.

Science was already backtracking within the first couple of months, and back in the summer, an institutional committee recommended that it be withdrawn. Since then, people have been waiting for the thunk of another shoe dropping, and now it's landed: the entire paper has been retracted. (More at C&E News). The lead author, though, tells Nature that other people have been using his methods, as described, and that he's still going to clear everything up.

I'm not sure how that's going to happen, but I'll be interested to see the attempt being made. The organic chemistry in the original paper was truly weird (and truly unworkable), and the whole concept of being able to whip up some complicated reaction schemes in the presence of a huge number of varied (and unprotected) molecules didn't make sense. The whole thing sounded like a particularly arrogant molecular biologist's idea of how synthetic chemistry should work: do it like a real biologist does! Sweeping boldly across the protein landscape, you just make them all work at the same time - haven't you chemists ever heard of microarrays? Of proteomics? Why won't you people get with the times?

And the sorts of things that do work in modern biology would almost make you believe in that approach, until you look closely. Modern biology depends on a wonderful legacy, a set of incredible tools bequeathed to us by billions of years of the most brutal product-development cycles imaginable (work or quite literally die). Organic chemistry, though, had no Aladdin's cave of enzymes and exquisitely adapted chemistries to stumble into. We've had to work everything out ourselves. And although we've gotten pretty good at it, the actions of something like RNA polymerase still look like the works of angels in comparison.

Comments (13) + TrackBacks (0) | Category: Biological News | The Scientific Literature

November 8, 2010

Epigenetics: The Code Isn't The Object

Email This Entry

Posted by Derek

Here's an excellent background article on epigenetics, especially good for getting up to speed if you haven't had the opportunity to think about what gene transcription must really be like down on a molecular level.

This also fits in well with some of the obituaries that I and others have written for the turn-of-the-millennium genomics frenzy. There is, in short, an awful lot more to things than just the raw genetic code. And as time goes on, the whole the-code-is-destiny attitude that was so pervasive ten years ago (the air hasn't completely cleared yet) is looking more and more mistaken.

Comments (17) + TrackBacks (0) | Category: Biological News

November 3, 2010

TRIM21: A Cure For the Common Cold? Maybe Not. . .

Email This Entry

Posted by Derek

This article is getting the "cure for the common cold" push in a number of newspaper headlines and blog posts. I'm always alert for those, because, as a medicinal chemist, I can tell you that finding a c-for-the-c-c is actually very hard. So how does this one look?

I'd say that this falls into the "interesting discovery, confused reporting" category, which is a broad one. The Cambridge team whose work is getting all the press has actually found something that's very much worth knowing: that antibodies actually work inside human cells. Turns out that when antibody-tagged viral particles are taken up into cells, they mark the viruses for destruction in the proteasome, a protein complex that's been accurately compared to an industrial crushing machine at a recycling center. No one knew this up until now - the thought had been that once a virus succeeds in entering the cell, the game was pretty much up. But now we know that there is a last line of defense.

Some of the press coverage makes it sound as if this is some new process, a trick that cells have now been taught to perform. But the point is that they've been doing it all along (at least to nonenveloped viruses with antibodies on them), and that we've just now caught on. Unfortunately, that means that all our viral epidemics take place in the face of this mechanism (although they'd presumably be even worse without it). So where does this "cure for the common cold" stuff come in?

That looks like confusion over the mechanism to me. Let's go to the real paper, which is open-access in PNAS. The key protein in this process has been identified as tripartite-motif 21 (TRIM21), which recognizes immunoglobulin G and binds to antibodies extremely tightly (sub-nanomolar). This same group identified this protein a few years ago, and found that it's highly conserved across many species, and binds an antibody region that never changes - strong clues that it's up to something important.

Another region of TRIM21 suggested what that might be. It has a domain that's associated with ubiquitin ligase activity, and tagging something inside the cell with ubiquitin is like slapping a waste-disposal tag on it. Ubiquitinated proteins tend either to get consumed where they stand or to get dragged off to the proteasome. And sure enough, a compound that's known to inhibit the action of the proteasome also wiped out the TRIM21-based activity. A number of other tests (for levels of ubiquitination, localization within the cell, and so on) all point in the same direction, so this looks pretty solid.

But how do you turn this into a therapy, then? The newspaper articles have suggested it as a nasal spray, which raises some interesting questions. (Giving it orally is a nonstarter, I'd think: with rare exceptions, we tend to just digest every protein that gets into the gut, so all a TRIM21 pill would do is provide you with a tiny (and expensive) protein supplement). Remember, this is an intracellular mechanism; there's presumably not much of a role for TRIM21 outside the cell. Would a virus/antibody/TRIM21 complex even get inside the cell to be degraded? On the other hand, if that kept the virus from even entering the cell, that would be an effective therapy all its own, albeit through a different mechanism than ever intended.

But hold on: there must be some reason why this mechanism doesn't always work perfectly - otherwise, no nonenveloped virus would have much of a chance. My guess is that the TRIM21 pathway is pretty efficient, but that enough viral particles miss getting labeled by antibodies to keep it from always triggering. If that's true, then TRIM21 isn't the limiting factor here - it's the antibody response. And if so, it could be tough to rev up this pathway.

Still, these are early days. I'm very happy to see this work, because it shows us (again) how much we don't know about some very important cellular processes. Until this week, no one ever realized that there was such a thing as an intracellular antibody response. What else don't we know?

Comments (12) + TrackBacks (0) | Category: Biological News | Infectious Diseases

November 1, 2010

Are Genes Patentable Or Not?

Email This Entry

Posted by Derek

There seems to be some disagreement within the US government on the patentability of human genes. The Department of Justice filed an amicus brief (PDF) in the Myriad Genetics case involving the BRCA genes, saying that it believes that genes are products of nature, and therefore unpatentable.

But this runs counter to the current practice of the US Patent and Trademark Office, which does indeed grant such patents. No lawyers from the PTO appear on the brief, which may be a significant clue as to how they feel about this. And at any rate, gene patentability is going to be worked out in the courts, rather than by any sort of statement from any particular agency, which takes us back to the Myriad case. . .

Comments (20) + TrackBacks (0) | Category: Biological News | Patents and IP

October 21, 2010

Laser Nematode Surgery!

Email This Entry

Posted by Derek

There's a headline I've never written before, for sure. A new paper in PNAS describes an assay in nematodes to look for compounds that have an effect on nerve regeneration. That means that you have to damage neurons first, naturally, and doing that on something as small (and as active) as a nematode is not trivial.

The authors (a team from MIT) used microfluidic chips to direct a single nematode into a small chamber where it's held down briefly by a membrane. Then an operator picks out one of its neurons on an imaging screen, whereupon a laser beam cuts it. The nematode is then released into a culture well, where it's exposed to some small molecule to see what effect that has on the neuron's regrowth. It takes about 20 seconds to process a single C. elegans, in case you're wondering, and I can imagine that after a while you'd wish that they weren't streaming along quite so relentlessly.

The group tried about 100 bioactive molecules, targeting a range of known pathways, to see what might speed up or slow down nerve regeneration. As it happens, the highest hit rates were among the kinase inhibitors and compounds targeting cytoskeletal processes. (By contrast, nothing affecting vesicle trafficking or histone deacetylase activity showed any effect). The most significant hit was an old friend to kinase researchers, staurosporine. Interestingly, this effect was only seen on particular subtypes of neurons, suggesting that they weren't picking up some sort of broad-spectrum regeneration pathway.

The paper acknowledges that staurosporine has a number of different activities, but treats it largely as a PKC inhibitor. I'm not sure that that's a good idea, personally - I'd be suspicious of pinning any specific activity to that compound without an awful lot of follow-up, because it's a real Claymore mine when it comes to kinases. The MIT group did check to see if caspases (and apoptotic pathways in general) were involved, since those are well-known effects of staurosporine treatment, and they seem to have ruled those out. And they also followed up with some other PKC inhibitors, chelerythrine and Gö 6983, and these showed similar effects.

So they may be right about this being a PKC pathway, but that's a tough one to nail down. (And even if you do, there are plenty of PKC isoforms doing different things, but there aren't enough selective ligands known to unravel all those yet). Chelerythrine inhibits alanine aminotransferase, has had some doubts expressed about it before in PKC work, and also binds to DNA, which may be responsible for some of its activity in cells. Gö 6983 seems to be a better tool, but it is in the same broad chemical class as staurosporine itself, so as a medicinal chemist I still find myself giving it the fishy eye.

This is very interesting work, nonetheless, and it's the sort of thing that no one's been able to do before. I'm a big fan of using the most complex systems you can to assay compounds, and living nematodes are a good spot to be in. I'd be quite interested in a broader screen of small molecules, but 20 seconds per nematode surgery is still too slow for the sort of thing a medicinal chemist like me would like to run - a diversity set of, say, ten or twenty thousand compounds, for starters. And there's always the problem we were talking about here the other day, about how easy it is to get compounds into nematodes at all. I wonder if there were some false negatives in this screen just because the critters had no exposure?
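
(Here's why those 20 seconds add up so fast. The worms-per-compound figure below is my own assumption, not something from the paper - but with any reasonable replicate count, a ten-thousand-compound screen turns into weeks of continuous laser surgery before you've cultured or scored a single well.)

    # Throughput estimate for a med-chem-sized nematode screen
    seconds_per_worm = 20
    worms_per_compound = 10        # assumed replicates per compound (my guess, not the paper's)
    compounds = 10_000             # a modest diversity set

    total_hours = compounds * worms_per_compound * seconds_per_worm / 3600
    print(f"~{total_hours:,.0f} hours of cutting (~{total_hours/24:,.0f} days), nonstop")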

Comments (16) + TrackBacks (0) | Category: Biological News | Drug Assays | The Central Nervous System

October 7, 2010

More on Garage Biotech

Email This Entry

Posted by Derek

Nature has a good report and accompanying editorial on garage biotechnology, which I wrote about earlier this year.

. . .Would-be 'biohackers' around the world are setting up labs in their garages, closets and kitchens — from professional scientists keeping a side project at home to individuals who have never used a pipette before. They buy used lab equipment online, convert webcams into US$10 microscopes and incubate tubes of genetically engineered Escherichia coli in their armpits. (It's cheaper than shelling out $100 or more on a 37 °C incubator.) Some share protocols and ideas in open forums. Others prefer to keep their labs under wraps, concerned that authorities will take one look at the gear in their garages and label them as bioterrorists.

For now, most members of the do-it-yourself, or DIY, biology community are hobbyists, rigging up cheap equipment and tackling projects that — although not exactly pushing the boundaries of molecular biology — are creative proof of the hacker principle. . .

The article is correct when it says that a lot of what's been written about the subject is hype. But not all of it is. I continue to think that as equipment becomes cheaper and more capable, which is happening constantly, more and more areas of research will move into the "garage-capable" category. Biology is suited to this sort of thing, because there are such huge swaths of it that aren't well understood, and there are always more experiments to be set up than anyone can run.

And it's encouraging to see that the FBI isn't coming down hard on these people, but rather trying to stay in touch with them and learn about the field. Considering where and how some of the largest tech companies in the US started out, I would not want to discourage curious and motivated people from exploring new technologies on their own - just the opposite. Scientific research is most definitely not a members-only club; anyone who thinks that they have an interesting idea should come on down. So while I do worry about the occasional maniac misanthrope, I think I'm willing to take the chance. And besides, the only way we're going to be able to deal with the lunatics is through better technology of our own.

Comments (34) + TrackBacks (0) | Category: Biological News | Who Discovers and Why

October 6, 2010

Chemical Biology: Engineering Enzymes

Email This Entry

Posted by Derek

I mentioned directed evolution of enzymes the other day as an example of chemical biology that’s really having an industrial impact. A recent paper in Science from groups at Merck and Codexis really highlights this. The story they tell had been presented at conferences, and had impressed plenty of listeners, so it’s good to have it all in print.

It centers on a reaction that’s used to produce the diabetes therapy Januvia (sitagliptin). There’s a key chiral amine in the molecule, which had been produced by asymmetric hydrogenation of an enamine. On scale, though, that’s not such a great reaction. Hydrogenation itself isn’t the biggest problem, although if you could ditch a pressurized hydrogen step for something that can’t explode, that would be a plus. No, the real problem was that the selectivity wasn’t quite what it should be, and the downstream material was contaminated with traces of rhodium from the catalyst.

So they looked at using a transaminase enzyme instead. That's a good idea, because transaminases are one of those enzyme classes that do something that we organic chemists generally can't do very well – in this case, change a ketone to a chiral amino group in one step. (It takes another amine and oxidizes that on the other side of the reaction). We've got chiral reductions of imines and enamines, true, but those almost always need a lot of fiddling around for catalysts and conditions (and, as in this case, can cause their own problems even when they work). And going straight to a primary amine can be, in any case, one of the more difficult transformations. Ammonia itself isn't too reactive, and you don't have much of a steric handle to work with.
[Image: sitagliptin transaminase reaction scheme]

But transaminases have their idiosyncrasies (all enzymes do). They generally will only accept methyl ketones as substrates, and that's what these folks found when they screened all the commercially available enzymes. Looking over the structure (well, a homology model of the structure) of one of these (ATA-117), which would be expected to give the right stereochemistry if it could be made to give anything whatsoever, gave some clues. There's a large binding pocket on one side of the ketone, which still wasn't quite large enough for the sitagliptin intermediate, and a small site on the other side, which definitely wasn't going to take much more than a methyl group.

They went after the large binding pocket first. A less bulky version of the desired substrate (which had been turned, for now, into a methyl ketone) showed only 4% conversion with the starting enzymes. Mutating the various amino acids that looked important for large-pocket binding gave some hope. Changing a serine to phenylalanine, for example, cranked up the activity by 11-fold. The other four positions were, as the paper said, “subjected to saturation mutagenesis”, and they also produced a combinatorial library of 216 multi-mutant variations.

Therein lies a tale. Think about the numbers here: according to the supplementary material for the paper, they varied twelve residues in the large binding pocket, with (say) twenty amino acid possibilities per. So you’ve got 240 enzyme variants to make and test. Not fun, but it’s doable if you really want to. But if you’re going to cover all the multi-mutant space, that’s twenty to the 12th, or over four quadrillion enzyme candidates. That’s not going to happen with any technology that I can easily picture right now. And you’re going to want to sample this space, because enzyme amino acid residues most certainly do affect each other. Note, too, that we haven’t even discussed the small pocket, which is going to have to be mutated, too.
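
(The arithmetic behind that paragraph, laid out explicitly - nothing here beyond the numbers already quoted:)

    # Single-point mutants vs. the full combinatorial space for the large pocket
    residues = 12          # positions varied in the large binding pocket
    aa_options = 20        # amino acid choices per position

    single_mutants = residues * aa_options     # 240 variants - tedious but doable
    full_space = aa_options ** residues        # 20**12, over four quadrillion
    print(f"{single_mutants} single mutants vs {full_space:.3g} full combinations")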

So there’s got to be some way to cut this problem down to size, and that (to my mind) is one of the things that Codexis is selling. They didn’t, for example, get a darn thing out of the single-point-mutation experiments. But one member of a library of 216 multi-mutant enzymes showed the first activity toward the real sitagliptin ketone precursor. This one had three changes in the small pocket and that one P-for-S in the large, and identifying where to start looking for these is truly the hard part. It appears to have been done through first ruling out the things that were least likely to work at any given residue, followed by an awful lot of computational docking.

It’s not like they had the Wonder Enzyme just yet, although just getting anything to happen at all must have been quite a reason to celebrate. If you loaded two grams/liter of ketone, and put in enzyme at 10 grams/liter (yep, ten grams per liter, holy cow), you got a whopping 0.7% conversion in 24 hours. But as tiny as that is, it’s a huge step up from flat zero.

Next up was a program of several rounds of directed evolution. All the variants that had shown something useful were taken through a round of changes at other residues, and the best of these combinations were taken on further. That statement, while true, gives you no feel at all for what this stuff is like, though. There are passages like this in the experimental details:

At this point in evolution, numerous library strategies were employed and as beneficial mutations were identified they were added into combinatorial libraries. The entire binding pocket was subjected to saturation mutagenesis in round 3. At position 69, mutations TAS and C were improved over G. This is interesting in two aspects. First, V69A was an option in the small pocket combinatorial library, but was less beneficial than V69G. Second, G69T was improved (and found to be the most beneficial in the next round) suggesting that something other than sterics is involved at this position as it was a Val in the starting enzyme. At position 137, Thr was found to be preferred over Ile. Random mutagenesis generated two of the mutations in the round 3 variant: S8P and G215C. S8P was shown to increase expression and G215C is a surface exposed mutation which may be important for stability. Mutations identified from homologous enzymes identified M94I in the dimer interface as a beneficial mutation. In subsequent rounds of evolution the same library strategies were repeated and expanded. Saturation mutagenesis of the secondary sphere identified L61Y, also at the dimer interface, as being beneficial. The repeated saturation mutagenesis of 136 and 137 identified Y136F and T137E as being improved.

There, that wasn’t so easy, was it? This should give you some idea of what it’s like to engineer an enzyme, and what it’s like to go up against a billion years of random mutation. And that’s just the beginning – they ended up doing ten rounds of mutations, and had to backtrack some along the way when some things that looked good turned out to dead-end later on. Changes were taken on to further rounds not only on the basis of increased turnover, but for improved temperature and pH stability, tolerance to DMSO co-solvent, and so on. They ended up, over the entire process, screening a total of 36,480 variations, which is a hell of a lot, but is absolutely infinitesimal compared to the total number of possibilities. Narrowing that down to something feasible is, as I say, what Codexis is selling here.

And what came out the other end? Well, recall that the known enzymes all had zero activity, so it’s kind of hard to calculate improvement from that. Comparing to the first mutant that showed anything at all, they ended up with something that was about 27,000 times better. This has 27 mutations from the original known enzyme, so it’s a rather different beast. The final enzyme runs in DMSO/water, at substrate loadings of up to 250 g/liter with 3 weight per cent enzyme, and turns isopropylamine into acetone while it’s converting the prositagliptin ketone to product. It is completely stereoselective (they’ve never seen the other amine), and needless to say involves no hydrogen tanks and furnishes material that is not laced with rhodium metal.
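
(Setting the conversion numbers aside, just comparing the catalyst-to-substrate ratios quoted above gives a feel for how far this came; the figures come straight from the loadings mentioned, nothing else is assumed.)

    # Enzyme-to-substrate mass ratios, first active mutant vs. final evolved enzyme
    first_hit = 10 / 2   # 10 g/L enzyme on 2 g/L ketone -> 5 g enzyme per g substrate
    final     = 0.03     # 3 weight per cent enzyme      -> 0.03 g enzyme per g substrate
    print(f"~{first_hit / final:.0f}-fold less enzyme per gram of ketone - "
          "and the first hit only managed 0.7% conversion anyway")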

This is impressive stuff. You'll note, though, the rather large amount of grunt work that had to go into it, although keep in mind, the potential amount of grunt work would be more than the output of the entire human race. To date. Just for laughs, an exhaustive mutational analysis of twenty-seven positions would give you 1.3 times ten to the thirty-fifth possibilities to screen, and that's if you know already which twenty-seven positions you're going to want to look at. One microgram of each of them would add up to the mass of a couple of dozen Earths, not counting the vials. Not happening.
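
(The back-of-the-envelope, for anyone who wants to check it - Earth's mass taken as about 6e24 kg:)

    # Exhaustive mutagenesis at 27 positions: how absurd would it be?
    variants = 20 ** 27            # ~1.3e35 sequences
    microgram_kg = 1e-9            # one microgram of each variant
    earth_kg = 5.97e24

    total_kg = variants * microgram_kg
    print(f"{variants:.2g} variants; {total_kg / earth_kg:.0f} Earth masses of protein")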

Also note that this is the sort of thing that would only be done industrially, in an applied research project. Think about it: why else would anyone go to this amount of trouble? The principle would have been proven a lot earlier in the process, and the improvements even part of the way through still would have been startling enough to get your work published in any journal in the world and all your grants renewed. Academically, you'd have to be out of your mind to carry things to this extreme. But Merck needs to make sitagliptin, and needs a better way to do that, and is willing to pay a lot of money to accomplish that goal. This is the kind of research that can get done in this industry. More of this, please!

Comments (33) + TrackBacks (0) | Category: Biological News | Chemical Biology | Chemical News | Drug Development

September 23, 2010

Chemical Biology - The Future?

Email This Entry

Posted by Derek

I agree with many of the commenters around here that one of the most interesting and productive research frontiers in organic chemistry is where it runs into molecular biology. There are so many extraordinary tools that have been left lying around for us by billions of years of evolution; not picking them up and using them would be crazy.

Naturally enough, the first uses have been direct biological applications - mutating genes and their associated proteins (and then splicing them into living systems), techniques for purification, detection, and amplification of biomolecules. That's what these tools do, anyway, so applying them like this isn't much of a shift (which is one reason why so many of these have been able to work so well). But there's no reason not to push things further and find our own uses for the machinery.

Chemists have been working on that for quite a while. We look at enzymes and realize that these are the catalysts that we really want: fast, efficient, selective, working at room temperature under benign conditions. If you want molecular-level nanotechnology (not quite down to atomic!), then enzymes are it. The ways that they manipulate their substrates are the stuff of synthetic organic daydreams: hold down the damn molecule so it stays in one spot, activate that one functional group because you know right where it is and make it do what you want.

All sorts of synthetic enzyme attempts have been made over the years, with varying degrees of success. None of them have really approached the biological ideals, though. And in the "if you can't beat 'em, join 'em" category, a lot of work has gone into modifying existing enzymes to change their substrate preferences, product distributions, robustness, and turnover. This isn't easy. We know the broad features that make enzymes so powerful - or we think we do - but the real details of how they work, the whole story, often isn't easy to grasp. Right, that oxyanion hole is important: but just exactly how does it change the energy profile of the reaction? How much of the rate enhancement is due to entropic factors, and how much to enthalpic ones? Is lowering the energy of the transition state the key, or is it also a subtle raising of the energy of the starting material? What energetic prices are paid (and earned back) by the conformational changes the protein goes through during the catalytic cycle? There's a lot going on in there, and each enzyme avails itself of these effects differently. If it weren't such a versatile toolbox, the tools themselves wouldn't come out being so darn versatile.

There's a very interesting paper that's recently come out on this sort of thing, to which I'll devote a post by itself. But there are other biological frontiers besides enzymes. The machinery to manipulate DNA is exquisite stuff, for example. For quite a while, it wasn't clear how we organic chemists could hijack it for our own uses - after all, we don't spend a heck of a lot of time making DNA. But over the years, the technique of adding DNA segments onto small molecules and thus getting access to tools like PCR has been refined. There are a number of applications here, and I'd like to highlight some of those as well.

Then you have things like aptamers and other recognition technologies. These are, at heart, ways to try to recapitulate the selective binding that antibodies are capable of. All sorts of synthetic-antibody schemes have been proposed - from manipulating the native immune processes themselves, to making huge random libraries of biomolecules and zeroing in on the potent ones (aptamers), to completely synthetic polymer creations. There's a lot happening in this field, too, and the applications to analytical chemistry and purification technology are clear. This stuff starts to merge with the synthetic enzyme field after a point, too, and as we understand more about enzyme mechanisms that process looks to continue.

So those are three big areas where molecular biology and synthetic chemistry are starting to merge. There are others - I haven't even touched here on in vivo reactions and activity-based proteomics, for example, which is great stuff. I want to highlight these things in some upcoming posts, both because the research itself is fascinating, and because it helps to show that our field is nowhere near played out. There's a lot to know; there's a lot to do.

Comments (33) + TrackBacks (0) | Category: Analytical Chemistry | Biological News | Chemical News | General Scientific News | Life As We (Don't) Know It

August 26, 2010

Vinca Alkaloids, And Where They End Up

Email This Entry

Posted by Derek

The Vinca alkaloids are some of the most famous chemotherapy drugs around - vincristine and vinblastine, the two most widely used, are probably shown in every single introduction to natural products chemistry that's been written in the past fifty years. But making them synthetically is a bear, and extracting them from the plant is a low-yielding pain.

A new paper in PNAS shows that there's still a lot that we don't know about these compounds. What has been known for a long time is that they're derived from two precursor alkaloids, vindoline and catharanthine. This new work shows that the plants deliberately keep those two compounds separated from each other, which helps account for the low yield of the final compounds.

As it turns out, if you dip the leaves in chloroform, which dissolves the waxy coating from the surface, you find that basically all the catharanthine is there. At the same time, even soaking the leaves in chloroform for as long as an hour hardly extracts any vindoline - it's sequestered away inside the cells of the leaves. The enzymes responsible for biosynthesis are probably also in different locations (or cell types), and there are unknown transport mechanisms involved as well. This is the first time anyone's found such a secreted alkaloid mechanism.

Why does Vinca go to all the trouble? For one thing, catharanthine is a defense against insect pests, and it also seems to inhibit attack by fungal spores. And what the vindoline is doing, I'm not sure - but the plant probably has a good reason to keep it away from the catharanthine, because producing too much vincristine, vinblastine, etc. would probably kill off its dividing cells, the same way it works in chemotherapy.

The authors suggest that people should start looking around to see if other plants have similar secretion mechanisms. And this makes me wonder if this could be a way to harvest natural products - do the plants survive after having their leaves dipped in solvent? If they do, do they then re-secrete more natural waxes to catch up? I'm imagining a line of plants, growing in pots on some sort of conveyor line, flipping upside down for a quick wash-and-shake through a trough of chloroform, and heading back into the greenhouse. . .but then, I have a vivid imagination. . .

Comments (9) + TrackBacks (0) | Category: Biological News | Natural Products

August 18, 2010

Reverse-Engineering the Human Brain? Really?

Email This Entry

Posted by Derek

News like today's gamma-secretase failure makes me want to come down even harder on stuff like this. Ray Kurzweil, whom I've written about before, seems to be making ever-more-optimistic predictions with ever-more-shortened timelines. This time, he's saying that reverse-engineering the human brain may be about a decade away.

I hope he's been misquoted, or that I'm not understanding him correctly. But some of his other statements from this same talk make me wonder:

Here's how that math works, Kurzweil explains: The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes, according to Kurzweil.

About half of that is the brain, which comes down to 25 million bytes, or a million lines of code.
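
(For what it's worth, here's the quoted arithmetic made explicit - these are Kurzweil's numbers, not an endorsement of the logic, and the bytes-per-line figure at the end is just whatever you have to assume to get his "million lines of code".)

    # Walking through the quoted Kurzweil arithmetic
    base_pairs = 3e9
    bits = base_pairs * 2                # 2 bits per base (A/C/G/T)
    raw_bytes = bits / 8                 # 750 MB - "about 800 million bytes"
    compressed = 50e6                    # his lossless-compression estimate
    brain_bytes = compressed / 2         # "about half of that is the brain"
    lines_of_code = 1e6                  # his "million lines of code"

    print(raw_bytes / 1e6, "MB raw;", brain_bytes / 1e6, "MB for the brain;",
          brain_bytes / lines_of_code, "bytes per line of code")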

This is hand-waving, and at a speed compatible with powered flight. It would be much less of a leap to say that the Oxford English Dictionary and a grammar textbook are sufficient to write the plays that Shakespeare didn't get around to. And while I don't believe that the brain is a designed artifact like The Tempest (or Tempest II: The Revenge of Caliban), I do most certainly believe that it is an object whose details will keep us busy for more than ten years.

Saying that its entire design is in the genome is deeply silly, mistaken, and misleading. The information in the genome takes advantage of so much downstream processing and complexity in a way that no computer program ever has, and that makes comparing it to lines of code laughable. I mean, lines of code have basically one level of reality to them: they're instructions to deal with data. But the genomic code is a set of instructions to make another set of instructions (RNA), which tells how to make another even more complex pile of multifunctional tools (proteins), which go on to do a bewildering variety of other things. And each of these can feed back on themselves, co-operate with and modulate the others in real time, and so on. Billions of years of relentless pressure (work well, or die) have shaped every intricate detail. The result makes the most complex human designs look like toys.

So here I am, absolutely stunned and delighted when I can make tiny bits of this machinery alter their course in a way that doesn't make the rest of it fall to pieces - a feat that takes years of unrelenting labor and hundreds of millions of dollars. And Ray Kurzweil is telling me that it's all just code. And not that much code, either. Have it broken down soon we will, no sweat. Sure.

I see that PZ Myers has come to the same conclusion. I don't see how anyone who's ever worked in molecular biology, physiology, cell biology, or medicinal chemistry could fail to, honestly. . .

Comments (37) + TrackBacks (0) | Category: Biological News | The Central Nervous System

August 9, 2010

Maybe We Should Make It More of a Game

Email This Entry

Posted by Derek

David Baker's lab at the University of Washington has been working on several approaches to protein structure problems. I mentioned Rosetta@home here, and now the team has published an interesting paper on another one of their efforts, FoldIt.

That one, instead of being a large-scale passive computation effort, is more of an active process - in fact, it's active enough that it's designed as a game:

We hypothesized that human spatial reasoning could improve both the sampling of conformational space and the determination of when to pursue suboptimal conformations if the stochastic elements of the search were replaced with human decision making while retaining the deterministic Rosetta algorithms as user tools. We developed a multiplayer online game, Foldit, with the goal of producing accurate protein structure models through gameplay. Improperly folded protein conformations are posted online as puzzles for a fixed amount of time, during which players interactively reshape them in the direction they believe will lead to the highest score (the negative of the Rosetta energy). The player’s current status is shown, along with a leader board of other players, and groups of players working together, competing in the same puzzle.
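
(Just to make that division of labor concrete, here's a toy sketch - nothing to do with Rosetta's actual code or scoring function, just a generic energy-minimization loop in which the move proposal is a pluggable step that could come from a random perturbation or from a human player.)

    import random

    # Toy illustration only: a minimization loop where the "move" is pluggable
    def minimize(state, energy, propose_move, steps=1000):
        best, best_e = state, energy(state)
        for _ in range(steps):
            candidate = propose_move(best)   # random proposal OR a human-chosen rearrangement
            e = energy(candidate)
            if e < best_e:                   # keep anything that lowers the score
                best, best_e = candidate, e
        return best, best_e

    # A trivial stand-in "protein": a list of angles, energy = sum of squares
    energy = lambda angles: sum(a * a for a in angles)
    random_move = lambda angles: [a + random.uniform(-0.1, 0.1) for a in angles]

    start = [random.uniform(-3, 3) for _ in range(10)]
    print(minimize(start, energy, random_move)[1])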

So how's it working out? Pretty well, actually. It turns out that human players are willing to do more extensive rearrangements to the protein chains in the quest for lower energies than computational algorithms are. They're also better at evaluating which positions to start from. Both of these remind me of the differences between human chess play and machine play, as I understand them, and probably for quite similar reasons. Baker's team is trying to adapt the automated software to use some of the human-style approaches, when feasible.

There are several dozen participants who clearly seem to have done better in finding low-energy structures than the rest of the crowd. Interestingly, they're mostly not employed in the field, with "Business/Financial/Legal" making up the largest self-declared group in a wide range of fairly evenly distributed categories. Compared to the "everyone who's played" set, the biggest difference is that there are far fewer students in the high-end group, proportionally. That group of better problem solvers also tends to be slightly more female (although both groups are still mostly men), definitely older (that loss of students again), and less well-stocked with college graduates and PhDs. Make of that what you will.

Their conclusion is worth thinking about, too:

The solution of challenging structure prediction problems by Foldit players demonstrates the considerable potential of a hybrid human–computer optimization framework in the form of a massively multiplayer game. The approach should be readily extendable to related problems, such as protein design and other scientific domains where human three-dimensional structural problem solving can be used. Our results indicate that scientific advancement is possible if even a small fraction of the energy that goes into playing computer games can be channelled into scientific discovery.

That's crossed my mind, too. In my more pessimistic moments, I've imagined the human race gradually entertaining itself to death, or at least to stasis, as our options for doing so become more and more compelling. (Reading Infinite Jest a few years ago probably exacerbated such thinking). Perhaps this is one way out of that problem. I'm not sure that it's possible to make a game compelling enough when it's hooked up to some sort of useful gear train, but it's worth a try.

Comments (16) + TrackBacks (0) | Category: Biological News | In Silico | Who Discovers and Why

July 29, 2010

Craig Venter, Venting

Email This Entry

Posted by Derek

Craig Venter has never been a person to keep a lot of things bottled up inside him. But check out this interview with Der Spiegel for even more candor than usual. For instance:

SPIEGEL: Some scientists don't rule out a belief in God. Francis Collins, for example …

Venter: … That's his issue to reconcile, not mine. For me, it's either faith or science - you can't have both.

SPIEGEL: So you don't consider Collins to be a true scientist?

Venter: Let's just say he's a government administrator.

There's more where that came from. The title is "We Have Learned Nothing From the Genome", and it just goes right on from there. Well worth a look.

Comments (78) + TrackBacks (0) | Category: Biological News

July 7, 2010

XMRV and Chronic Fatigue: You Thought You Were Confused Before

Email This Entry

Posted by Derek

Time to revisit the chronic fatigue/XMRV controversy, because it's become even crazier. To catch up, there was a 2009 report in Science that this little-known virus correlated strongly with patients showing the clinical syndrome. Criticism was immediate, with several technical comments and rebuttals coming out in the journal. Then researchers from the UK and Holland strongly challenged the original paper's data and said that they could not reproduce anything like it.

Recently I (and a lot of other people who write about science) received an e-mail claiming that a paper was about to come out from a group at the NIH that confirmed the first report. I let that one go by, since I thought I'd wait for, you know, the actual paper (for one thing, that would let me be sure that there really was one). Now Science reports that yes, there is such a manuscript. But. . .

Science has learned that a paper describing the new findings, already accepted by the Proceedings of the National Academy of Sciences (PNAS), has been put on hold because it directly contradicts another as-yet-unpublished study by a third government agency, the U.S. Centers for Disease Control and Prevention (CDC). That paper, a retrovirus scientist says, has been submitted to Retrovirology and is also on hold; it fails to find a link between the xenotropic murine leukemia virus-related virus (XMRV) and CFS. The contradiction has caused "nervousness" both at PNAS and among senior officials within the Department of Health and Human Services, of which all three agencies are part, says one scientist with inside knowledge.

I'll bet it has! It looks like the positive findings are from Harvey Alter at NIH, and the negative ones are from William Switzer at the CDC. Having two separate government labs blatantly contradict each other - simultaneously, yet - is what everyone seems to be trying to avoid. Sounds to me like each lab is going to have to try the other's protocols before this one gets ironed out. I wouldn't be expecting either paper to appear any time soon, if that's the case.

Update: Well, as it turns out, the Retrovirology paper has been published - so what's holding up PNAS? Might as well get them both out so everyone can compare. . .

Comments (33) + TrackBacks (0) | Category: Biological News | Infectious Diseases

June 29, 2010

Stable Helical Peptides Can Do It All?

Email This Entry

Posted by Derek

Now, this could get quite interesting. A recent paper in PNAS talks about "downsizing" biologically active proteins to much shorter mimics of the alpha-helical parts of their structures. These show a good deal more stability than the parents, and show a sometimes startling amount of biological activity.

The building block for all this is the smallest helical peptide yet reported, a cyclic pentapeptide (KAAAD) closed up as a lactam between residues 1 and 5. Joining two or more of these up gives you more turns, and replacing the alanines gives you plenty of possible mimics of endogenous proteins. An analog of nociceptin turned out to be the most potent agonist at ORL-1 ever described (40 picomolar), and an analog of RSV fusion protein is, in its turn, the most potent inhibitor of that viral fusion process ever found as well.

Meanwhile, the paper states that these constrained peptides were stable in human serum for over 24 hours, in sharp contrast to their uncyclized counterparts, which are degraded rapidly. (Exocyclic amino acids, when present, do get degraded off in a time span of hours, though).

I'm quite amazed by all this, and I'm still processing it myself. I'll let the authors have the last word for now:

"This work is a blueprint for design and utility of constrained α-helices that can substitute for α-helical protein sequences as short as five amino acids. . .The promising conformational and chemical stability suggests many diverse applications in biology as molecular probes, drugs, diagnostics, and possibly even vaccines. The constrained peptides herein offer similar binding affinity and/or function as the proteins from which they were derived, with the same amino acid sequences that confer specificity, while retaining stability and solubility akin to small molecule therapeutics. . ."

Comments (11) + TrackBacks (0) | Category: Biological News

May 20, 2010

A Synthetic Genome; A New Species

Email This Entry

Posted by Derek

As had been widely expected, Craig Venter's team has announced the production of an organism with a synthetic genome. All the DNA in these new mycoplasma cells was made on synthesizer machines (in roughly 6 kb stretches), then assembled, first enzymatically and finally in yeast, into working chromosomes.
[Image: mycoplasma cells]
And we know that they work, because they then transplanted them into mycoplasma and ended up with a new species. The cells grow normally, with the same morphology as wild-type, and sequencing them shows only the synthetic genome - which, interestingly, has several "watermark" sequences imbedded in it, a practice that this team strongly recommends future researchers in this area follow. In this case, there's a coded version of the names of the team members, a URL, and an e-mail address if you manage to decipher things.
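
(For a sense of how readable text ends up inside a genome at all, here's one deliberately naive way to map characters onto base triplets. This is emphatically not the cipher the Venter team used - their scheme is the thing you're invited to decipher - just an illustration of the general idea.)

    import string

    # A naive text-to-DNA "watermark" encoder - NOT the actual Venter cipher
    ALPHABET = string.ascii_uppercase + " .@:/"   # 31 symbols; 4**3 = 64 triplets is plenty
    BASES = "ACGT"

    def encode(text):
        dna = []
        for ch in text.upper():
            idx = ALPHABET.index(ch)              # raises ValueError for unsupported characters
            dna.append(BASES[idx // 16] + BASES[(idx // 4) % 4] + BASES[idx % 4])
        return "".join(dna)

    print(encode("WATERMARK TEXT GOES HERE"))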

Nothing about this process was trivial - the team apparently worked for months on just the last genomic transplantation step until things finally lined up right. But there's been a lot learned by this effort, and the next ones will be easier. I'm not sure if I call this a synthetic organism or not, since the cytoplasm (and all its machinery) was already there. But whatever it is, it sure has a synthetic genome, designed on a screen and built by machine. And it works, and more will surely follow. Will 2010, looking back, be the year that things changed?

Comments (28) + TrackBacks (0) | Category: Biological News

April 30, 2010

Rosetta@Home

Email This Entry

Posted by Derek

Many readers will have heard of Rosetta@Home. It's a distributed-computing approach to protein folding problems, which is certainly an area that can absorb all the floating-point operations you can throw at it. It's run from David Baker's lab at the University of Washington, and has users all over the world contributing.

A reader sends along news that recently the project seems to have come across a good hit in one of their areas, proteins designed to bind to the surface of influenza viruses. It looks like they have one with tight binding to an area of the virus associated with cell entry, so the next step will be to see if this actually prevents viral infection in a cell assay.

At that point, though, I have to step in as a medicinal chemist and ask what the next step after that could be. It won't be easy to turn that into any sort of therapy, as Prof. Baker makes clear himself:

Being able to rapidly design proteins which bind to and neutralize viruses and other pathogens would definitely be a significant step towards being able to control future epidemics. However, in itself it is not a complete solution because there is a problem in making enough of the designed proteins to give to people--each person would need a lot of protein and there are lots of people!

We are also working on designing new vaccines, but the flu virus binder is not a vaccine, it is a virus blocker. Vaccines work by mimicking the virus so your body makes antibodies in advance that can then neutralize the virus if you get infected later. the designed protein, if you had enough of it, should block the flu virus from getting into your cells after you had been exposed; a vaccine cannot do this.

One additional problem is that the designed protein may elicit an antibody response from people who are treated with it. in this case, it could be a one time treatment but not used chronically.

The immune response is definitely a concern, but that phrase "If you had enough of it" is probably the big sticking point. Most proteins don't fare so well when dosed systemically, and infectious disease therapies are notorious for needing whopping blood levels to be effective. At the same time, there's Fuzeon (enfuvirtide), a good-sized peptide drug (26 amino acids) against HIV cell entry. It was no picnic to develop, and its manufacturing was such an undertaking that it may have changed the whole industry, but it is out there.

My guess is that Rosetta@Home is more likely to make a contribution to our knowledge of protein folding, which could be broadly useful. More specifically, I'd think that vaccine design would be a likelier place for the project to come up with something of clinical interest. These sorts of proteins, though, probably have the lowest probability of success. The best I can see coming out of them is more insight into protein-protein interfaces - which is not trivial, for sure, but it's not the next thing to an active drug, either.

Comments (9) + TrackBacks (0) | Category: Biological News | Drug Development | Infectious Diseases

April 29, 2010

Curse of the Plastic Tubes

Posted by Derek

In keeping with the problem discussed here ("sticky containers"), there's a report that a lot of common spectrometric DNA assays may have been affected by leaching of various absorbing contaminants from plastic labware. If the published work is shown relative to control tubes, things should be (roughly) OK, but if not, well. . .who knows? Especially if the experiments were done using the less expensive tubes, which seem to be more prone to emitting gunk.

We take containers for granted in most lab situations, but we really shouldn't. Everything - all the plastics, all the types of glass, all the metals - is capable of causing trouble under some conditions. And it tends to sneak up on us when it happens. (Of course, there are more, well, noticeable problems with plastics in the organic chemistry lab, but that's another story. Watch out for the flying cork rings!)

Comments (12) + TrackBacks (0) | Category: Biological News | Life in the Drug Labs

The Scent of Food Is Enough?

Posted by Derek

Here's something I never knew: odors can regulate lifespan. Well, in fruit flies, anyway - a group at Baylor published results in 2007 showing that exposure to food-derived odors (yeast smells, in the case of Drosophila) partially cancels out the longevity-inducing effects of caloric restriction. Normally fed flies showed no effect.

That 2007 paper identified a specific sensory receptor (Or83b) as modulating the effect of odor on lifespan. Now comes a report that another receptor has been tracked down in this case, the G-protein coupled Gr63a. Flies missing this particular olfactory GPCR no longer show the lifespan sensitivity to yeast odors. This narrows things down. Or83b mutations seem to broadly affect sensory response in flies, but this is a much more specific receptor, just one of a great many similar ones:

"Unlike previous reports involving more general olfactory manipulations, extended longevity via loss of Gr63a occurs through a mechanism that is likely independent of dietary restriction. We do, however, find that Gr63a is required for odorants from live yeast to affect longevity, suggesting that with respect to lifespan, CO2 is an active component of this complex odor. Because Gr63a is expressed in a highly specific population of CO2-sensing neurons (the ab1C neurons) that innervate a single glomerulus in the antennal lobe (the V glomerulus), these data implicate a specific sensory cue and its associated neurosensory circuit as having the ability to modulate fly lifespan and alter organismal stress response and physiology. Our results set the stage for the dissection of more complex neurosensory and neuroendocrine circuits that modulate aging in Drosophila. . ."

It's going to be very interesting to follow that neuronal pathway - I've no idea where it will lead, but we're bound to learn something worthwhile. To make a wild generalization straight up to humans, this makes me wonder about people who are practicing caloric restriction on themselves - they're still exposed to food odors all the time, right? Does the same reversal apply? For me, I think that the scent of barbecue and fried catfish might be enough to do it right there, but keep in mind that I'm from Arkansas. Your mileage may vary.

Comments (10) + TrackBacks (0) | Category: Aging and Lifespan | Biological News

April 28, 2010

Homemade Morphine?

Posted by Derek

I wrote here some time ago about human cells actually making their own morphine - real morphine, the kind that everyone thought was only produced in poppy plants. Now there's a paper in PNAS where various deuterium-labeled precursors of morphine were dosed in rats, and in each case the animals converted them to the next intermediate in the known biosynthesis. The yields were small, since each compound was metabolically degraded as well, but it appears that rats are capable of carrying out all the steps of a morphine synthesis from at least the isoquinoline compound tetrahydropapaveroline (THP).

And that's pretty interesting, because it's also been established that rats have small amounts of THP in their brains and other tissues - as do humans. And humans, it appears, almost always have trace amounts of morphine in the urine - which leads one to think that our bodies may well, in fact, be making it themselves.

Why that's happening is quite another question, and where the THP comes from is another one. Working under the assumption that all this machinery is not just there for the heck of it, you also wonder if this system could be the source of one or more drug targets (I spoke about that possibility here). What you probably don't want to assume is that these targets would necessarily have to do with pain. We still don't know if there's room to work in here. But it's worth thinking about, if for no other reason than to remind ourselves that there are plenty of things going on inside the human body that we don't understand at all.

Comments (5) + TrackBacks (0) | Category: Biological News | The Central Nervous System

April 27, 2010

Masses of Data, In Every Sample

Posted by Derek

I've said several times that I think that mass spectrometry is taking over the analytical world, and there's more evidence of that in Angewandte Chemie. A group at Justus Liebig University in Giessen has built what has to be the finest imaging mass spec I've ever seen. It's a MALDI-type machine, which means that a small laser beam does the work of zapping ions off the surface of the sample. But this one has better spatial resolution than anything reported so far, and they've hooked it up to a very nice mass spec system on the back end. The combination looks to me like something that could totally change the way people do histology.

For the non-specialist readers in the audience, mass spec is a tremendous workhorse of analytical chemistry. Basically, you use any of a whole range of techniques (lasers, beams of ions, electric charges, etc.) to blast individual molecules (or their broken parts!) down through a chamber and determine how heavy each one is. Because molecular weights are so precise, this lets you identify a lot of molecules by both their whole weights - their "molecular ions" - and by their various fragments. Imagine some sort of crazy disassembler machine that rips things - household electronic gear, for example - up into pieces and weighs every chunk, occasionally letting a whole untouched unit through. You'd see the readouts and say "Ah-hah! Big one! That was a plasma TV, nothing else is up in that weight range. . .let's see, that mix of parts coming off it means that it must have been a Phillips model so-and-so; they always break up like that, and this one has the heavier speakers on it." But mass spec isn't so wasteful, fortunately: it doesn't take much sample, since there are such gigantic numbers of molecules in anything large enough to see or weigh.
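To make that analogy slightly more concrete, here's a minimal sketch of the exact-mass matching step - observed ion mass in, candidate identities out. The reference list and the ppm tolerance below are just illustrative numbers I've picked, not any real instrument library:

```python
# A minimal sketch of exact-mass matching: given an observed ion mass,
# find candidates whose monoisotopic [M+H]+ mass falls within tolerance.
# The candidate list and tolerance are illustrative, not a real library.

CANDIDATES = {
    "caffeine [M+H]+": 195.0877,
    "glucose [M+H]+": 181.0707,
    "cholesterol [M+H]+": 387.3621,
}

def match(observed_mz, tolerance_ppm=5.0):
    """Return (name, ppm error) for every candidate within the ppm window."""
    hits = []
    for name, mz in CANDIDATES.items():
        ppm_error = abs(observed_mz - mz) / mz * 1e6
        if ppm_error <= tolerance_ppm:
            hits.append((name, round(ppm_error, 2)))
    return hits

print(match(195.0878))   # [('caffeine [M+H]+', 0.51)]
```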
[Image: imaging mass spec of a mouse pituitary section alongside a toluidine blue stain]
Take a look at this image. That's a section of a mouse pituitary gland - on the right is a standard toluidine-blue stain, and on the left is the same tissue slice as imaged (before staining) by the mass spec. The green and blue colors are two different mass peaks (826.5723 and 848.5566, respectively), which correspond to different types of phospholipid from the cell membranes. (For more on such profiling, see here). The red corresponds to a mass peak for the hormone vasopressin. Note that the difference in phospholipid peaks completely shows the difference between the two lobes of the gland (and also shows an unnamed zone of tissue around the posterior lobe, which you can barely pick up in the stained preparation). The vasopressin is right where it's supposed to be, in the center of the posterior lobe.

One of the most interesting things about this technique is that you don't have to know any biomarkers up front. The mass spec blasts away at each pixel's worth of tissue in the sample and collects whatever pile of varied molecular-weight fragments comes off. Then the operator is free to choose ions that show useful contrasts and patterns (I can imagine software algorithms that would do the job for you - pick two parts of an image and have the machine search for whatever differentiates them; see the sketch below). For instance, it's not at all clear (yet) why those two different phospholipid ions do such a good job of differentiating the pituitary lobes - what particular phospholipids they correspond to, why the different tissues have this different profile, and so on. But they do, clearly, and you can use that to your advantage.
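That sort of algorithm is easy enough to rough out. Here's a hedged sketch, assuming the imaging run hands you a per-pixel array of intensities across a set of m/z bins: you mark two regions and rank the bins by a simple separation score. The data layout and the scoring are my own assumptions, not anything from the paper or from a vendor's software:

```python
# Sketch: rank m/z channels by how well they distinguish two user-chosen
# regions of an imaging-MS dataset. Data layout and scoring are assumptions.
import numpy as np

def ranked_channels(image, region_a, region_b, top_n=5):
    """
    image: array of shape (rows, cols, n_mz_bins), intensity per pixel per bin.
    region_a, region_b: boolean masks of shape (rows, cols).
    Returns the top_n bin indices with the largest separation between regions,
    scored by a crude effect size (difference of means over summed std devs).
    """
    a = image[region_a]            # (n_pixels_a, n_mz_bins)
    b = image[region_b]
    score = np.abs(a.mean(axis=0) - b.mean(axis=0)) / (a.std(axis=0) + b.std(axis=0) + 1e-9)
    return np.argsort(score)[::-1][:top_n]

# Toy example: 20x20 image, 100 m/z bins, with bin 42 elevated in the left half
rng = np.random.default_rng(0)
img = rng.random((20, 20, 100))
img[:, :10, 42] += 5.0
left = np.zeros((20, 20), dtype=bool)
left[:, :10] = True
print(ranked_channels(img, left, ~left))   # bin 42 should come out on top
```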

As this technique catches on, I expect to see large databases of mass-based "contrast settings" develop as histologists find particularly useful readouts. (Another nice feature is that one can go back to previously collected data and re-process for whatever interesting things are discovered later on). And each of these suggests a line of research all its own, to understand why the contrast exists in the first place.
[Image: optical and imaging mass spec views of ductal carcinoma in situ]
The second image shows ductal carcinoma in situ. On the left is an optical image, and about all you can say is that the darker tissue is the carcinoma. The right-hand image is colored by green (mass of 529.3998) and red (mass of 896.6006), which correspond to healthy and cancerous tissue, respectively (and again, we don't know why, yet). But look closely and you can see that some of the dark tissue in the optical image doesn't actually appear to be cancer - and some of the dark spots in the lighter tissue are indeed small red cells of trouble. We may be able to use this technology to diagnose cancer subtypes more accurately than ever before - the next step will be to try this on a number of samples from different patients to see how much these markers vary. I also wonder if it's possible to go back to stored tissue samples and try to correlate mass-based markers with the known clinical outcomes and sensitivities to various therapies.

I'd also be interested in knowing if this technique is sensitive enough to find small-molecule drugs after dosing. Could we end up doing pharmacokinetic measurements on a histology-slide scale? Ex vivo, could we possibly see uptake of our compounds once they're applied to a layer of cells in tissue culture? Oh, mass spec imaging has always been a favorite of mine, and seeing this level of resolution just brings on dozens of potential ideas. I've always had a fondness for label-free detection techniques, and for methods that don't require you to know too much about the system before being able to collect useful data. We'll be hearing a lot more about this, for sure.

Update: I should note that drug imaging has certainly been accomplished through mass spec, although it's often been quite the pain in the rear. It's clearly a technology that's coming on, though.

Comments (9) + TrackBacks (0) | Category: Analytical Chemistry | Biological News | Cancer | Drug Assays

April 8, 2010

Let's Sequence These Guys

Posted by Derek

A very weird news item: multicellular organisms that appear to be able to live without oxygen. They're part of the little-known (and only recently codified) phylum Loricifera, and these particular organisms were collected at the bottom of the Mediterranean, in a cold, anoxic, hypersaline environment.

They have no mitochondria - after all, they don't have any oxygen to work with. Instead, they have what look like hydrogenosome organelles, producing hydrogen gas and ATP from pyruvate. I'm not sure how large an organism you can run off that sort of power source, since it looks like you only get one ATP per pyruvate (as opposed to two via the Krebs cycle), but the upper limit has just been pushed past a significant point.

Comments (3) + TrackBacks (0) | Category: Biological News | General Scientific News | Life As We (Don't) Know It

ACC2: Great Metabolic Target, Or Total Bust?

Posted by Derek

For people who've done work on metabolic disease, this paper in PNAS may come as a surprise, although there was a similar warning in January of this year. Acetyl-CoA carboxylase 2 (ACC2) has been seen for some years as a target in that area. It produces malonyl-CoA, which is a very important intermediate and signaling molecule in fatty acid metabolism (and other places as well). A number of drug companies have taken a crack at getting good chemical matter (I'm no stranger to it myself, actually). A lot of the interest was sparked by reports of the gene knockout mice, which seem to have healthy appetites but put on no weight. The underlying reason was thought to be that fatty acid oxidation had been turned up in their muscle and adipose tissue - and a new way to burn off excess lipids sounded like something that a lot of people with excess weight and/or dyslipidemia might be able to use. What's more, the ACC2 knockout mice also seemed to be protected from developing insulin resistance, the key metabolic problem in type II diabetes. An ACC2 inhibitor sounds like just the thing.

Well, this latest paper sows confusion all over that hypothesis. The authors report having made some selective ACC2 knockout mouse strains of their own. If the gene is inactivated only in muscle tissue, the animals show no differences at all in body weight, composition, or food intake compared to control mice. What's more, when they went back and inactivated ACC2 in the whole animal, they found the same no-effect result, whether the animals were fed on standard chow or a high-fat diet. The muscle tissue in both cases showed no sign of elevated fatty acid oxidation. The authors state drily that "The limited impact of Acc2 deletion on energy balance raises the possibility that selective pharmacological inhibition of Acc2 for the treatment of obesity may be ineffective."

Yes, yes, it does. There's always the possibility that some sort of compensating mechanism kicked in as the knockout animals developed, something that might not be available if you just stepped into an adult animal with an inhibiting drug. That's always the nagging doubt when you see no effect in a knockout mouse. But considering that those numerous earlier reports of knockout mice showed all kinds of interesting effects, you have to wonder just what the heck is going on here.

Well, the authors of the present paper are wondering the same thing, as are, no doubt, the authors of that January Cell Metabolism work. They saw no differences in their knockout animals, either, which started the rethinking of this whole area. (To add to the confusion, those authors reported seeing real differences in fatty acid oxidation in the muscle tissue of their animals, even though the big phenotypic changes couldn't be replicated). Phrases like "In stark contrast to previously published data. . ." make their appearance in this latest paper.

The authors do suggest one possible graceful way out. The original ACC2 knockout mice were produced somewhat differently, using a method that could have left production of a mutated ACC2 protein intact (without its catalytic domain). They suggest that this could possibly have some sort of dominant-negative effect. If there's some important protein-protein interaction that was wiped out in the latest work, but left intact in the original report, that might explain things - and if that's the case, then there still might be room for a small molecule inhibitor to work. But it's a long shot.

The earlier results originated from the lab of Salih Wakil at Baylor (who filed a patent on the animals), and he's still very much active in the area. One co-author, Gerry Shulman at Yale, actually spans both reports of ACC2 knockout mice - he was in on one of the Wakil papers, and on this one, too. His lab is very well known in diabetes and metabolic research, and while I'd very much like to hear his take on this whole affair, I doubt if we're going to see that in public.

Comments (14) + TrackBacks (0) | Category: Biological News | Diabetes and Obesity

April 5, 2010

Rapamycin for Alzheimer's?

Posted by Derek

Last summer a paper was published (PDF) showing rapamycin dosing appeared to lengthen lifespan in mice. (In that second link, I went more into the background of rapamycin and TOR signaling, for those who are interested). Now comes word that it also seems to prevent cognitive deficits in a mouse model of Alzheimer's.

The PDAPP mice have a mutation in their amyloid precursor protein associated with early-onset familial Alzheimer's in humans, and it's a model that's been used for some years now in the field. It's not perfect, but it's not something you can ignore, either, and the effects of rapamycin treatment do seem to be significant. (The paper uses the same dose that was found to extend lifespan). The hypothesis is that rapamycin allowed increased autophagy (protein digestion) to take place in the brain, helping to clear out amyloid plaques.

What I also found interesting, though, was the rapamycin-fed non-transgenic control animals. In each of the various memory tests, they seem to show a trend toward increased performance, although the differences don't quite reach significance. This makes me wonder what the effects in humans might be, Alzheimer's or not. After that lifespan report last year, it wouldn't surprise me to find out that some people are taking the stuff anyway, but that's not going to be anywhere near enough of a controlled setting for us to learn anything.

This report is definitely going to start a lot of people thinking about experimenting with rapamycin for Alzheimer's - there are a lot of desperate patients and relatives out there. But together with that lifespan paper, it might also start some people thinking about it whether they're worried about Alzheimer's or not.

Comments (16) + TrackBacks (0) | Category: Aging and Lifespan | Alzheimer's Disease | Biological News

Sickened by an Engineered Virus?

Posted by Derek

What to make of the case of Becky McClain? She's a former Pfizer scientist who sued the company, claiming that she had been injured by exposure to engineered biological materials at work. She's just won her case in court, although Pfizer may well appeal the verdict. It's important to note that her most damaging claim, that the company engaged in willful misconduct, was thrown out at the beginning. The jury found that Pfizer had violated whistleblower laws and wrongfully terminated McClain as an employee.

But what I'd most like to know is whether the claim at the core of her case is true, and I don't think anyone knows that yet. McClain says that she was exposed to embryonic stem cells and to various engineered lentiviruses (due to poor lab technique on the part of co-workers, if I'm following the story correctly), and that this gave her a chronic, debilitating condition that has led to intermittent paralysis. More specifically, the theory that I've seen her legal team floating is that the lentivirus caused her tissues to express a new potassium channel, and that she has improved after taking "massive doses" of potassium. (Query: how massive are we talking here?).

Now, that's a potentially alarming thing. But that should also be potentially subject to scientific proof. This trial didn't address any of these issues, and McClain has been unable to get any traction with the court system or with OSHA on these claims. Looking around the internet, you find that some people are convinced that this is a cover-up, but (having seen OSHA in action) I'm more likely to think that if you can't get them to bite, then you probably don't have much for them to get their teeth into. I also note that the symptoms that have been described in this case are similar to many that have been ascribed in the past to psychosomatic illness. I can't say that that's what's going on here, of course, but it does complicate the issue.

The other problem I have is that such human illness from a biotech viral vector is actually a very rare event, with every case that I can think of being a deliberate attempt at gene therapy. Industry scientists don't work with human-infectious viruses without good cause, but there's still an awful lot of work that goes on with agents that most certainly can infect people (hepatitis and so on). And although I'm sure that there have been cases (accidental needle sticks and the like), I don't know of any research infections with wild-type viruses, much less engineered ones.

Well, we may yet hear more about this, and I'll rethink the issue if more information becomes available. But for now, I have to say, whatever the other issues in the case, I'm inclined to doubt the engineered-viral-infection part of this story.

Comments (18) + TrackBacks (0) | Category: Biological News

April 1, 2010

What Do Nanoparticles Really Look Like?

Posted by Derek

We're all going to be hearing a lot about nanoparticles in the next few years (some may feel as if they've already heard quite enough, but there's nothing to be done about that). The recent report of preliminary siRNA results using them as a delivery system will keep things moving along with even more interest. So it's worth checking out this new paper, which illustrates how we're going to have to think about these things.

The authors show that it's not necessarily the carefully applied coat proteins of these nanoparticles that are the first thing a cell notices. Rather, it's the second sphere of endogenous proteins that end up associated with the particle, which apparently can be rather specific and persistent. The authors make their case with admirable understatement:

The idea that the cell sees the material surface itself must now be re-examined. In some specific cases the cell receptor may have a higher preference for the bare particle surface, but the time scale for corona unbinding illustrated here would still typically be expected to exceed that over which other processes (such as nonspecific uptake) have occurred. Thus, for most cases it is more likely that the biologically relevant unit is not the particle, but a nano-object of specified size, shape, and protein corona structure. The biological consequences of this may not be simple.

Update: fixed this post by finally adding the link to the paper!

Comments (4) + TrackBacks (0) | Category: Biological News | Pharmacokinetics

March 30, 2010

GenVec's Pancreatic Cancer Therapy Crashes

Posted by Derek

Another promising Phase II oncology idea goes into the trench in Phase III: GenVec has been working on a gene-therapy approach ("TNFerade") to induce TNF-alpha expression in tumors. That's not a crazy idea, by any means, although (as with all attempts at gene therapy) getting it to work is extremely tricky.

And so it has proved in this case. It's been a long, hard process finding that out, too. Over the years, the company has looked at TNFerade for metastatic melanoma, soft tissue sarcoma, and other cancers. They announced positive data back in 2001, and had some more encouraging news on pancreatic cancer in 2006 (here's the ASCO abstract on that one). But last night, the company announced that an interim review of the Phase III trial data showed that the therapy was not going to make any endpoint, and the trial was discontinued. Reports are that TNFerade is being abandoned entirely.

This is bad news, of course. I'd very much like gene therapy to turn into a workable mode of treatment, and I'd very much like for people with advanced pancreatic cancer to have something to turn to. (It's truly one of the worst diagnoses in oncology, with a five-year survival rate of around 5%). A lot of new therapeutic ideas have come up short against this disease, and as of yesterday, we can add another one to the list. And we can add another Promising in Phase II / Nothing in Phase III drug to the list, too, the second one this week. . .

Comments (8) + TrackBacks (0) | Category: Biological News | Cancer | Clinical Trials

March 25, 2010

Nanoparticles and RNA: Now In Humans

Posted by Derek

In recent years, readers of the top-tier journals have been bombarded with papers on nanotechnology as a possible means of drug delivery. At the same time, there's been a tremendous amount of time and money put into RNA-derived therapies, trying to realize the promise of RNA interference for human therapies. Now we have what I believe is the first human data combining both approaches.

Nature has a paper from Caltech, UCLA, and several other groups with the first data on a human trial of siRNA delivered through targeted nanoparticles. This is only the second time siRNA has been tried systemically in humans at all. Most of the previous clinical work has involved direct injection of various RNA therapies into the eye (which is a much less hostile environment than the bloodstream), but in 2007, a single Gleevec-resistant leukaemia patient was dosed in a nontargeted fashion.

In this study, metastatic melanoma patients, a population that is understandably often willing to put themselves out at the edge of clinical research, were injected with engineered nanoparticles from Calando Pharmaceuticals, containing siRNA against the ribonucleotide reductase M2 (RRM2) target, which is known to be involved in malignancy. The outside of the particles contained a protein ligand to target the transferrin receptor, an active transport system known to be upregulated in tumor cells. And this was to be the passport to deliver the RNA.

A highly engineered system like this addresses several problems at once: how do you keep the RNA you're dosing from being degraded in vivo? (Wrap it up in a polymer - actually, two different ones in spherical layers). How do you deliver it selectively to the tissue of interest? (Coat the outside with something that tumor cells are more likely to recognize). How do you get the RNA into the cells once it's arrived? (Make that recognition protein something that gets actively imported across the cell membrane, dragging everything else along with it). This system had been tried out in models all the way up to monkeys, and in each case the nanoparticles could be seen inside the targeted cells.

And that was the case here. The authors report biopsies from three patients, pre- and post-dosing, that show uptake into the tumor cells (and not into the surrounding tissue) in two of the three cases. What's more, they show that a tissue sample has decreased amounts of both the targeted messenger RNA and the subsequent RRM2 protein. Messenger RNA fragments showed that this reduction really does seem to be taking place through the desired siRNA pathway (there's been a lot of argument over this point in the eye therapy clinical trials).

It should be noted, though, that this was only shown for one of the patients, in which the pre- and post-dosing samples were collected ten days apart. In the other responding patient, the two samples were separated by many months (making comparison difficult), and the patient that showed no evidence of nanoparticle uptake also showed, as you'd figure, no differences in their RRM2. Why Patient A didn't take up the nanoparticles is as yet unknown, and since we only have these three patients' biopsies, we don't know how widespread this problem is. In the end, the really solid evidence is again down to a single human.

But that brings up another big question: is this therapy doing the patients any good? Unfortunately, the trial results themselves are not out yet, so we don't know. That two-out-of-three uptake rate, although a pretty small sample, could well be a concern. The only between-the-lines inference I can get is this: the best data in this paper is from patient C, who was the only one to do two cycles of nanoparticle therapy. Patient A (who did not show uptake) and patient B (who did) had only one cycle of treatment, and there's probably a very good reason why. These people are, of course, very sick indeed, so any improvement will be an advance. But I very much look forward to seeing the numbers.

Comments (8) + TrackBacks (0) | Category: Biological News | Cancer | Clinical Trials | Pharmacokinetics

March 19, 2010

A Bit More Garage Biotech

Posted by Derek

Here's the sort of thing we'll be seeing more and more of - on the whole, I think it's a good development, but it's certainly possible that one's mileage could vary:

Ginkgo’s BioBrick Assembly Kit includes the reagents for constructing BioBrick parts, which are nucleic acid sequences that encode a specific biological function and adhere to the BioBrick assembly standard. The kit, which includes the instructions for putting those parts together, sells for $235 through the New England BioLabs, an Ipswich, MA-based supplier of reagents for the life sciences industry.

Shetty didn’t release any specific sales figures for the kit, but said its users include students, researchers, and industrial companies. The kit was also intended to be used in the International Genetically Engineered Machine competition (iGEM), in Cambridge, MA. The undergraduate contest, co-launched by Knight, challenges students teams to use the biological parts to build systems and operate them in living cells.
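For readers who haven't run into BioBricks, the idea is that every part carries the same standard flanking sequences, so any two parts can be joined by the same procedure, leaving a short "scar" at the junction - and the product is itself a standard part, ready for the next round of assembly. Here's a toy sketch of that composition step; the prefix, suffix, and scar strings are placeholders of my own, not the actual BioBrick standard sequences:

```python
# Toy sketch of standardized part assembly: every part gets the same flanking
# sequences, and joining two parts leaves a fixed scar between them.
# PREFIX, SUFFIX, and SCAR are placeholders, not the real BioBrick standard.

PREFIX = "GAATTC"   # placeholder 5' standard flank
SUFFIX = "CTGCAG"   # placeholder 3' standard flank
SCAR   = "TACTAG"   # placeholder junction left behind by assembly

def make_part(raw_sequence):
    """Wrap a raw sequence in the standard flanks so it becomes a 'part'."""
    return PREFIX + raw_sequence + SUFFIX

def assemble(part_a, part_b):
    """Join two standardized parts; the result is itself a standardized part."""
    core_a = part_a[len(PREFIX):-len(SUFFIX)]
    core_b = part_b[len(PREFIX):-len(SUFFIX)]
    return PREFIX + core_a + SCAR + core_b + SUFFIX

promoter = make_part("TTGACATATAAT")          # toy promoter sequence
reporter = make_part("ATGAGTAAAGGAGAAGAA")    # start of a toy reporter ORF
device   = assemble(promoter, reporter)       # composite part, same format
print(device)
```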

Comments (4) + TrackBacks (0) | Category: Biological News

March 17, 2010

BioTime's Cellular Aging Results

Posted by Derek

A small company called BioTime has gotten a lot of attention in the last couple of days after a press release about cellular aging. To give you an idea of the company's language, here's a quote:


"Normal human cells were induced to reverse both the "clock" of differentiation (the process by which an embryonic stem cell becomes the many specialized differentiated cell types of the body), and the "clock" of cellular aging (telomere length)," BioTime reports. "As a result, aged differentiated cells became young stem cells capable of regeneration."

Hey, that sounds good to me. But when I read their paper in the journal Regenerative Medicine, it seems to be interesting work that's a long way from application. Briefly - and since I Am Not a Cell Biologist, it's going to be brief - what they're looking at is telomere length in various stem cell lines. Telomere length is famously correlated with cellular aging - below a certain length, senescence sets in and the cells don't divide any more.

What's become clear is that a number of "induced pluripotent" cell lines have rather short telomeres as compared to their embryonic stem cell counterparts. You can't just wave a wand and get back the whole embryonic phenotype; their odometers still show a lot of wear. The BioTime people induced in such cells a number of genes thought to help extend and maintain telomeres, in an attempt to roll things back. And they did have some success - but only by brute force.

The exact cocktail of genes you'd want to induce is still very much in doubt, for one thing. And in the cell line that they studied, five of their attempts quickly shed telomere length back to the starting levels. One of them, though, for reasons that are completely unclear, maintained a healthy telomere length over many cell divisions. So this, while a very interesting result, is still only that. It took place in one particular cell line, in ways that (so far) can't be controlled or predicted, and the practical differences between this one clone and other similar cell lines still aren't clear (although you'd certainly expect some). It's worthwhile early-stage research, absolutely - but not, to my mind, worth this.

Comments (4) + TrackBacks (0) | Category: Aging and Lifespan | Biological News | Business and Markets

March 15, 2010

Stem Cell Politics

Posted by Derek

There have been complaints that something is going wrong in the publication of stem cell research. This isn't my field, so I don't have a lot of inside knowledge to share, but there appear to have been a number of researchers charging that journals (and their reviewers) are favoring some research teams over others:

The journal editor decides to publish the research paper usually when the majority of reviewers are satisfied. But professors Lovell-Badge and Smith believe that increasingly some reviewers are sending back negative comments or asking for unnecessary experiments to be carried out for spurious reasons.

In some cases they say it is being done simply to delay or stop the publication of the research so that the reviewers or their close colleagues can be the first to have their own research published.

"It's hard to believe except you know it's happened to you that papers have been held up for months and months by reviewers asking for experiments that are not fair or relevant," Professor Smith said.

You hear these sorts of complaints a lot - everyone who's had a paper turned down by a high-profile journal is a potential customer for the idea that there's some sort of backroom dealing going on for the others who've gotten in. But just because such accusations are thrown around frequently doesn't mean that they're never true. I hate to bring the topic up again, but the "Climategate" leaks illustrate just how this sort of thing can be done. Groups of researchers really can try to keep competing work from being published. I just don't know if it's happening in the stem cell field or not.

Comments (16) + TrackBacks (0) | Category: Biological News | The Dark Side | The Scientific Literature

March 12, 2010

The PSA Test for Prostate Cancer: Useless

Posted by Derek

The discoverer of the prostate-specific antigen (Richard Ablin) has a most interesting Op-Ed in the New York Times. He's pointing out what people should already know: that using PSA as a screen for prostate cancer is not only useless, but actually harmful.

The numbers just aren't there, and Ablin is right to call it a "hugely expensive public health disaster". Some readers will recall the discussion here of a potential Alzheimer's test, which illustrates some of the problems that diagnostic screens can have. But that was for a case where a test seemed as if it might be fairly accurate (just not accurate enough). In the case of PSA, the link between the test and the disease hardly exists at all, at least for the general population. The test appears to have very little use in detecting prostate cancer, and early detection itself is notoriously unreliable as a predictor of outcomes in this disease.
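The general problem with screening a mostly healthy population is worth spelling out with a little arithmetic. The numbers below are round, made-up figures for illustration - they are not the actual PSA test characteristics - but they show how even a reasonable-sounding test ends up producing mostly false positives when the disease you're hunting for is uncommon in the screened group:

```python
# Back-of-the-envelope screening math with illustrative numbers
# (these are NOT the actual PSA test characteristics).

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Fraction of positive tests that reflect true disease (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical screen: 80% sensitive, 75% specific, 3% of screened men
# with clinically significant disease.
ppv = positive_predictive_value(0.80, 0.75, 0.03)
print(f"{ppv:.0%} of positives are real")   # roughly 9% - the rest are false alarms
```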

The last time I had blood work done, I made a point of telling the nurse that she could check the PSA box if she wanted to, but I would pay no attention to the results. (I'd already come across Donald Berry's views on the test, and he's someone whose word I trust on biostatistics). I'd urge other male readers to do the same.

Comments (22) + TrackBacks (0) | Category: Biological News | Cancer

Garage Biotech

Posted by Derek

Freeman Dyson has written about his belief that molecular biology is becoming a field where even basement tinkerers can accomplish things. Whether we're ready for it or not, biohacking is on its way. The number of tools available (and the amount of surplus equipment that can be bought) have him imagining a "garage biotech" future, with all the potential, for good and for harm, that that entails.

Well, have a look at this garage, which is said to be somewhere in Silicon Valley. I don't have any reason to believe the photos are faked; you could certainly put your hands on this kind of equipment very easily in the Bay area. The rocky state of the biotech industry just makes things that much more available. From what I can see, that's a reasonably well-equipped lab. If they're doing cell culture, there needs to be some sort of incubator around, and presumably a -80 degree freezer, but we don't see the whole garage, do we? I have some questions about how they do their air handling and climate control (although that part's a bit easier in a California garage than it would be in a Boston one). There's also the issue of labware and disposables. An operation like this does tend to run through a goodly amount of plates, bottles, pipet tips and so on, but I suppose those are piled up on the surplus market as well.

But what are these folks doing? The blog author who visited the site says that they're "screening for anti-cancer compounds". And yes, it looks as if they could be doing that, but the limiting reagent here would be the compounds. Cells reproduce themselves - especially tumor lines - but finding compounds to screen, that must be hard when you're working where the Honda used to be parked. And the next question is, why? As anyone who's worked in oncology research knows, activity in a cultured cell line really doesn't mean all that much. It's a necessary first step, but only that. (And how many different cell lines could these people be running?)

The next question is, what do they do with an active compound when they find one? The next logical move is activity in an animal model, usually a xenograft. That's another necessary-but-nowhere-near-sufficient step, but I'm pretty sure that these folks don't have an animal facility in the basement, certainly not one capable of handling immunocompromised rodents. So put me down as impressed, but puzzled. The cancer-screening story doesn't make sense to me, but is it then a cover for something else? What?

If this post finds its way to the people involved, and they feel like expanding on what they're trying to accomplish, I'll do a follow-up. Until then, it's a mystery, and probably not the only one of its kind out there. For now, I'll let Dyson ask the questions that need to be asked, from that NYRB article linked above:

If domestication of biotechnology is the wave of the future, five important questions need to be answered. First, can it be stopped? Second, ought it to be stopped? Third, if stopping it is either impossible or undesirable, what are the appropriate limits that our society must impose on it? Fourth, how should the limits be decided? Fifth, how should the limits be enforced, nationally and internationally? I do not attempt to answer these questions here. I leave it to our children and grandchildren to supply the answers.

Comments (42) + TrackBacks (0) | Category: Biological News | Drug Assays | General Scientific News | Regulatory Affairs | Who Discovers and Why

March 9, 2010

A GSK/Sirtris Wrap-Up

Posted by Derek

Nature Biotechnology weighs in on the GSK/Sirtris controversy. They have a lot of good information, and I'm not just saying that because someone there has clearly read over the comments that have showed up to my posts on the subject. The short form:

The controversy over Sirtris drugs reached a tipping point in January with a publication by Pfizer researchers led by Kay Ahn showing that resveratrol activates SIRT1 only when linked to a fluorophore. Although Ahn declined to be interviewed by Nature Biotechnology, a statement issued by Pfizer says the group's findings “call into question the mechanism of action of resveratrol and other reported activators of the SIRT1 enzyme.”

Most experts, however, say it's too soon to write off Sirtris' compounds altogether, assuming they're clinically useful by mechanisms that don't involve sirtuin binding. And for its part, GSK won't concede that Sirtris' small molecules don't bind the targets. In an e-mailed statement, Ad Rawcliffe, head of GSK's WorldWide Business Development group, says, “There is nothing that has happened to date, including the publication [by Pfizer,] that suggests otherwise.”

We'll see if GSK and Sirtris have some more publications ready to silence their detractors. But what will really do that, and what we'll all have to wait for, are clinical results.

Comments (23) + TrackBacks (0) | Category: Aging and Lifespan | Biological News

March 5, 2010

Your Own Personal Bacteria

Posted by Derek

There's a report in Nature on the bacteria found in the human gut that's getting a lot of press today (especially for a paper about, well, bacteria in the human gut). A team at the Beijing Genomics Institute, with many collaborators, has done a large shotgun sequencing effort on gut flora and identified perhaps one thousand different species.

I can well believe it. The book I recommended the other day on bacteria field marks has something to say about that, pointing out that if you're just counting cells, the cells of our body are far outnumbered by the bacteria we're carrying with us. Of course, the bacteria have an advantage, being a thousand times smaller (or more) than our eukaryotic cells, but there's no doubt that we're never alone. In case you're wondering, the average European subject of the study probably carries between 150 and 200 different types of bacteria, so there's quite a bit of person-to-person variability. Still, a few species (mostly Bacteroides varieties) were common to all 124 patients in the study, while the poster child for gut bacteria (E. coli) is only about halfway down the list of the 75 most common organisms. We have some Archaea, too, but they're outnumbered about 100 to 1.

What's getting all the press is the idea that particular mixtures of intestinal bacteria might be contributing to obesity, cancer, Crohn's disease and other conditions. This isn't a new idea, although the new study does provide more data to shore it up (which was its whole purpose, I should add). It's very plausible, too: we already know of an association between Helicobacter and stomach cancer, and it would be surprising indeed if gut bacteria weren't involved with conditions like irritable bowel syndrome or Crohn's. This paper confirms earlier work that such patients do indeed have distinctive microbiota, although it certainly doesn't solve the cause-or-effect tangle that such results always generate.

The connection with obesity is perhaps more of a stretch. You can't argue with thermodynamics. Clearly, people are obese because they're taking in a lot more calories than they're using up, and doing that over a long period. So what do bacteria have to do with that? The only thing I can think of is perhaps setting off inappropriate food cravings. We're going to have to be careful with that cause and effect question here, too.

One problem I have with this work, though, is the attitude of the lead author on the paper, Wang Jun. In an interview with Reuters, he makes a very common mistake for an academic: assuming that drug discovery and treatment is the easy part. After all, the tough work of discovery has been done, right?

"If you just tackle these bacteria, it is easier than treating the human body itself. If you find that a certain bug is responsible for a certain disease and you kill it, then you kill the disease," Wang said

For someone who's just helped sequence a thousand of them, Wang doesn't have much respect for bacteria. But those of us who've tried to discover drugs against them know better. Where are these antibiotics that kill single species of bacteria? No such thing exists, to my knowledge. To be sure, we mostly haven't looked, since the need is for various broader-spectrum agents, but it's hard to imagine finding a compound that would kill off one Clostridium species out of a bunch. And anyway, bacteria are tough. Even killing them off wholesale in a human patient can be very difficult.

Even if we magically could do such things, there's the other problem that we have no idea of which bacterial strains we'd want to adjust up or down. The Nature paper itself is pretty good on this topic, emphasizing that we really don't know what a lot of these bacteria are doing inside us and how they fit into what is clearly a very complex and variable ecosystem. A look at the genes present in the samples shows the usual common pathways, then a list that seem to be useful for survival in the gut (adhesion proteins, specific nutrient uptake), and then a massive long tail of genes that do we know not what nor why. Not only do we not know what's happening on other planets, or at the bottom of our own oceans, we don't even know what's going on in our own large intestines. It's humbling.

Dr. Wang surely realizes this; I just wish he'd sound as if he does.

Comments (25) + TrackBacks (0) | Category: Biological News | Diabetes and Obesity | Infectious Diseases

March 2, 2010

The Plasmid Committee Will See You Now

Posted by Derek

From Nature comes word of a brainlessly restrictive new law that's about to pass in Turkey. The country started out trying to get in line with EU regulations on genetically-modified crops, and ended up with a bill that forbids anyone to modify the DNA of any organism at all - well, unless you submit the proper paperwork, that is:

. . .Every individual procedure would have to be approved by an inter-ministerial committee headed by the agriculture ministry, which is allowed 90 days to consider each application with the help of experts.

The committee would be responsible for approving applications to import tonnes of GM soya beans for food — but also for every experiment involving even the use of a standard plasmid to transfer genes into cells. Work with universally used model organisms, from mice and zebrafish to fruitflies and bacteria, would be rendered impossible. Even if scientists could afford to wait three months for approval of the simplest experiment, the committee would be overwhelmed by the number of applications. One Turkish scientist who has examined the law estimates that his lab alone would need to submit 50 or so separate applications in a year.

It's no doubt coming as a surprise to them that biologists modify the DNA of bacteria and cultured mammalian cells every single day of the week. Actually, it might come as a surprise to many members of the public, too - we'll see if this becomes a widespread political issue or not. . .

Comments (9) + TrackBacks (0) | Category: Biological News | Regulatory Affairs

February 18, 2010

Biology By the Numbers

Posted by Derek

I've been meaning to write about this paper in PNAS for a while. The authors (from Caltech and the Weizmann Institute) have set up a new web site and are calling for a more quantitative take on biological questions. They say that modern techniques are starting to yield meaningful numbers, and that we're getting to the point where this perspective can be useful. The site, BioNumbers, provides ready access to data of this sort, and it's well worth some time just for sheer curiosity's sake.

But there's more than that at work here. To pick an example from the paper, let's say that you take a single E. coli bacterium and put it into a tube of culture medium, with only glucose as a carbon source. Now, think about what happens when this cell starts to grow and divide, but think like a chemist. What's the limiting reagent here? What's the rate-limiting step? Using the estimates for the size of a bacterium, its dry mass, a standard growth rate, and so on, you can arrive at a rough figure of about two billion sugar molecules needed per cell division.

Of course, bacteria aren't made up of glucose molecules. How much of this carbon got used up just to convert it to amino acids and thence to proteins (the biggest item on the ledger by far, it turns out), to lipids, nucleic acids, and so on? What, in other words, is the energetic cost of building a bacterium? The estimate is about four billion ATPs needed. Compare that to those two billion sugar molecules, consider that you can get up to 30 ATPs per sugar under aerobic conditions, and you can see that there's a ten- to twentyfold mismatch here.

Where's all the extra energy going? The best guess is that a lot of it is used up in keeping the cell membrane going (and keeping its various concentration potentials as unbalanced as they need to be). What's interesting is that a back-of-the-envelope calculation can quickly tell you that there's likely to be some other large energy requirement out there that you may not have considered. And here's another question that follows: if the cell is growing with only glucose as a carbon source, how many glucose transporters does it need? How much of the cell membrane has to be taken up by them?

Well, at the standard generation time in such media of about forty minutes, roughly 10 to the tenth carbon atoms need to be brought in. Glucose transporters work at a top speed of about 100 molecules per second. Compare the actual surface area of the bacterial cell with the estimated size of the transporter complex. (That's about 14 square nanometers, if you're wondering, and thinking of it in those terms gives you the real flavor of this whole approach). At six carbons per glucose, then, it turns out that roughly 4% of the cell surface must be taken up by glucose transporters.
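If you want to play along at home, the arithmetic in the last few paragraphs fits in a handful of lines. All of the constants below are the same order-of-magnitude estimates quoted above (the cell surface area is my own rough assumption), so treat the outputs as back-of-the-envelope figures, nothing more:

```python
# Back-of-the-envelope E. coli numbers, following the estimates in the text.
# All constants are order-of-magnitude figures, not precise measurements.

glucose_per_division = 2e9        # sugar molecules taken up per cell division
atp_for_biosynthesis = 4e9        # ATP needed to build the cell's components
atp_per_glucose = 30              # aerobic yield, upper end

atp_available = glucose_per_division * atp_per_glucose
print(f"ATP available vs. biosynthesis need: {atp_available / atp_for_biosynthesis:.0f}x")

# Glucose transporter load on the membrane
carbons_needed = 1e10             # carbon atoms per generation
generation_time_s = 40 * 60       # ~40 minute doubling time
transporter_rate = 100            # glucose molecules per transporter per second
transporter_area_nm2 = 14         # footprint of one transporter complex
cell_surface_nm2 = 2.4e6          # ~2.4 square microns, a rough assumed figure

glucose_needed = carbons_needed / 6                       # 6 carbons per glucose
transporters = glucose_needed / (transporter_rate * generation_time_s)
fraction = transporters * transporter_area_nm2 / cell_surface_nm2
print(f"Transporters: {transporters:.0f}, membrane fraction: {fraction:.1%}")
```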

That's quite a bit, actually. But is it the maximum? Could a bacterium run with a 10% load, or would another rate-limiting step (at the ribosome, perhaps?) make itself felt? I have to say, I find this manner of thinking oddly refreshing. The growing popularity of synthetic biology and systems biology would seem to be a natural fit for this kind of thing.

It's all quite reminiscent of the famous 2002 paper (PDF) "Can A Biologist Fix a Radio", which called (in a deliberately provocative manner) for just such thinking. (The description of a group of post-docs figuring out how a radio works in that paper is not to be missed - it's funny and painful/embarrassing in almost equal measure). As the author puts it, responding to some objections:

One of these arguments postulates that the cell is too complex to use engineering approaches. I disagree with this argument for two reasons. First, the radio analogy suggests that an approach that is inefficient in analyzing a simple system is unlikely to be more useful if the system is more complex. Second, the complexity is a term that is inversely related to the degree of understanding. Indeed, the insides of even my simple radio would overwhelm an average biologist (this notion has been proven experimentally), but would be an open book to an engineer. The engineers seem to be undeterred by the complexity of the problems they face and solve them by systematically applying formal approaches that take advantage of the ever-expanding computer power. As a result, such complex systems as an aircraft can be designed and tested completely in silico, and computer-simulated characters in movies and video games can be made so eerily life-like. Perhaps, if the effort spent on formalizing description of biological processes would be close to that spent on designing video games, the cells would appear less complex and more accessible to therapeutic intervention.

But I'll let the PNAS authors have the last word here:

"It is fair to wonder whether this emphasis on quantification really brings anything new and compelling to the analysis of biological phenomena. We are persuaded that the answer to this question is yes and that this numerical spin on biological analysis carries with it a number of interesting consequences. First, a quantitative emphasis makes it possible to decipher the dominant forces in play in a given biological process (e.g., demand for energy or demand for carbon skeletons). Second, order of magnitude BioEstimates merged with BioNumbers help reveal limits on biological processes (minimal generation time or human-appropriated global net primary productivity) or lack thereof (available solar energy impinging on Earth versus humanity’s demands). Finally, numbers can be enlightening by sharpening the questions we ask about a given biological problem. Many biological experiments report their data in quantitative form and in some cases, as long as the models are verbal rather than quantitative, the theor y will lag behind the experiments. For example, if considering the input–output relation in a gene-regulatory net work or a signal- transduction network, it is one thing to say that the output goes up or down, it is quite another to say by how much.
.

Comments (34) + TrackBacks (0) | Category: Biological News | Who Discovers and Why

January 27, 2010

Enzymes and Fluorines

Posted by Derek

It hit me, one day during my graduate career, that I was spending my nights, days, weekends, and holidays trying to make a natural product, while the bacterium that produced the thing in the first place was sitting around in the dirt of a Texas golf course, making the molecule at ambient temperature in water and managing to perform all its other pressing business at the same time. This put me in my place. I've respected biosynthesis ever since.

But there are some areas where we humans can still outproduce the small-and-slimies, and one of those is in organofluorine compounds. Fluorine's a wonderful element to use in medicinal chemistry, since it alters the electronic properties of your molecule without changing its shape (or adding much weight), and the C-F bond is metabolically inert. But those very properties can make fluorination a tricky business. If you can displace a leaving group with fluoride ion to get your compound, then good for you. Too often, though, those charges are the wrong way around, and electrophilic fluorination is the only solution. There are heaps of different ways to do this in the literature, which is a sign to the experienced chemist that there are no general methods to be had. (That's one of my Laws of the Lab, actually). The reagents needed for these transformations start with a few in the Easily Dealt With category, wind entertainingly through the Rather Unusual, and rapidly pile up over at the Truly Alarming end.

But at least we can get some things to work. The natural products with fluorine in them can be counted on the fingers. A fluorinase enzyme has been isolated which does the key biotransformation on S-adenosyl methionine, the first step on the way to natural products such as 4-fluorothreonine (using fluoride ion, naturally - if an enzyme is ever discovered that uses electrophilic F-plus as an intermediate, I will stand at attention and salute it). And now comes word that this has been successfully engineered into another bacterial species, and used to produce a fluorine analog of that bacterium's usual organochlorine natural product.

It isn't pretty, but it does work. One big problem is that the fluoride ion the enzyme needs is toxic to the rest of the organism, so you can't push this system too hard. But the interest in this sort of transformation is too high (and the potential payoff too lucrative) for it to stay obscure forever. Bring on the fluorinating enzymes!

Comments (11) + TrackBacks (0) | Category: Biological News

January 22, 2010

Receptors, Moving and Shaking

Posted by Derek

I've written here before about how I used to think that I understood G-protein coupled receptors (GPCRs), but that time and experience have proven to me that I didn't know much of anything. One of the factors that's complicated that field is the realization that these receptors can interact with each other, forming dimers (or perhaps even larger assemblies) which presumably are there for some good reason, and can act differently from the classic monomeric form.
[Image: M1 receptors]
A neat paper has appeared in PNAS that gives us some quantitative numbers on this phenomenon, and some great pictures as well. What you're looking at is a good ol' CHO cell, transfected with muscarinic M1 receptors. Twenty years ago (gulp) I was cranking out compounds to tickle cell membranes of this exact type, among others. The receptors are visualized by a fluorescent ligand (telenzepine), and the existence of dimers can be inferred from the "double-intensity" spots shown in the inset.

With this kind of resolution and time scale, the UK team that did this work could watch the receptors wandering over the cell surface in real time. It's a classic random walk, as far as they can tell. Watching the cohort of high-intensity spots, they can see changes as they switch to lower-intensity monomers and back again. Over a two-second period, it appeared that about 81% of the tracks were monomers, 9% were dimers, and 3% changed over during the tracking. (The remaining 7% were impossible to assign with confidence, which makes me wonder what's lurking down there).

They refined the technique by using two differently-fluorescent forms of labeled telenzepine, labeling the cells in a 50/50 ratio, and watching what happens to the red, green, (and combined yellow) spots over time. It looks as if the receptor population is a steady-state mix of monomers and dimers, exchanging on a time scale of seconds. Of course, the question comes up of how different ligands might affect this process, and you could begin to answer that with different fluorescent species. But since the technique depends on having a low-off-rate species bound to the receptor in order to see it, some of the most interesting dynamic questions will have to wait. It's still very nice to actually see these things, though; it gives a medicinal chemist something to picture. . .
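
The logic of that dual-label readout is easy to play with numerically. Here's a minimal Monte Carlo sketch in Python - the receptor count, rate parameters, and 50/50 labeling are all invented for illustration, not taken from the paper - where a dimer carrying one of each label would show up as a "yellow" spot:

    import random

    # Toy monomer<->dimer exchange with invented rates; each receptor carries
    # a red or green label (50/50), as in the dual-label telenzepine experiment.
    random.seed(0)
    N, STEPS = 1000, 200           # receptors, time steps (arbitrary units)
    P_ASSOC, P_DISSOC = 0.02, 0.2  # per-step probabilities (made up)

    labels = ["red" if random.random() < 0.5 else "green" for _ in range(N)]
    partner = [None] * N           # index of dimer partner, or None for monomers

    for _ in range(STEPS):
        free = [i for i in range(N) if partner[i] is None]
        random.shuffle(free)
        # pair up some free monomers into dimers
        for a, b in zip(free[0::2], free[1::2]):
            if random.random() < P_ASSOC:
                partner[a], partner[b] = b, a
        # break up some dimers
        for i in range(N):
            j = partner[i]
            if j is not None and i < j and random.random() < P_DISSOC:
                partner[i] = partner[j] = None

    dimers = [(i, partner[i]) for i in range(N) if partner[i] is not None and i < partner[i]]
    mixed = sum(labels[i] != labels[j] for i, j in dimers)
    print(f"monomer fraction: {(N - 2 * len(dimers)) / N:.2f}")
    print(f"dimers: {len(dimers)}, of which two-color ('yellow'): {mixed}")

With random labeling, roughly half of the dimers at any instant should be two-color, which is why the appearance and disappearance of yellow spots reports on the exchange between monomers and dimers.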

Comments (7) + TrackBacks (0) | Category: Biological News

Maybe You Need Some More Testosterone Over There

Email This Entry

Posted by Derek

This one's also from the Department of Placebo Effects - read on. An interesting paper out in Nature details a study where volunteers took small doses of testosterone or placebo, and then participated in a standard behavioral test, the "Ultimatum Game". That's the one where two people participate, with one of them given a sum of money (say, $10) that's to be divided between the two of them. The player with the money makes an offer to divide the pot, which the other player can only take or leave (no counteroffers). A number of interesting questions about altruism and competition have been examined through this game and its variants - basically, the first thing to ask is how much the "dictator" player will feel like offering at all. (If you like, here's the Freakonomics guys talking about the game, which features in a chapter of their latest, SuperFreakonomics).
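
For anyone who hasn't run across it, the whole game fits in a few lines of Python (a bare-bones sketch; the pot size and the rejection threshold are just illustrative numbers):

    # Minimal ultimatum game: the proposer offers a split of the pot; the responder
    # either accepts (both get paid) or rejects (both get nothing).
    def ultimatum(pot, offer, responder_threshold):
        """Return (proposer_payoff, responder_payoff)."""
        if offer >= responder_threshold:
            return pot - offer, offer      # offer accepted
        return 0.0, 0.0                    # offer rejected out of pride/spite

    print(ultimatum(10.0, 4.0, 3.0))   # (6.0, 4.0) -- a fair-enough offer, accepted
    print(ultimatum(10.0, 1.5, 3.0))   # (0.0, 0.0) -- a lowball offer, thrown back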

What's been found in many studies is that the second players often reject offers that they feel are insultingly low, giving up a sure gain for the sake of pride and sending a message to the first player. I think of this as the "Let me tell you what you can do with your buck-fifty" option. So what does exposure to testosterone do for this behavior? As the authors of the new paper talk about, there are two (not necessarily exclusive) theories about some of the hormone's effects. Increases in aggression and competitiveness are widely thought to be one of these, but there's also a good amount of literature to suggest that status-seeking behavior is perhaps more important. But if someone is going to be aggressive about the ultimatum game, they're going to make a lowball offer and damn the consequences, whereas if they're looking for status, they may well choose a course that avoids having their offer thrown back in their face.

Using an established double-blind protocol for testosterone dosing in female subjects (sublingual dosing four hours before the test), the authors observed the second behavior. Update: keep in mind, women have endogenous testosterone, too. The subjects who got testosterone made more generous offers (from about $3.50 to closer to $4.00). The error bars on that measurement just miss overlapping, p = 0.031. But here's the part I found even more interesting: the subjects who believed that they got testosterone made significantly less fair/generous offers than the ones who believed that they got the placebo (p = 0.006). Because, after all, testosterone makes you all tough and nasty, as everyone knows. As the authors sum it up:

"The profound impact of testosterone on bargaining behaviour supports the view that biological factors have an important role in human social interaction. This does, of course, not mean that psychological factors are not important. In fact, our finding that subjects’ beliefs about testosterone are negatively associated with the fairness of bargaining offers points towards the importance of psychological and social factors. Whereas other animals may be predominantly under the influence of biological factors such as hormones, biology seems to exert less control over human behaviour. Our findings also teach an important methodological lesson for future studies: it is crucial to control for subjects’ beliefs because the pure substance effect may be otherwise under- or overestimated. . ."

Comments (13) + TrackBacks (0) | Category: Biological News | General Scientific News | The Central Nervous System

January 21, 2010

An Enzyme Inhibitor You Have Never, Ever, Considered

Email This Entry

Posted by Derek

I promise you that. Take a look at this abstract:

". . .an unappreciated physicochemical property of xenon has been that this gas also binds to the active site of a series of serine proteases. Because the active site of serine proteases is structurally conserved, we have hypothesized and investigated whether xenon may alter the catalytic efficiency of tissue-type plasminogen activator (tPA), a serine protease that is the only approved therapy for acute ischemic stroke today."

They go on to provide evidence that xenon is indeed a tPA inhibitor. And as it turns out, there's more evidence for xenon having a number of physiological effects, and enzyme inhibition has been proposed as one mechanism. Who knew?

Now, there's an SAR challenge. . .

Comments (19) + TrackBacks (0) | Category: Biological News

January 18, 2010

Correlations, Lovely Correlations

Email This Entry

Posted by Derek

Anyone looking over large data sets from human studies needs to be constantly on guard. Sinkholes are everywhere, many of them looking (at first glance) like perfectly solid ground on which to build some conclusions. This, to be honest, is one of the real problems with full release of clinical trial data sets: if you're not really up on your statistics, you can convince yourself of some pretty strange stuff.

Even people who are supposed to know what they're doing can bungle things. For instance, you may well have noticed a lot of papers coming out in the last few years correlating neuroimaging studies (such as fMRI) with human behaviors and personality traits. Neuroimaging is a wonderfully wide-open, complex, and important field, and I don't blame people for a minute for pushing it as far as it can go. But just how far is that?

A recent paper (PDF) suggests that the conclusions have run well ahead of the numbers. Recent papers have been reporting impressive correlations between the activation of particular brain regions and associated behaviors and traits. But when you look at the reproducibility of the behavioral measurements themselves, the correlation is 0.8 at best. And the reproducibility of the blood-oxygen fMRI measurements is about 0.7. The highest possible correlation you could expect from those two is the square root of their product, or 0.74. Problem is. . .a number of papers, including ones that get the big press, show correlations much higher than that. Which is impossible.
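
That ceiling is just the standard attenuation bound from classical test theory, and it's a one-line check (using the round reliability figures quoted above):

    from math import sqrt

    behavioral_reliability = 0.8   # test-retest reliability of the behavioral measure
    fmri_reliability = 0.7         # reliability of the blood-oxygen fMRI signal

    max_observable_r = sqrt(behavioral_reliability * fmri_reliability)
    print(f"ceiling on any observable correlation: {max_observable_r:.3f}")
    # Anything reported well above this value should set off alarm bells.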

The Neurocritic blog has more details on this. What seems to have happened is that many researchers found signals in their patients that correlated with the behavior that they were studying, and then used that same set of data to compute the correlations across subjects. I find, by watching people go by in the street, that I can pick out a set of people who wear bright red jackets and have ugly haircuts. Herding them together and rating them on the redness of their attire and the heinousness of their hair, I find a notably strong correlation! Clearly, there is an underlying fashion deficiency that leads to both behaviors. Or people had their hair in their eyes when they bought their clothes. Further studies are indicated.

No, you can't do it like that. A selection error of that sort could let you relate anything to anything. The authors of the paper (Edward Vul and Nancy Kanwisher of MIT) have done the field a great favor by pointing this out. You can read how the field is taking the advice here.
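
To see how far that selection error can take you, here's a minimal simulation in Python (subject and voxel counts are invented; all the data are pure noise): pick out the "voxels" that happen to correlate with a behavioral score, then compute the correlation of their average with that same score, in the same data set.

    import random, statistics

    def corr(xs, ys):
        """Pearson correlation."""
        mx, my = statistics.mean(xs), statistics.mean(ys)
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
        return num / den

    random.seed(1)
    n_subjects, n_voxels = 20, 5000
    behavior = [random.gauss(0, 1) for _ in range(n_subjects)]               # pure noise
    voxels = [[random.gauss(0, 1) for _ in range(n_subjects)] for _ in range(n_voxels)]

    # Circular step: keep only the voxels that already correlate with behavior...
    selected = [v for v in voxels if corr(v, behavior) > 0.5]
    # ...then report the correlation of their average signal with the same behavior.
    roi_signal = [statistics.mean(v[i] for v in selected) for i in range(n_subjects)]
    print(f"voxels selected: {len(selected)}")
    print(f"'impressive' correlation from pure noise: {corr(roi_signal, behavior):.2f}")

Run it and you'll get a gaudy correlation out of data that contains no signal at all - which is exactly the non-independence trap the paper describes. The honest version selects the voxels in one half of the data and computes the correlation in the other half.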

Comments (13) + TrackBacks (0) | Category: Biological News | Clinical Trials | The Central Nervous System

January 11, 2010

MAGL: A New Cancer Target

Email This Entry

Posted by Derek

I do enjoy some good chemical biology, and the latest Cell has another good example from the Cravatt group at Scripps (working with a team at Brigham and Women's Hospital over here on this coast). What they've done is profile various types of tumor cells using an activity-based probe to search for changes in serine hydrolase enzymes. Those are a large and diverse class (with quite a few known drug targets in them already), and there had already been reports that activity in this area was altered as cancer cell lines became more aggressive.

What they tracked down was an enzyme called MAGL (monoacylglyceride lipase). That's an interesting finding. Cancer cells have long been known to have different ideas about lipid handling, and several enzymes in that metabolic area have been proposed over the years as drug targets. (The first one I can think of is fatty acid synthase (FAS), whose elevated presence has been correlated with poor outcome in several tumor types). In general, aggressive tumor cells seem to run with higher levels of free fatty acids, for reasons that aren't quite clear. Some of the downstream products are signaling molecules, and some of these lipids may just be needed for elevated levels of cell membrane synthesis.

But it looks from this paper as if MAGL could be the real lipid-handling target that oncology people have been looking for. The teams inhibited the enzyme with a known small molecule (well, relatively small), and also via RNA knockdown, and in both cases they were able to disrupt growth of tumor cell lines. The fiercer the cells, the more they were affected, which tracked with the MAGL activity they had initially. On the other hand, inducing higher expression of MAGL in relatively tame tumor cells turned them aggressive and hardy. They have a number of lines of evidence in this paper, and they all point the same way.

One of those might be important for other reasons. The teams took the cell lines with impaired MAGL activity, and wondered if this could be rescued by providing them with the expected products that the enzyme would deliver. Stearic and palmitic acid are two of the fatty acids whose levels seem to be heavily regulated by MAGL, and sure enough, providing the MAGL-deficient cells with these restored their growth and mobility. As the paper points out specifically, this could have implications for a relationship between obesity and tumorigenesis. (I'd add a recommendation to look with suspicion at other conditions that lead to higher-than-usual levels of circulating free fatty acids, such as type II diabetes, or even fasting).

It may be that I particularly enjoyed this paper because I have a lipase-inhibiting past. As anyone who's run my name through SciFinder or Google Scholar has noticed, I helped lead a team some years ago that developed a series of inhibitors for hormone-sensitive lipase, a potential diabetes target. We were scuppered, though, by the fact that this enzyme does (at least) two different things in two totally different kinds of tissue. Out in fat and muscle, it helps hydrolyze glycerides (in fact, it's right in the same metabolic line as MAGL), and that's the activity we were targeting. But in steroidogenic tissues, it's known as neutral cholesteryl ester hydrolase, and it breaks those down to provide cholesterol for steroid biosynthesis. Unfortunately, when you inhibit HSL, you also do nasty things to the adrenals and a few other tissues. There's no market for a drug that gives you Addison's disease, I can tell you.

So I wondered when I saw this paper if MAGL has a dual life as well. If I'd ever worked in analgesia or cannabinoid receptor pharmacology, though, I'd have already known the answer. MAGL also regulates the levels of several compounds that signal through the endocannabinoid pathway, and has been looked at as a target in those areas. None of this seems to have an effect on the oncology side of things, though - this latest paper also looked at CB receptor effects on their cell lines that were deficient in MAGL, and found no connection there.

So, what we have from this paper is a very interesting cancer target (whose crystal structure was recently reported, to boot), a new appreciation of lipid handling in tumors, and a possible rationale for the connections seen between lipid levels and cancer in general. Not bad!

Special bonus: thanks to Cell's video abstracts, you can hear Ben Cravatt and his co-worker Dan Nomura explain their paper on YouTube. The journal has recently enhanced the way their papers are presented online, actually, and I plan to do a whole separate blog entry on that (and on video abstracts and the like).

Comments (9) + TrackBacks (0) | Category: Biological News | Cancer | Diabetes and Obesity

January 7, 2010

Is XMRV the Cause of Chronic Fatigue Syndrome? Or Anything?

Email This Entry

Posted by Derek

Last fall it was reported that a large proportion of patients suffering from chronic fatigue syndrome also showed positive for a little-understood retrovirus (XMRV). This created a lot of understandable excitement for sufferers of a condition that (although often ill-defined) seems to have some puzzling biology buried in it somewhere.

Well, let the fighting begin: a new paper in PLoS One has challenged this correlation. Groups from Imperial College and King's College have failed to detect any XMRV in a similar patient population:

. . .Unlike the study of Lombardi et al., we have failed to detect XMRV or closely related MRV proviral DNA sequences in any sample from CFS cases. . .Based on our molecular data, we do not share the conviction that XMRV may be a contributory factor in the pathogenesis of CFS, at least in the U.K.

Interestingly, XMRV has also been reported in tissue from prostate cancer patients, but recent studies in Germany and Ireland failed to replicate these results. Could we be looking at a geographic coincidence, a retroviral infection that's found in North America but not in Europe, and one whose connection with these diseases is either complex or nonexistent?

Note: as per a comment on this post, the Whittemore Peterson Institute is firing back, claiming that their original work is valid and that the London study has many significant differences. PDF of their release here.

Comments (94) + TrackBacks (0) | Category: Biological News | Cancer | Infectious Diseases

Extortion, Retractions, And More

Email This Entry

Posted by Derek

Now here's a strange tale, courtesy of Science magazine, about some retracted work from Peter Schultz's group at Scripps. Two papers from 2004 detailed how to incorporate glycosylated amino acids (glucosamine-serine and galactosamine-threonine) directly into proteins. These featured a lot of work from postdoc Zhiwen Zhang (who later was hired by the University of Texas for a faculty position).

But another postdoc was later having trouble reproducing the work, and in 2006 he made his case for why he thought it was incorrect. Following that:

Schultz says the concerns raised were serious enough that he asked a group of lab members to try to replicate the work in Zhang's Science paper in addition to several other important discoveries Zhang had made. That task, however, was complicated by the fact that Zhang's lab notebooks, describing his experiments in detail, were missing. Schultz says that in the early fall of 2006, the notebooks were in Schultz's office. But at some point after that they were taken without his knowledge and have never resurfaced.

After considerable effort, Schultz says his students were able to replicate most of the work. The biggest exception was the work that served as the basis for the 2004 Science and JACS papers. "It was clear the glycosylated amino acid work could not be reproduced as reported. So we tried to figure out what was going on," Schultz says.

So far, so not-so-good. But here's where things get odd. Around this time (early 2007), Zhang started to get e-mails at Texas saying that unless he sent $4000 to an address in San Diego, the writer would expose his "fraud" and cause him to get fired. The messages were signed "Michael Pemulis" - Science doesn't pick up on that pen name, but fans of the late David Foster Wallace will recognize the name of the vengeful practical joker from Infinite Jest.

That brings up another point: the e-mails quoted in the Science article are in somewhat broken English: "you lose job. ... Texas will fire you before you tenure. . ." and that sort of thing. But my belief is that no one who drops the second person possessive while writing would make it far enough into Infinite Jest to meet Michael Pemulis and use him as an appropriate alias for an extortion plot.

At any rate, after the San Diego police got involved, they told Zhang that they had a suspect, but Zhang decided not to press charges. That fall, though, "Pemulis" dropped the bomb, with a hostile anonymous letter to everyone involved - officials at Scripps and UT-Austin, the editors at Science, etc. In 2009, Zhang was denied tenure. The postdoc mentioned above (now at Cardiff) has published a paper in JBC detailing the problems with the original work. (He denies having anything to do with the missing lab notebooks or the threats made to Zhang). And everyone involved is still wondering just what is going on. . .

I certainly have no idea. But I can say this: although I've spent a lot more time in industry than in academia, a disproportionate number of the people I've worked with over the years that I consider to have had serious mental problems are still from my academic years. Whoever "Pemulis" is, I'd put him or her into that category. Grad students and post-docs are under a lot of pressure, and some of them are at a point in their lives when their internal problems are starting to seriously affect them.

Comments (49) + TrackBacks (0) | Category: Biological News | The Dark Side | The Scientific Literature

January 6, 2010

Five Technologies For the Scrap Heap?

Email This Entry

Posted by Derek

Xconomy has a piece on biotechnologies that look to be headed for obsolescence. I think the list is mostly correct - it includes the raw proteomic approach to understanding disease states and a lot of the biomarker work being done currently. I won't spoil the rest of the list; take a look and see what you think. Note: RNA interference is not on it, in case you're wondering. Nor are stem cells.

Comments (10) + TrackBacks (0) | Category: Biological News

January 5, 2010

Run It Past the Chemists

Email This Entry

Posted by Derek

I missed this paper when it came out back in October: "Reactome Array: Forging a Link Between Metabolome and Genome". I'd like to imagine that it was the ome-heavy title itself that drove me away, but I have to admit that I would have looked it over had I noticed it.

And I probably should have, because the paper has been under steady fire since it came out. It describes a method to metabolically profile a variety of cells through the use of a novel nanoparticle assay. The authors claim to have immobilized 1675 different biomolecules (representing common metabolites and intermediates) in such a way that enzymes recognizing any of them will set off a fluorescent dye signal. It's an ingenious and tricky method - in fact, so tricky that doubts set in quickly about the feasibility of doing it on 1675 widely varying molecular species.
[Image: Reactome array scheme]
And the chemistry shown in the paper's main scheme looks wonky, too, which is what I wish I'd noticed. Take a look - does it make sense to describe a positively charged nitrogen as a "weakly amine region", whatever that is? Have you ever seen a quaternary aminal quite like that one before? Does that cleavage look as if it would work? What happens to the indane component, anyway? Says the Science magazine blog:

In private chats and online postings, chemists began expressing skepticism about the reactome array as soon as the article describing it was published, noting several significant errors in the initial figure depicting its creation. Some also questioned how a relatively unknown group could have synthesized so many complex compounds. The dismay grew when supplementary online material providing further information on the synthesized compounds wasn’t available as soon as promised. “We failed to put it in on time. The data is quite voluminous,” says co-corresponding author Peter Golyshin of Bangor University in Wales, a microbiologist whose team provided bacterial samples analyzed by Ferrer’s lab.

Science is also coming under fire. “It was stunning no reviewer caught [the errors],” says Kiessling. Ferrer says the paper’s peer reviewers did not raise major questions about the chemical synthesis methods described; the journal’s executive editor, Monica Bradford, acknowledged that none of the paper’s primary reviewers was a synthetic organic chemist. “We do not have evidence of fraud or fabrication. We do have concerns about the inconsistencies and have asked the authors' institutions to try to sort all of this out by examining the original data and lab notes,” she says.

The magazine published an "expression of concern" before the Christmas break, saying that in response to questions the authors had provided synthetic details that "differ substantially" from the ones in the original manuscript. An investigation is underway, and I'll be very interested to see what comes of it.

Comments (46) + TrackBacks (0) | Category: Analytical Chemistry | Biological News | Drug Assays | The Scientific Literature

December 9, 2009

Water and Proteins Inside Cells: Sloshing Around, Or Not?

Email This Entry

Posted by Derek

Back in September, talking about the insides of cells, I said:

There's not a lot of bulk water sloshing around in there. It's all stuck to and sliding around with enzymes, structural proteins, carbohydrates, and the like. . ."

But is that right? I was reading this new paper in JACS, where a group at UNC is looking at the NMR of fluorine-labeled proteins inside E. coli bacteria. (It's pretty interesting, not least because they found that they can't reproduce some earlier work in the field, for reasons that seem to have them throwing their hands up in the air). But one reference caught my eye - this paper from PNAS last year, from researchers in Sweden.

That wasn't one that I'd read when it came out - the title may have caught my eye, but the text rapidly gets too physics-laden for me to follow very well. The UNC folks appear to have waded through it, though, and picked up some key insights which otherwise I'd have missed. The PNAS paper is a painstaking NMR analysis of the states of water molecules inside bacterial cells. They looked at both good ol' E. coli and at an extreme halophile species, figuring that that one might handle its water differently.

But in both cases, they found that about 85% of the water molecules had rotational states similar to bulk water. That surprises me (as you'd figure, given the views I expressed above). I guess my question is "how similar?", but the answer seems to be "as similar as we can detect, and that's pretty good". It looks like all the water molecules past the first layer on the proteins are more or less indistinguishable from plain water by their method. (No difference between the two types of bacteria, by the way). And given that the concentration of proteins, carbohydrates, salts, etc. inside a cell is rather different than bulk water, I have to say I'm at a loss. I wonder how different the rotational states of water are (as measured by NMR relaxation times) for samples that are, say, 1M in sodium chloride, guanidine, or phosphate?

The other thing that struck me was the Swedish group's estimate of protein dynamics. They found that roughly half of the proteins in these cells were rotationally immobile, presumably bound up in membranes or in multi-protein assemblies. It's been clear for a long time that there has to be a lot of structural order in the way proteins are arranged inside a living cell, but that might be even more orderly than I'd been picturing. At any rate, I may have to adjust my thinking about what those environments look like. . .

Comments (8) + TrackBacks (0) | Category: Analytical Chemistry | Biological News

November 5, 2009

What Exactly Does Resveratrol Do?

Email This Entry

Posted by Derek

Resveratrol's a mighty interesting compound. It seems to extend lifespan in yeast and various lower organisms, and has a wide range of effects in mice. Famously, GlaxoSmithKline has expensively bought out Sirtris, a company whose entire research program started with resveratrol and similar compounds that modulate the SIRT1 pathway.

But does it really do that? The picture just got even more complicated. A group at Amgen has published a paper saying that when you look closely, resveratrol doesn't directly affect SIRT1 at all. Interestingly, this conclusion has been reached before (by a group at the University of Washington), and both teams conclude that the problem is the fluorescent peptide substrate commonly used in sirtuin assays. With the fluorescent group attached, everything looks fine - but when you go to the extra trouble of reading things out without the fluorescent tag, you find that resveratrol doesn't seem to make SIRT1 do anything to what are supposed to be its natural substrates.

"The claim of resvertraol being a SIRT1 activator is likely to be an experimental artifact of the SIRT1 assay that employs the Fluor de Lys-SIRT1 peptide as a substrate. However, the beneficial metabolic effects of resveratrol have been clearly demonstrated in diabetic animal models. Our data do not support the notion that these metabolic effects are mediated by direct SIRT1 activation. Rather, they could be mediated by other mechanisms. . ."

They suggest activation of AMPK (an important regulatory kinase that's tied in with SIRT1) as one such mechanism, but admit that they have no idea how resveratrol might activate it. Does that process still require SIRT1 at all? Who knows? One thing I think I do know is that this has something to do with this Amgen paper from 2008 on new high-throughput assays for sirtuin enzymes.

One wonders what assay formats Sirtris has been using to evaluate their new compounds, and one also wonders what they make of all this now at GSK. Does one not? We can be sure, though, that there are plenty of important things that we don't know yet about sirtuins and the compounds that affect them. It's going to be quite a ride as we find them out, too.

Comments (35) + TrackBacks (0) | Category: Aging and Lifespan | Biological News | Drug Assays

October 28, 2009

Nanotech Armor

Email This Entry

Posted by Derek

Now here's a completely weird idea: a group in Korea has encapsulated individual living yeast cells in silica. They start out by coating the cells with some charged polymers that are known to serve as a good substrate for silication, and then expose the yeast to silicic acid solution. They end up with hard-shell yeast, sort of halfway to being a bizarre sort of diatom.
[Image: silica-coated yeast cells]
The encapsulated cells behave rather differently, as no doubt would we all under such conditions. After thirty days in the cold with no nutrients, the silica-coated yeast is at least three times more viable than wild-type cells (as determined by fluorescent staining). On the other hand, when exposed to a warm nutrient broth, the silica-coated yeast does not divide, as opposed to wild-type yeast, which of course takes off like a rocket under such conditions. They're still alive, but just sitting around - which makes you wonder what signals, exactly, are interrupting mitosis.
[Image: yeast cell]
The authors tried the same trick on E. coli bacteria, but found that the initial polymer coating step killed them off. That's disappointing, but not surprising, given that disruption of the bacterial membrane with charged species is the mode of action of several broad-spectrum antibiotics.

"Hmmm. . .so what?" might be one reaction to this work. But stop and think about it for a minute. This provides a new means to an biological/inorganic interface, a way to stich cell biology and chemical nanotechnology together. If you can layer yeast cells with silica and they survive (and are, in fact, fairly robust), you can imagine gaining more control over the process and extending it to other substances. A layer that could at least partially conduct electricity would be very interesting, as would layers with various-sized pores built into them. The surfaces could be further functionalized with all sorts of other molecules as well for more elaborate experiments. No, this could keep a lot of people busy for a long time, and I suspect it will.

Comments (15) + TrackBacks (0) | Category: Biological News

October 16, 2009

Engineering Receptors: Not Quite There Yet. Not Exactly.

Email This Entry

Posted by Derek

There have been several reports over the years of people engineering receptor proteins to make them do defined tasks. They've generally been using the bacterial periplasmic binding proteins (PBPs) as a starting point, attaching some sort of fluorescent group onto one end, so that when a desired ligand binds, the protein folds in on itself in a way to set off a fluorescent resonance energy transfer (FRET). That's a commonly used technique to see if two proteins are in close proximity to each other; it's robust enough to be used in many high-throughput screening assays.

So the readout isn't the problem. But something else certainly is. In a new PNAS paper, a group at the Max Planck Institute in Tübingen has gone back and taken a look at these receptors, which are reported to bind a number of interesting ligands such as serotonin, lactate, and even TNT and a model for nerve gas agents. You can see the forensic applications for those latter two if the technique worked well, and the press releases were rather breathless, as they tend to be. But not only did these workers claim a very interesting sensor system, but they also went out of their way to emphasize that they arrived at these results computationally:

Computational design offers enormous generality for engineering protein structure and function. Here we present a structure-based computational method that can drastically redesign protein ligand-binding specificities. This method was used to construct soluble receptors that bind trinitrotoluene, l-lactate or serotonin with high selectivity and affinity. These engineered receptors can function as biosensors for their new ligands; we also incorporated them into synthetic bacterial signal transduction pathways, regulating gene expression in response to extracellular trinitrotoluene or l-lactate. The use of various ligands and proteins shows that a high degree of control over biomolecular recognition has been established computationally.

The Max Planck group would like to disagree with that. Their PNAS paper is entitled "Computational Design of Ligand Binding is Not a Solved Problem". They were able to get crystals of the serotonin-binding protein, but could not get any X-ray structures that showed any serotonin binding in the putative ligand pocket. They then turned to a well-known suite of techniques to characterize ligand binding. One of these is thermal stability: when a protein is binding a high-affinity ligand, it tends to show a higher melting point, since its structure is often more settled-down than the open form. None of the reported receptors showed any such behavior, and all of them were substantially less thermally stable than the wild-type proteins. Strike one.
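
As an aside, the thermal-stability readout is easy to picture with a toy calculation. Here's a minimal sketch in Python (two-state melting curves with invented Tm values and slope - nothing from the actual paper) showing the kind of shift a real binder would produce:

    from math import exp

    # Toy thermal-shift readout: two-state melting curves with and without a bound
    # ligand. All parameters (Tm values, slope) are invented for illustration.
    def folded_fraction(temp_c, tm_c, slope=2.0):
        """Fraction of protein still folded at a given temperature."""
        return 1.0 / (1.0 + exp((temp_c - tm_c) / slope))

    def extract_tm(curve):
        """Temperature at which the folded fraction crosses 0.5."""
        for (t1, f1), (t2, f2) in zip(curve, curve[1:]):
            if f1 >= 0.5 > f2:
                return t1 + (t2 - t1) * (f1 - 0.5) / (f1 - f2)   # linear interpolation
        return None

    temps = [t / 2.0 for t in range(80, 161)]          # 40 to 80 C in 0.5 C steps
    apo   = [(t, folded_fraction(t, tm_c=52.0)) for t in temps]
    bound = [(t, folded_fraction(t, tm_c=57.0)) for t in temps]   # ligand-stabilized
    print(f"apo Tm ~ {extract_tm(apo):.1f} C, +ligand Tm ~ {extract_tm(bound):.1f} C")
    # A genuine binder usually shifts Tm upward by a few degrees; the engineered
    # receptors showed no such stabilization (and melted lower than wild-type).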

They then tried ITC, a calorimetry measurement to look for heat of binding. A favorable binding event releases heat - it's a lower-energy state - but none of the engineered receptors showed any changes at all when their supposed ligands were introduced. Strike two. And finally, they turned to NMR experiments, which are widely used to determine protein structure and characterize binding of small molecules. Wild-type proteins of this sort showed exactly what they should have: big conformational changes when their ligands were present. But the engineered proteins showed almost no changes at all. Strike three, and as far as I'm concerned, these pieces of evidence absolutely close the case. These so-called receptors aren't binding anything.

So why do they show FRET signals? The authors suggest that this is some sort of artifact, not related to real receptor binding and note dryly that "Our analysis shows the importance of experimental and structural validation to improve computational design methodologies".

I should also note a very interesting sidelight: the same original research group also published a paper in Science on turning these computationally engineered PBPs into a functional enzyme. Unfortunately, this was retracted last year, when it turned out that the work could not be reproduced. Some wild-type enzyme was still present as an impurity, and when the engineered protein was rigorously purified, the activity went away. (Update: more on this retraction here, and there is indeed more to it). It appears that some other results from this work may be going away now, too. . .

Comments (12) + TrackBacks (0) | Category: Biological News

October 7, 2009

A Nobel for Ribosome Structure

Email This Entry

Posted by Derek

This was another Biology-for-Chemistry year for the Nobel Committee. Venkatraman Ramakrishnan (Cambridge), Thomas Steitz (Yale) and Ada Yonath (Weizmann Inst.) have won for X-ray crystallographic studies of the ribosome.

Ribosomes are indeed significant, to put it lightly. For those outside the field, these are the complex machines that ratchet along a strand of messenger RNA, reading off its three-letter codons, matching these with the appropriate transfer RNA that's bringing in an amino acid, then attaching that amino acid to the growing protein chain that emerges from the other side. This is where the cell biology rubber hits the road, where the process moves from nucleic acids (DNA going to RNA) and into the world of proteins, the fundamental working units of a day-to-day living cell.

The ribosome has a lot of work to do, and it does it spectacularly quickly and well. It's been obvious for decades that there was a lot of finely balanced stuff going on there. Some of the three-letter codons (and some of the tRNAs) look very much like some of the others, so the accuracy of the whole process is very impressive. If more proof were needed, it turned out that several antibiotics worked by disrupting the process in bacteria, which showed that a relatively small molecule could throw a wrench into this much larger machinery.

Ribosomes are made out of smaller subunits. A huge amount of work in the earlier days of molecular biology showed that the smaller subunit (known as 30S for how it spun down in a centrifuge tube) seemed to be involved in reading the mRNA, and the larger subunit (50S) was where the protein synthesis was taking place. Most of this work was done on bacterial ribosomes, which are relatively easy to get ahold of. They work in the same fashion as those in higher organisms, but have enough key differences to make them of interest by themselves (see below).

During the 1980s and early 1990s, Yonath and her collaborators turned out the first X-ray structures of any of the ribosomal subunits. Fuzzy and primitive by today's standards, those first data sets got better year by year, thanks in part to techniques that her group worked out first. (The use of CCD detectors for X-ray crystallography, a technology that was behind part of Tuesday's Nobel in Physics, was another big help, as was the development of much brighter and more focused X-ray sources). Later in the 1990s, Steitz and Ramakrishnan both led teams that produced much higher-resolution structures of various ribosomal subunits, and solved what's known as the "phase problem" for these. That's a key to really reconstructing the structure of a complex molecule from X-ray data, and it is very much nontrivial as you start heading into territory like this. (If you want more on the phase problem, here's a thorough and comprehensive teaching site on X-ray crystallography from Cambridge itself).
[Image: the 50S ribosomal subunit at 9, 5, and 2.4 angstrom resolution]
By the early 2000s, all three groups were turning out ever-sharper X-ray structures of different ribosomal subunits from various organisms. The illustration above, courtesy of the Nobel folks, shows the 50S subunit at 9-angstrom (1998), 5-angstrom (1999) and 2.4-angstrom (2000) resolution, and shows you how quickly this field was advancing. Ramakrishnan's group teased out many of the fine details of codon recognition, and showed how some antibiotics known to cause the ribosome to start bungling the process were able to work. It turned out that the opening and closing behavior of the 30S piece was a key for this whole process, with error-inducing antibiotics causing it to go out of synch. And here's a place where the differences between bacterial ribosomes and eukaryotic ones really show up. The same antibiotics can't quite bind to mammalian ribosomes, fortunately. Having the protein synthesis machinery jerkily crank out garbled products is just what you'd wish for the bacteria that are infecting you, but isn't something that you'd want happening in your own cells.

At the same time, Steitz's group was turning out better and better structures of the 50S subunit, and helping to explain how it worked. One surprise was that there was a highly ordered set of water molecules and hydrogen bonds involved - in fact, protein synthesis seems to be driven (energetically) almost entirely by changes in entropy, rather than enthalpy. Both his group and Ramakrishnan's have been actively turning out structures of the ribosome subunits in complex with various proteins that are known to be key parts of the process, and those mechanisms of action are still being unraveled as we speak.

The Nobel citation makes reference to the implications of all this for drug design. I'm of two minds on that. It's certainly true that many important antibiotics work at the ribosomal level, and understanding how they do that has been a major advance. But we're not quite to the point where we can design new drugs to slide right in there and do what we want. I personally don't think we're really at that stage with most drug targets of any type, and trying to do it against structures with a lot of nucleic acid character is particularly hard. The computational methods for those are at an earlier stage than the ones we have for proteins.

One other note: every time a Nobel is awarded, the thoughts go to the people who worked in the same area, but missed out on the citation. The three-recipients-max stipulation makes this a perpetual problem. This is outside my area of specialization, but if I had to list some people that just missed out here, I'd have to cite Harry Noller of UC-Santa Cruz and Marina Rodnina of Göttingen. Update: add Peter Moore of Yale as well. All of them work in this exact same area, and have made many real contributions to it - and I'm sure that there are others who could go on this list as well.

One last note: five Chemistry awards out of the last seven, by my count, have gone to fundamental discoveries in cell or protein biology. That's probably a reasonable reflection of the real world, but it does rather cut down on the number of chemists who can expect to have their accomplishments recognized. The arguing about this issue is not expected to cease any time soon.

Comments (47) + TrackBacks (0) | Category: Analytical Chemistry | Biological News | Current Events | Infectious Diseases

October 5, 2009

A Nobel for Telomerase

Email This Entry

Posted by Derek

As many had expected, a Nobel Prize has been awarded to Elizabeth Blackburn (of UCSF), Carol Greider (of Johns Hopkins), and Jack Szostak (of Harvard Medical School/Howard Hughes Inst.) for their work on telomerase. Blackburn had been studying telomeres since her postdoc days in the late 1970s, and she and Szostak worked together in the field in the early 1980s, collaborating from two different angles. Greider (then a graduate student in Blackburn's lab) discovered the telomerase enzyme in 1984. She's continued to work in the area, as well she might, since it's been an extremely interesting and important one.

Telomeres, as many readers will know, are repeating DNA stretches found on the end of chromosomes. It was realized in the 1970s that something of this kind needed to be there, since otherwise replication of the chromosomes would inevitably clip off a bit from the end each time (the enzymes involved can't go all the way to the ends of the strands). Telomeres are the disposable buffer regions, which distinguish the natural end of a chromosome from a plain double-stranded DNA break.

What became apparent, though, was that the telomerase complex often didn't quite compensate for telomere shortening. This provides a mechanism for limiting the number of cell divisions - when the telomeres get below a certain length, further replication is shut down. Telomerase activity is higher in stem cells and a few other specialized lines. This means that the whole area must be a key part of both cellular aging and the biology of cancer. In a later post, I'll talk about telomerase as a drug target, a tricky endeavour that straddles both of those topics.
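
The arithmetic behind that replication limit is about as simple as biology gets. A toy version in Python (starting length, loss per division, and cutoff are illustrative round numbers, not measured human values):

    # Toy telomere-clock model: each division clips off a fixed stretch of telomere;
    # below some critical length, further division shuts down.
    TELOMERE_START_BP = 10_000
    LOSS_PER_DIVISION_BP = 100
    CRITICAL_LENGTH_BP = 5_000
    TELOMERASE_EXTENSION_BP = 0      # set to ~100 to mimic a telomerase-active cell

    length, divisions = TELOMERE_START_BP, 0
    while length > CRITICAL_LENGTH_BP:
        length -= LOSS_PER_DIVISION_BP
        length += TELOMERASE_EXTENSION_BP
        divisions += 1
        if divisions > 10_000:       # a telomerase-active line never hits the limit
            break

    print(f"divisions before replicative shutdown: {divisions}")

Crank TELOMERASE_EXTENSION_BP up to match the loss and the counter never stops - which is roughly the situation in stem cells and in many tumors.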

It's no wonder that this work has attracted the amount of attention it has, and it's no wonder either that it's the subject of a well deserved Nobel. Congratulations to the recipients!

Comments (20) + TrackBacks (0) | Category: Aging and Lifespan | Biological News | Cancer | Current Events

September 11, 2009

Antioxidants and Cancer: Backwards?

Email This Entry

Posted by Derek

Readers may remember a study from earlier this year that suggested that taking antioxidants canceled out some of the benefits of exercise. It seems that the reactive oxygen species themselves, which everyone's been assuming have to be fought, are actually being used to signal the body's metabolic changes.

Now there's another disturbing paper on a possible unintended effect of antioxidant therapy. Joan Brugge and her group at Harvard published last month on what happens to cells when they're detached from their normal environment. What's supposed to happen, everyone thought, is apoptosis, programmed cell death. Apoptosis, in fact, is supposed to be triggered most of the time when a cell detects that something has gone seriously wrong with its normal processes, and being detached from its normal signaling environment (and its normal blood supply) definitely qualifies. But cancer cells manage to dodge that difficulty, and since it's known that they also get around other apoptosis signals, it made sense that this was happening here, too.

But there have been some recent reports that cast doubt on apoptosis being the only route for detached cell death. This latest study confirms that, but goes on to a surprise. When this team blocked apoptotic processes, detached cells died anyway. A closer look suggested that the reason was, basically, starvation. The cells were deprived of nutrients after being dislocated, ran out of glucose, and that was that. This process could be stopped, though, if a known oncogene involved in glucose uptake (ERBB2) was activated, which suggests that one way cancer cells survive their travels is by keeping their fuel supply going.

So far, so good - this all fits in well with what we already know about tumor cells. But this study found that there was another way to keep detached cells from dying: give them antioxidants. (They used either N-acetylcysteine or a water-soluble Vitamin E derivative). It appears that oxidative stress is one thing that's helping to kill off wandering cells. On top of this effect, reactive oxygen species also seem to be inhibiting another possible energy source, fatty acid oxidation. Take away the reactive oxygen species, and the cells are suddenly under less pressure and have access to a new food source. (Here's a commentary in Nature that goes over all this in more detail, and here's one from The Scientist).

They went on to use some good fluorescence microscopy techniques to show that these differences in reactive oxygen species are found in tumor cell cultures. There are notable metabolic differences between the outer cells of a cultured tumor growth and its inner cells (the ones that can't get so much glucose), but that difference can be smoothed out by. . .antioxidants. The normal process is for the central cells in such growths to eventually die off (luminal clearance), but antioxidant treatment kept this from happening. Even more alarmingly, they showed that tumor cells expressing various oncogenes colonized an in vitro cell growth matrix much more effectively in the presence of antioxidants as well.

This looks like a very strong paper to me; there's a lot of work in it and a lot of information. Taken together, these results suggest a number of immediate questions. Is there something that shuts down normal glucose uptake when a cell is detached, and is this another general cell-suicide mechanism? How exactly does oxidative stress keep these cells from using their fatty acid oxidation pathway? (And how does that relate to normally positioned cells, in which fatty acid oxidation is actually supposed to kick in when glucose supplies go down?)

The biggest questions, though, are the most immediate: first, does it make any sense at all to give antioxidants to cancer patients? Right now, I'd very much have to wonder. And second, could taking antioxidants actually have a long-term cancer-promoting effect under normal conditions? I'd very much like to know that one, and so would a lot of other people.

After this and that exercise study, I'm honestly starting to think that oxidative stress has been getting an undeserved bad press over the years. Have we had things totally turned around?

Comments (43) + TrackBacks (0) | Category: Biological News | Cancer

September 8, 2009

Right Where You Want Them

Email This Entry

Posted by Derek

Imagine a drug molecule, and imagine it's a really good one. That is, it's made it out of the gut just fine, out into the bloodstream, and it's even slipped in through the membrane of the targeted cells. Now what?

Well, "cells are gels", as Arthur Kornberg used to say, and he was right. There's not a lot of bulk water sloshing around in there. It's all stuck to and sliding around with enzymes, structural proteins, carbohydrates, and the like, and that's what any drug molecule has to be able to do as well. And there's no particular reason for most of them to go anywhere particular inside the cell, once they're inside. They just diffuse around until they hit their targets, to which they stick (which is something they'd better do).

What if things didn't work this way? What if you could micro-inject your drug right into a particular cell compartment, or have it target a particular cell structure, instead of having to mist it all over the place? We now have a good answer to that question, but how much good it's going to do us drug discoverers is another thing entirely.

I'm referring to this paper from JACS, from a group at the University of Tokyo. They're targeting the important signaling enzyme PI3K. That's downstream of a lot of things, and in this case they used the PDGFR receptor in the cells, and a phosphorylated peptide that's a known ligand. To make the peptide go where they wanted, though, they further engineered both the ligand and the cells. The cells got modified by expression of dihydrofolate reductase (DHFR) in their plasma membranes, and the peptide ligand was conjugated to trimethoprim (TMP). TMP has a very strong association with DHFR, so this system was being used as an artificial targeting method. (It's as if the cell had been built up with hook-bearing Velcro on the inside of its plasma membrane, and the PI3K ligand was attached to a strip of the fuzzy side). Then, to see what was going on, they attached a fluorescent label to the peptide ligand as well.

Of course, this ligand-TMP-fluorescent fusion beast wasn't the best candidate for getting into a cell on its own, so the team microinjected it. And the results were dramatic. Normally, stimulating the PDGFR receptor in these cells led to downstream signaling in less than one minute. In cells that didn't have the DHFR engineered into their membranes, the fluorescent ligand could be seen diffusing through the whole cytosol, and giving a very weak PDGFR response. But in the cells with the targeting system built in, the ligand immediately seemed to stick to the inside of the plasma membrane, as planned, and a very robust, quick response was seen.
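
One back-of-the-envelope way to see why the tethered ligand performs so differently is an effective-concentration estimate. Everything in this sketch is an assumption for illustration - a roughly 2 pL cell and a 10 nm "capture radius" around the target - and none of it comes from the paper:

    from math import pi

    AVOGADRO = 6.022e23
    CELL_VOLUME_L = 2e-12        # roughly 2 pL -- a ballpark mammalian cell volume
    CAPTURE_RADIUS_M = 10e-9     # assume the tether keeps the ligand within ~10 nm
    local_volume_L = (4.0 / 3.0) * pi * CAPTURE_RADIUS_M**3 * 1000.0   # m^3 -> L

    def molar_conc(n_molecules, volume_l):
        return n_molecules / (AVOGADRO * volume_l)

    print(f"1 free molecule per cell   : {molar_conc(1, CELL_VOLUME_L):.1e} M")
    print(f"1 tethered molecule nearby : {molar_conc(1, local_volume_L):.1e} M")
    # Sub-picomolar when the molecule wanders the whole cell, a few hundred
    # micromolar when it's held next to its target: eight-plus orders of magnitude.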

The paper details a number of control experiments that I'm not going into here, and I invite the curious to read the whole thing. I'm convinced, though, that the authors are seeing what they hoped to see. In other words, ligands which aren't worth much when they have to diffuse around on their own can be real tigers when they're dragged directly to their site of action. It makes sense that this would be true, but it's nice to see it demonstrated for real. I'll quote the last paragraph of the paper, though, because that's where I have some misgivings:

In summary, we have demonstrated that it is feasible to rapidly and efficiently activate an endogenous signaling pathway by placing a synthetic ligand at a specific location within a cell. The strategy should be applicable to other endogenous proteins and pathways through the choice of appropriate ligand molecules. More significantly, this proof-of-principle study highlights the importance of controlling the subcellular locales of molecules in the design of new synthetic modulators of intracellular biological events. There might be a number of compounds (not only activators but also inhibitors) that have been dismissed but may acquire potent biological activities when they are endowed with subcellular-targeting functions. Our next challenge is to develop cell-permeable carriers capable of delivering cargo ligands to specifically defined regions or organelles inside cells.

Where they lost me was in pointing out how important this is in designing new compounds. The problem is, these are very artificial, highly engineered cells. Everything's been set up to make them do just what you want them to do. If you don't cause them to express boatloads of DHFR in their membrane, nothing works. So what lessons does this have for a drug discovery guy like me? I'm not targeting cells that have been striped with convenient Velcro patches.

And even if I find something endogenous that I can use, I can't make molecules that have to be delivered through the cell membrane by microinjection. You can see from the last sentence, though, that the authors realize that part as well. But that "next challenge" they speak of is more than enough to keep them occupied for the rest of their working lives. These kinds of experiments are important - they teach us a lot about cell biology, and there's sure a lot more of that to be learned. But the cells won't give up their secrets without a fight.

Comments (9) + TrackBacks (0) | Category: Biological News

August 20, 2009

Still Semaphoring, Even From the Bottom of the Swimming Pool

Email This Entry

Posted by Derek

It's hard to think of a more important class of drug targets than the G-protein coupled receptors (GPCRs). And back about fifteen years ago, I thought I had a reasonable understanding of how they worked. I was quite wrong, even given the standards of knowledge at the time, but since then the GPCR world has become gradually crazier and crazier.

The classic way of thinking about these receptors is that they live up on the cell surface, with part of the protein on the outside and part on the inside. The inside face is associated with various G-proteins, and the outside face has a binding site for some sort of signaling molecule. If the right molecule shows up and slots in the correct way into this binding cavity, the transmembrane helices of the protein rearrange, sliding around to change the shape and binding properties down there at the G-protein interface. This sets off some intracellular messaging - often by affecting levels of the messenger molecule cyclic-AMP. Thus is a signal from outside the cell relayed through the membrane to the inside.

Pretty nearly makes sense, doesn't it? Well, take a look at this new report from PLoS Biology. The authors rigged up living cells with a built-in fluorescent sensor system to monitor cAMP, and then studied the behavior of the thyroid-stimulating-hormone (TSH) receptor. That's a perfectly reasonable protein-ligand GPCR, but it turns out that it does things that are not (to us) perfectly reasonable.

This paper shows that when a TSH molecule binds, the receptor gets taken back down through the membrane into the cell. That's certainly a known process (internalization), and was thought to be a regulatory process, a standard method for taking a specific GPCR out of the signaling business. Some receptors seem to do this right after they're used, and of those, some of them later resurface and some are broken up. (Other types hang around for many cycles until they're somehow worn out). But the ones that internalize quickly still set off their intracellular message before they get pulled back down. That's their purpose in life.

The TSH receptor does that. But the weird part is that the authors saw the receptor internalize along with its G-protein partners, and then continue signaling from inside the cell. Not only that, this extra signaling behavior set off somewhat different responses as compared to the first "normal" burst, and seems to be a necessary part of the usual TSH signaling pathway. It's a very odd thought, if you're used to thinking about GPCRs - it's like finding out that your cell phone works when it's turned off.

Now this sort of behavior has been demonstrated for a different class of signaling proteins (the tyrosine kinase receptors). And even GPCRs have been found, over the last few years, to be capable of setting off a different signaling regime (the MAP kinase pathway) after they've been internalized. (That's one of the weird findings of recent years that I mentioned in the introductory paragraph, and we still don't know what to do with that one as far as drug discovery goes). But everyone agreed that at least the good ol' cyclic AMP pathway worked the way we thought it did, through signaling at the cell surface, and thank goodness there was something you could still count on in this world.

Hah. Now we're going to have to see how many other GPCRs show this kind of behavior, and under what circumstances, and why. It may well turn out to be different for different cells or for different signaling ligands, or only occur under certain conditions. And we'll have to see how this relates to the other strange things that are being unraveled about GPCR behavior - the way that they can dimerize, with themselves or even other receptors, out on the cell surface, and the way that some of them seem to work in an opposite-sign signaling regime (always on, until something turns them off). Do these things still signal from beneath the waves, too?

Oh, this will keep the receptor folks busy, as if they weren't already. And, as usual when something like this shows up, it should serve as a reminder to anyone who thinks that we understand even the well-worked-out parts of cell biology. Hah!

Comments (10) + TrackBacks (0) | Category: Biological News

August 18, 2009

Schematic Notation for Biology?

Posted by Derek

I see that there's a serious effort underway to standardize biochemical diagrams. About time! As a chemist, I don't mind admitting that I've been confused by many of these things over the years. As the current task force points out, one reason for that is that there are too many processes that all get drawn the same way: with a curved arrow. Enzymatic cleavage? Allosteric regulation? Product inhibition? Nucleic acid splicing? Enzyme activation? A curvy arrow should do nicely. And if the same scheme includes several of those phenomena at once, then we'll just use more arrows, making sure, of course, that they're all exactly the same size and style.

The new proposal seems to be based on the ideas behind electrical circuit diagrams and flow-chart conventions, and will attempt to convey information through several means (box shapes, arrow styles, etc.). I hope it, or something like it, actually catches on, although it'll take me a while to get used to translating it. Actually, what will take a while is getting used to the idea that biological diagrams are supposed to be imparting information at all. I've been trained in the other direction for too long.
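
Just to make the point concrete, here's a toy sketch (in Python, with made-up node names and interaction types - this is not the actual proposed notation) of what it means to give each kind of arrow its own explicit meaning instead of one generic curve:

```python
# A minimal sketch of the idea behind typed pathway notation: every edge carries an
# explicit interaction type instead of a generic curved arrow. The enum values and
# the example pathway are illustrative only, not the proposed standard.
from enum import Enum
from dataclasses import dataclass

class Interaction(Enum):
    CATALYSIS = "catalysis"
    INHIBITION = "inhibition"
    ACTIVATION = "activation"
    CLEAVAGE = "enzymatic cleavage"

@dataclass
class Edge:
    source: str
    target: str
    kind: Interaction

pathway = [
    Edge("kinase A", "enzyme B", Interaction.ACTIVATION),
    Edge("enzyme B", "substrate C", Interaction.CATALYSIS),
    Edge("product D", "kinase A", Interaction.INHIBITION),  # feedback, drawn distinctly
]

for e in pathway:
    print(f"{e.source} --[{e.kind.value}]--> {e.target}")
```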

Comments (17) + TrackBacks (0) | Category: Biological News

August 11, 2009

Dealing With Hedgehog Screening Results

Posted by Derek

I was looking over a paper in PNAS, where a group at Stanford describes finding several small molecules that inhibit Hedgehog signaling. That's a very interesting (and ferociously complex) area, and the more tools that are available to study it, the better.

But let me throw something out to those who have read (or will read) the paper. (Here's the PDF, which is open access). The researchers seem to have done a screen against about 125,000 compounds, and come up with four single-digit micromolar hits. Characterizing these against a list of downstream assays showed that each of these acts in a somewhat different manner on the Hedgehog pathway.

And that's fine - the original screen would have picked up a variety of mechanisms, and there certainly are a variety out there to be picked up. I can believe that a list of compounds would differentiate on closer inspection. What I keep looking for, though, is (first) a mention that these compounds were run through some sort of general screening panel for other enzyme and/or receptor activities. They did look for three different kinase activities that had been shown to interfere (and didn't see them), but I'd feel much better about using some new structures as probes if I'd run them through a big panel of secondary assays first.
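
For what it's worth, the kind of filter I have in mind is nothing fancy - something like this sketch, where the compound names, the panel targets, and the 50% cutoff are all invented for illustration:

```python
# A sketch of the selectivity check being argued for here: run each hit through a
# panel of unrelated assays and flag anything that lights up broadly. All values
# below are made up for illustration.
panel_results = {
    "hit-1": {"kinase X": 12, "GPCR Y": 8,  "protease Z": 5},   # % inhibition at 10 uM
    "hit-2": {"kinase X": 85, "GPCR Y": 90, "protease Z": 70},  # hits everything
}

def is_promiscuous(activities, cutoff=50.0, max_hits=1):
    """Flag a compound that inhibits more than max_hits unrelated targets above cutoff."""
    return sum(v >= cutoff for v in activities.values()) > max_hits

for cmpd, acts in panel_results.items():
    label = "flag before using as a probe" if is_promiscuous(acts) else "looks selective so far"
    print(cmpd, "->", label)
```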

Second, I've been looking for some indication that there might have been some structure-activity relationships observed. I assume that each of these compounds might well have been part of a series - so how did the related structures fare? Having a one-off compound doesn't negate the data, naturally, although it certainly does make it harder to build anything from the hit you've found. But SAR is another factor that I'd immediately look for after a screen, and it seems strange to me that I can't find any mention of it.

Have I missed these things, or are they just not there? If they aren't, is that a big deal, or not? Thoughts?

Comments (5) + TrackBacks (0) | Category: Biological News | Drug Assays

July 7, 2009

What's So Special About Ribose?

Posted by Derek

While we're on the topic of hydrogen bonds and computations, there's a paper coming out in JACS that attempts to answer an old question. Why, exactly, does every living thing on earth use so much ribose? It's the absolute, unchanging carbohydrate backbone to all the RNA on Earth, and like the other things in this category (why L amino acids instead of D?), it's attracted a lot of speculation. If you subscribe to the RNA-first hypothesis of the origins of life, then the question becomes even more pressing.

A few years ago, it was found that ribose, all by itself, diffuses through membranes faster than the other pentose sugars. This result holds up for several kinds of lipid bilayers, suggesting that it's not some property of the membrane itself that's at work. So what about the ability of the sugar molecules to escape from water and into the lipid layers?

Well, they don't differ much in logP, that's for sure, as the original authors point out. This latest paper finds, though, using molecular dynamics simulations, that there is something odd about ribose. In nonpolar environments, its hydroxy groups form a chain of hydrogen-bond-like interactions, particularly notable when it's in the beta-pyranose form. These aren't a factor in aqueous solution, and the other pentoses don't seem to pick up as much stabilization under hydrophobic conditions, either.

So ribose is happier inside the lipid layer than the other sugars, and thus pays less of a price for leaving the aqueous environment, and (both in simulation and experimentally) diffuses across membranes ten times as quickly as its closely related carbohydrate kin. (Try saying that five times fast!) This, as both the original Salk paper and this latest one note, leads to an interesting speculation on why ribose was preferred in the origins of life: it got there firstest with the mostest. (That's a popular misquote of Nathan Bedford Forrest's doctrine of warfare, and if he's ever come up before in a discussion of ribose solvation, I'd like to hear about it).
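
A quick back-of-the-envelope calculation shows why a tenfold difference in permeation doesn't require much of an energetic edge. In a simple solubility-diffusion picture, permeability goes with the membrane partition coefficient, which goes as exp(-ΔG/RT) - treat the numbers below as a rough sketch of the scale involved, not anything taken from the paper itself:

```python
# Back-of-the-envelope version of the argument: in a solubility-diffusion picture,
# permeability scales with the water-to-membrane partition coefficient, P ~ K * D / h,
# with K = exp(-dG_transfer / RT). Taking the roughly tenfold permeability difference
# at face value (and assuming similar diffusivities), the implied free-energy
# difference is modest.
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 310.0      # K

ratio = 10.0   # ribose vs. the other pentoses, roughly the reported difference
ddG = R * T * math.log(ratio)
print(f"A {ratio:.0f}x permeability ratio corresponds to only ~{ddG:.1f} kcal/mol "
      "of extra transfer stabilization - about the scale of one weak hydrogen bond")
```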

Comments (9) + TrackBacks (0) | Category: Biological News | In Silico | Life As We (Don't) Know It

June 22, 2009

Genzyme's Virus Problems

Posted by Derek

We organic chemists have it easy compared to the cell culture people. After all, our reactions aren't alive. If we cool them down, they slow down, and if we heat them up, they'll often pick up where they left off. They don't grow, they don't get infected, and they don't have to be fed.

Cells, though, are a major pain. You can't turn your back on 'em. Part of the problem is that there are, as yet, no cells that have evolved to grow in a dish or a culture bottle. Everything we do to them is artificial, and a lot of what we ask cultured cells to do is clearly not playing to their strengths. Ask Genzyme: they use the workhorse CHO (Chinese Hamster Ovary) cells to produce their biologics, but they've been having variable yield problems over the past few months. Now it turns out that their production facilities are infected with Vesivirus 2117 - I'd never heard of that one, but it interferes with CHO growth, and that's bringing Genzyme's workflow to a halt. (No one's ever reported human infection with that one, just to make that clear).

I assume that the next step is a complete, painstaking cleanup and decontamination. That's going to affect supplies of Cerezyme (imiglucerase) and Fabrazyme (agalsidase) late in the summer and into the fall, although it's not clear yet how long the outage will be. Any cell culture lab that's had to toss things due to mycoplasma or other nasties will sympathize, and shudder at the thought of cleaning things up on this scale.

Comments (21) + TrackBacks (0) | Category: Biological News | Drug Development

May 13, 2009

Exercise and Vitamins: Now, Wait A Minute. . .

Posted by Derek

Now, this is an example of an idea being followed through to its logical conclusion. Here’s where we start: the good effects of exercise are well known, and seem to be beyond argument. Among these are marked improvements in insulin resistance (the hallmark of type II diabetes) and glucose uptake. In fact, exercise, combined with losing adipose weight, is absolutely the best therapy for mild cases of adult-onset diabetes, and can truly reverse the condition, an effect no other treatment can match.

So, what actually causes these exercise effects? There has to be a signal (or set of signals) down at the molecular level that tells your cells what’s happening, and initiates changes in their metabolism. One good candidate is the formation of reactive oxygen species (ROS) in the mitochondria. Exercise most certainly increases a person’s use of oxygen, and increases the work load on the mitochondria (since that’s where all the biochemical energy is coming from, anyway). Increased mitochondrial formation of ROS has been well documented, and they have a lot of physiological effects.

Of course, ROS are also implicated in many theories of aging and cellular damage, which is why cells have several systems to try to soak these things up. That’s exactly why people take antioxidants, vitamin C and vitamin E especially. So. . .what if you take those while you’re exercising?

A new paper in PNAS asks that exact question. About forty healthy young male volunteers took part in the study, which involved four weeks of identical exercise programs. Half of the volunteers were already in athletic training, and half weren’t. Both groups were then split again, and half of each cohort took 1000 mg/day of vitamin C and 400 IU/day vitamin E, while the other half took no antioxidants at all. So, we have the effects of exercise, plus and minus previous training, and plus and minus antioxidants.
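
For clarity, here's that two-by-two layout in schematic form. The marker values below are invented placeholders, not the paper's data (the real endpoints were things like insulin sensitivity and PGC1 expression); the point is just what comparisons the design allows:

```python
# A sketch of the study's 2x2 layout: previously trained or not, antioxidants or not.
# The "change_in_marker" numbers are made up for illustration.
from statistics import mean

subjects = [
    # (trained_before, took_antioxidants, change_in_marker_after_4_weeks)
    (True,  False, 1.8), (True,  False, 1.6),
    (True,  True,  0.2), (True,  True,  0.1),
    (False, False, 2.3), (False, False, 2.1),
    (False, True,  0.3), (False, True,  0.2),
]

def group_mean(trained, vitamins):
    vals = [m for t, v, m in subjects if t == trained and v == vitamins]
    return mean(vals)

for trained in (True, False):
    for vitamins in (True, False):
        print(f"trained={trained!s:5} antioxidants={vitamins!s:5} "
              f"mean change = {group_mean(trained, vitamins):.2f}")
```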

And as it turns out, antioxidant supplements appear to cancel out many of the beneficial effects of exercise. Soaking up those transient bursts of reactive oxygen species keeps them from signaling. Looked at the other way, oxidative stress could be a key to preventing type II diabetes. Glucose uptake and insulin sensitivity aren't affected by exercise if you're taking supplementary amounts of vitamins C and E, and this effect is seen all the way down to molecular markers such as the PPAR coactivator proteins PGC1 alpha and beta. In fact, this paper seems to constitute strong evidence that ROS are the key mediators for the effects of exercise, and that this process is mediated through PGC1 and PPAR-gamma. (Note that PPAR-gamma is the target of the glitazone class of drugs for type II diabetes, although signaling in this area is notoriously complex).

Interestingly, exercise also increases the body's endogenous antioxidant systems - superoxide dismutase and so on. These are some of the gene targets of PPAR-gamma, suggesting that these are downstream effects. Taking antioxidant supplements kept these from going up, too. All these effects were slightly more pronounced in the group that hadn't been exercising before, but were still very strong across the board.

This confirms the suspicions raised by a paper from a group in Valencia last year, which showed that vitamin C supplementation seemed to decrease the development of endurance capacity during an exercise program. I think that there's enough evidence to go ahead and say it: exercise and antioxidants work against each other. The whole take-antioxidants-for-better-health idea, which has been taking some hits in recent years, has just taken another big one.

Comments (26) + TrackBacks (0) | Category: Aging and Lifespan | Biological News | Cardiovascular Disease | Diabetes and Obesity

May 1, 2009

Niacin, No Longer Red-Faced?

Posted by Derek

One of Merck’s less wonderful recent experiences was the rejection of Cordaptive, which was an attempt to make a niacin combination for the cardiovascular market. Niacin would actually be a pretty good drug to improve lipid profiles if people could stand to take the doses needed. But many people experience a burning, itchy skin flush that’s enough to make them give up on the stuff. And that’s too bad, because it’s the best HDL-raising therapy on the market. It also lowers LDL, VLDL, free fatty acids, and triglycerides, which is a pretty impressive spectrum. So it’s no wonder that Merck (and others) have tried to find some way to make it more tolerable.

A new paper suggests that everyone has perhaps been looking in the wrong place for that prize. A group at Duke has found that the lipid effects and the cutaneous flushing are mechanistically distinct, way back at the beginning of the process. There might be a new way to separate the two.

Niacin’s target seems to be the G-protein coupled receptor GPR109A – and, unfortunately, that receptor also appears to be involved in the flushing response, since both that and the lipid effects disappear if you knock out the receptor in a mouse model. The current model is that activation of the receptor produces the prostaglandin PGD2 (among other things), and that’s what does the skin flush, when it hits its own receptor later on. Merck’s approach to the side effect was to block the PGD2 receptor by adding an antagonist drug for it along with the niacin. But taking out the skin flush at that point means doing it at nearly the last possible step.

The Duke team has looked closely at the signaling of the GPR109A receptor and found that beta-arrestins are involved (they’ve specialized in this area over the last few years). The arrestins are proteins that modify receptor signaling through a variety of mechanisms, not all of which are well understood. We’ve known about signaling through the G-proteins for many years (witness the name of the whole class of receptors), but beta-arrestin-driven signaling is a sort of alternate universe. (GPCRs have been developing quite a few alternate universes – the field was never easy to understand, but it’s becoming absolutely baroque).

As it turns out, mice that are deficient in either beta-arrestin 1 or beta-arrestin 2 show the same lipid effects in response to niacin dosing as normal mice. But the mice lacking much of their beta-arrestin 1 protein show a really significant loss of the flushing response, suggesting that it’s mediated through that signaling pathway (as opposed to the “normal” G-protein one). And a known GPR109A ligand that doesn’t seem to cause so much skin flushing (MK-0354) fit the theory perfectly: it caused G-protein signaling, but didn’t bring in beta-arrestin 1.

So the evidence looks pretty good here. This all suggests that screening for compounds that hit the receptor but don’t activate the beta-arrestin pathway would take you right to the pharmacology you want. And I suspect that several labs are going to now put that idea to the test, since beta-arrestin assays are also being looked at in general. . .
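
In case that screening cascade sounds abstract, here's the sorting logic in miniature. The assay numbers and thresholds are made up; only the qualitative profiles follow the post:

```python
# The screening idea at the end of the post, schematically: keep compounds that
# activate the receptor's G-protein readout but recruit little or no beta-arrestin.
# Readout values and the 50%/20% thresholds are illustrative only.
compounds = {
    "niacin":       {"g_protein": 95, "arrestin": 80},  # full lipid effect, full flush
    "MK-0354-like": {"g_protein": 70, "arrestin": 10},  # the profile you want
    "inactive":     {"g_protein": 5,  "arrestin": 3},
}

def g_biased(readouts, g_min=50, arrestin_max=20):
    """Active in the G-protein assay, quiet in the arrestin-recruitment assay."""
    return readouts["g_protein"] >= g_min and readouts["arrestin"] <= arrestin_max

for name, r in compounds.items():
    print(name, "->", "advance" if g_biased(r) else "park")
```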

Comments (9) + TrackBacks (0) | Category: Biological News | Cardiovascular Disease | Toxicology

April 29, 2009

No MAGIC Involved

Posted by Derek

What a mess! Science has a retraction of a 2005 paper, which is always a nasty enough business, but in this case, the authors can’t agree on whether it should be retracted or not. And no one seems to be able to agree on whether the original results were real, and (even if they weren’t) whether the technique the paper describes works anyway. Well.

The original paper (free full text), from two Korean research groups, described a drug target discovery technique with the acronym MAGIC (MAGnetism-based Interaction Capture). It’s a fairly straightforward idea in principle: coat a magnetic nanoparticle with a molecule whose target(s) you’re trying to identify. Now take cell lines whose proteins have had various fluorescent tags put on them, and get the nanoparticles into them. If you then apply a strong magnetic field to the cells, the magnetic particles will be pulled around, and they’ll drag along whichever proteins have associated with your bait molecule. Watch the process under a microscope, and see which fluorescent spots move in which cells.
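
The readout, at least as described, boils down to asking which fluorescent spots get dragged along when the field comes on. Something like this sketch, with invented spot positions and an arbitrary displacement cutoff:

```python
# The claimed readout logic: apply the magnetic field and flag the tagged proteins
# whose fluorescent spots move along with the particles. Positions (in microns) and
# the 2-micron threshold are invented for illustration.
spots = {
    "protein A": ((10.0, 5.0), (14.5, 5.1)),   # (x, y) before and after the field
    "protein B": ((22.0, 8.0), (22.1, 7.9)),
}

def dragged(before, after, threshold_um=2.0):
    dx, dy = after[0] - before[0], after[1] - before[1]
    return (dx * dx + dy * dy) ** 0.5 > threshold_um

for name, (before, after) in spots.items():
    verdict = "moves with the particles -> candidate target" if dragged(before, after) else "stays put"
    print(name, "->", verdict)
```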

Papers were published (in both Science and Nature Chemical Biology), patent applications were filed (well, not in that order!), startup money was raised for a company to be called CGK. . .and then troubles began. Word was that the technique wasn’t reproducible. One of the authors (Yong-Weon Yi) asked that his name be removed from the publications, which was rather problematic of him, considering that he was also an inventor on the patent application. Early last year, investigations by the Korea Advanced Institute of Science and Technology (KAIST) came to the disturbing conclusion that the papers “do not contain any scientific truth”, and the journals flagged them.

The Nature Chemical Biology paper was retracted last July, but the Science paper has been a real rugby scrum, as the journal details here. The editorial staff seems to have been unable to reach one of the authors (Neoncheol Jung), and they still don’t know where he is. That’s disconcerting, since he’s still listed as the founding CEO of CGK. A complex legal struggle has erupted between the company and the KAIST about who has commercial rights to the technology, which surely isn’t being helped along by the fact that everyone is disagreeing about whether it works at all, or ever has. Science says that they’ve received parts of the KAIST report, which states that the authors couldn’t produce any notebooks or original data to support any of the experiments in the paper. This is Most Ungood, of course, and on top of that, two of the authors also appear to have stated that the key experiments (where they moved the fluorescent proteins around) were not carried out as the paper says. Meanwhile, everyone involved is now suing everyone else back in Korea for fraud, for defamation, and who knows what else. The target date for all this to be resolved is somewhere around the crack of doom.

Emerging from the fiery crater, CGK came up with another (very closely related) technique, which they published late last year in JACS. (If nothing else, everyone involved is certainly getting their work into an impressive list of journals. If only the papers wouldn’t keep sliding right back out. . .) That one has stood up so far, but it’s only April. I presume that the editorial staff at JACS asked for all kinds of data in support, but (as this whole affair shows) you can’t necessarily assume that everyone’s doing the job they’re supposed to do.

The new paper, most interestingly, does not reference the previous work at all, which I suppose makes sense on one level. But if you just came across it de novo, you wouldn't realize that people (at the same company!) had already been (supposedly) working on magnetic particle assays in living cells. Looking over this one and comparing it to the original Science paper, one of the biggest differences seems to be how the magnetic particles are made to expose themselves to the cytoplasm. The earlier work mentioned coating the particles with a fusogenic protein (TAT-HA2) that was claimed to help with this process; that step is nowhere to be found in the JACS work. Otherwise, the process looks pretty much identical to me.

Let’s come up for air, then, and ask how useful these ideas could be, stipulating (deep breath) that they work. Clearly, there’s some utility here. But I have to wonder how useful this protocol will be for general target fishing expeditions. Fluorescent labeling of proteins is indeed one of the wonders of the world (and was the subject of a recent, well-deserved Nobel prize). But not all proteins can be labeled without disturbing their function – and if you don’t know what the protein’s up to in the first place, you’re never sure if you’ve done something to perturb it when you add the glowing parts. There are also a lot of proteins, of course, to put it mildly, and if you don’t have any idea of where to start looking for targets, you still have a major amount of work to do. The cleanest use I can think of for these experiments is verifying (or ruling out) hypotheses for individual proteins.

But that's if it works. And at this point, who knows? I'll be very interested to follow this story, and to see if anyone else picks up this technique and gets it to work. Who's brave enough?

Comments (9) + TrackBacks (0) | Category: Biological News | Drug Assays | The Dark Side | The Scientific Literature

April 17, 2009

Genes to Diseases: Hard Work, You Say?

Posted by Derek

So I see that the headlines are that it’s proving difficult to relate gene sequences to specific diseases. (Here's the NEJM, free full-text). I can tell you that the reaction around the drug industry to this news is a weary roll of the eyes and a muttered “Ya don’t say. . .”

That’s because we put our money down early on the whole gene-to-disease paradigm, and in a big way. As I’ve written here before, there was a real frenzy in the industry back in the late 1990s as the genomics efforts started really revving up. Everyone had the fear that all the drug targets that ever were, or ever could be, were about to be discovered, annotated, patented – and licensed to the competition, who were out there fearless on the cutting edge, ready to leap into the future, while we (on the other hand) lounged around like dinosaurs looking sleepily at that big asteroidy thing up there in the sky.

No, that’s really how it felt. Every day brought another press release about another big genomics deal. The trains (all of them!) were loudly leaving the station. A lot of very expensive deals were cut, sometimes in great haste, but (as far as I can tell) they yielded next to nothing – at least in terms of drug candidates, or even real drug targets themselves.

So yeah, we’ve already had a very expensive lesson in how hard it is to associate specific gene sequences with specific diseases. The cases where you can draw a dark, clear line between the two increasingly look like exceptions. There are a lot of these (you can read about them in these texts), but they tend to affect small groups of people at a time. The biggest diseases (diabetes, cardiovascular in general, Alzheimer’s, most cancers) seem to be associated with a vast number of genetic factors, most of them fairly fuzzy, and hardly any of them strong enough on their own to make a big difference one way or another. Combine that with the nongenetic (or epigenetic) factors like nutrition, lifestyle, immune response, and so on, and you have a real brew.

On that point, I like E. O. Wilson’s metaphor for nature versus nurture. He likened a person’s genetic inheritance to a photographic negative. Depending on how it’s developed and printed, the resulting picture can turn out a lot of different ways – but there’s never going to be more than was in there to start with. (These days, I suppose that we’re going to have to hunt for another simile – Photoshop is perhaps a bit too powerful to let loose inside that one).

But I've been talking mostly about variations in proteins as set by their corresponding DNA sequences. The real headscratcher has been this:

One observation that has taken many observers by surprise is that most loci that have been discovered through genomewide association analysis do not map to amino acid changes in proteins. Indeed, many of the loci do not even map to recognizable protein open reading frames but rather may act in the RNA world by altering either transcriptional or translational efficiency. They are thus predicted to affect gene expression. Effects on expression may be quite varied and include temporal and spatial effects on gene expression that may be broadly characterized as those that alter transcript levels in a constitutive manner, those that modulate transcript expression in response to stimuli, and those that affect splicing.

That's really going to be a major effort to understand, because we clearly don't understand it very well now. RNA effects have been coming on for the last ten or fifteen years as a major factor in living systems that we really weren't aware of, and it would be foolish to think that the last fireworks have gone off.

Comments (27) + TrackBacks (0) | Category: Biological News | Drug Industry History

March 26, 2009

The Motions of a Protein

Posted by Derek

So, people like me spend their time trying to make small molecules that will bind to some target protein. So what happens, anyway, when a small molecule binds to a target protein? Right, right, it interacts with some site on the thing, hydrogen bonds, hydrophobic interactions, all that – but what really happens?

That’s surprisingly hard to work out. The tools we have to look at such things are powerful, but they have limitations. X-ray crystal structures are great, but can lead you astray if you’re not careful. The biggest problem with them, though (in my opinion) is that you see this beautiful frozen picture of your drug candidate in the protein, and you start to think of the binding as. . .well, as this beautiful frozen picture. Which is the last thing it really is.

Proteins are dynamic, to a degree that many medicinal chemists have trouble keeping in mind. Looking at binding events in solution is more realistic than looking at them in the crystal, but it’s harder to do. There are various NMR methods (here's a recent review), some of which require specially labeled protein to work well, but they have to be interpreted in the context of NMR’s time scale limitations. “Normal” NMR experiments give you time-averaged spectra – if you want to see things happening quickly, or if you want to catch snapshots of the intermediate states along the way, you have a lot more work to do.

Here’s a recent paper that’s done some of that work. They’re looking at a well-known enzyme, dihydrofolate reductase (DHFR). It’s the target of methotrexate, a classic chemotherapy drug, and of the antibiotic trimethoprim. (As a side note, that points out the connections that sometimes exist between oncology and anti-infectives. DHFR produces tetrahydrofolate, which is necessary for a host of key biosynthetic pathways. Inhibiting it is especially hard on cells that are spending a lot of their metabolic energy on dividing – such as tumor cells and invasive bacteria).

What they found was that both inhibitors do something similar, and it affects the whole conformational ensemble of the protein:

". . .residues lining the drugs retain their μs-ms switching, whereas distal loops stop switching altogether. Thus, as a whole, the inhibited protein is dynamically dysfunctional. Drug-bound DHFR appears to be on the brink of a global transition, but its restricted loops prevent the transition from occurring, leaving a “half-switching” enzyme. Changes in pico- to nanosecond (ps-ns) backbone amide and side-chain methyl dynamics indicate drug binding is “felt” throughout the protein."

There are implications, though, for apparently similar compounds having rather different effects out in the other loops:

". . .motion across a wide range of timescales can be regulated by the specific nature of ligands bound. Occupation of the active site by small ligands of different shapes and physical characteristics places differential stresses on the enzyme, resulting in differential thermal fluctuations that propagate through the structure. In this view, enzymes, through evolution, develop sensitivities to ligand properties from which mechanisms for organizing and building such fluctuations into useful work can arise. . .Because the affected loop structures are primarily not in contact with drug, it is reasonable to envision inhibitory small-molecule drugs that act by allosterically modulating dynamic motions."

There are plenty of references in the paper to other investigations of this kind, so if this is your sort of thing, you'll find plenty of material there. One thing to take home, though, is to remember that not only are proteins mobile beasts (with and without ligand bound to them), but that this mobility is quite different in each state. And keep in mind that the ligand-bound state can be quite odd compared to anything else the protein experiences otherwise. . .

Comments (3) + TrackBacks (0) | Category: Biological News | Cancer | Chemical News | In Silico

March 24, 2009

Grabbing Onto A Protein's Surface

Posted by Derek

I’ve written here before about the "click" triazole chemistry that Barry Sharpless’s group has pioneered out at Scripps. This reaction has been finding a lot of uses over the last few years (try this category for a few, and look for the word "click"). One of the facets I find most interesting is the way that they’ve been able to use this Huisgen acetylene/azide cycloaddition reaction to form inhibitors of several enzymes in situ, just by combining suitable coupling partners in the presence of the protein. Normally you have to heat that reaction up quite a bit to get it to go, but when the two reactants are forced into proximity inside the protein, the rate speeds up enough to detect a product.

Note that I said “inside the protein”. My mental picture of these things has involved binding-site cavities where the compounds are pretty well tied down. But a new paper from Jim Heath’s group at Cal Tech, collaborating with Sharpless and his team, demonstrates something new. They’re now getting this reaction to work out on protein surfaces, and in the process making what are basically artificial antibody-type binding agents.

To start with, they prepared a large library of hexapeptides out of the unnatural D-amino acids, in a one-bead-one-compound format. (Heath’s group has been working in this area for a while, and has experience dealing with these - see this PDF presentation for an overview of their research). Each peptide had an acetylene-containing amino acid at one end, for later use. They exposed these to a protein target: carbonic anhydrase II, the friend of every chemist who’s trying to make proteins do unusual things. The oligopeptide that showed the best binding to the protein’s surface was then incubated with the target CA II protein and another library of diverse hexapeptides. These had azide-containing amino acids at both ends, and the hope was that some of these would come close enough, in the presence of the protein, to react with the anchor acetylene peptide.

Startlingly, this actually worked. A few of the azide oligopeptides did do the click triazole-forming reaction. And the ones that worked all had related sequences, strongly suggesting that this was no fluke. What impresses me here is that (1) these things were lying on top of the protein, picking up what interactions they could, not buried inside a more restrictive binding site, and (2) the click reaction worked even though the binding constants of the two partners must not have been all that impressive. The original acetylene hexapeptide, in fact, bound at only 500 micromolar, and the other azide-containing hexapeptides that reacted with them were surely in the same ballpark.

The combined beast, though, (hexapeptide-triazole-hexapeptide) was a 3 micromolar compound. And then they took the thing through another round of the same process, decorating the end with a reactive acetylene and exposing it to the same azide oligopeptide library in the presence of the carbonic anhydrase target. The process worked again, generating a new three-oligopeptide structure which now showed 50 nanomolar binding. This increase in affinity over the whole process is impressive, but it’s just what you’d expect as you start combining pieces that have some affinity on their own. Importantly, when they made a library on beads by coupling the whole list of azide-containing hexapeptides with the biligand (through the now-standard copper-catalyzed reaction), the target CA II protein picked out the same sequences that were generated by the in situ experiment.
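
Those affinity jumps look less mysterious if you convert them to binding free energies (ΔG = RT ln Kd). Here's that arithmetic, using the Kd values quoted above - each new arm only has to chip in a few kcal/mol:

```python
# The affinity at each round, converted to a binding free energy (dG = RT ln Kd,
# standard state 1 M). The Kd values are the ones quoted in the post; the point is
# just that each added arm contributes a modest increment.
import math

R, T = 1.987e-3, 298.0          # kcal/(mol*K), K
kd_values = {"anchor hexapeptide": 500e-6, "biligand": 3e-6, "triligand": 50e-9}

prev = None
for name, kd in kd_values.items():
    dG = R * T * math.log(kd)   # kcal/mol; more negative = tighter binding
    gain = "" if prev is None else f"  (gain ~{prev - dG:.1f} kcal/mol)"
    print(f"{name:20s} Kd = {kd:.1e} M   dG = {dG:5.1f} kcal/mol{gain}")
    prev = dG
```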

So what you have, in the end, is a short protein-like thing (actually three small peptides held together by triazole linkers) that has been specifically raised to bind a protein target – thus the comparison to antibodies above. What we don't know yet, of course, is just how this beast is binding to the carbonic anhydrase protein. It would appear to be stretched across some non-functional surface, though, because the triligand didn't seem to interfere with the enzyme's activity once it was bound. I'd be very interested in seeing if an X-ray structure could be generated for the triligand complex or any of the others. Heath's group is now apparently trying to generate such agents for other proteins and to develop assays based on them. I look forward to seeing how general the technique is.

This result makes a person wonder if the whole in situ triazole reaction could be used to generate inhibitors of protein-protein interactions. Doing that with small molecules is quite a bit different than doing it with hexapeptide chains, of course, but there may well be some hope. And there's another paper I need to talk about that bears on the topic; I'll bring that one up shortly. . .

Comments (7) + TrackBacks (0) | Category: Biological News | Chemical News

March 4, 2009

Gene Expression: You Haven't Been Thinking Big Enough?

Posted by Derek

Well, here’s another crack at open-source science. Stephen Friend, the previous head of Rosetta (before and after being bought by Merck), is heading out on his own to form a venture in Seattle called Sage. The idea is to bring together genomic studies from all sorts of laboratories into a common format and database, with the expectation that interesting results will emerge that couldn’t be found from just one lab’s data.

I’ll be interested to see if this does yield something worthwhile – in fact, I’ll be interested to see if it gets off the ground at all. As I’ve discussed before, the analogy with open-source software doesn’t hold up so well with most scientific research these days, since the entry barriers (facilities, equipment, and money) are significantly higher than they are in coding. Look at genomics – the cost of sequencing has been dropping, for sure, but it’s still very expensive to get into the game. That lowered cost is measured per base sequenced – today’s technology means that you sequence more bases, which means that the absolute cost hasn’t come down as much as you might think. I’m sure you can get ten-year-old equipment cheap, but it won’t let you do the kind of experiments you might want to do, at least not in the time you’ll be expected to do them in.
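
The per-base versus per-project distinction is worth a quick bit of arithmetic - made-up numbers, but the shape of the problem is right:

```python
# Illustrative only: a big drop in cost per base doesn't translate into an equally
# big drop in project cost if today's experiments require many more bases.
old = {"cost_per_base": 1.0e-2, "bases": 1e6}   # older-style project (invented numbers)
new = {"cost_per_base": 1.0e-4, "bases": 5e7}   # current-style project (invented numbers)

for label, run in (("old-style project", old), ("current project", new)):
    print(f"{label}: ${run['cost_per_base'] * run['bases']:,.0f}")
# A 100x cheaper base, but only about a 2x cheaper experiment.
```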

But even past that issue, once you get down to the many labs that can do high-level genomics (or to the even larger number that can do less extensive sequencing), the problems will be many. Sage is also going to look at gene expression levels, something that's easier to do (although we're still not in weekend-garage territory yet). Some people would say that it's a bit too easy to do: there are a lot of different techniques in this field, not all of which always yield comparable data, to put it mildly. There have been several attempts to standardize things, along with calls for more control experiments, but getting all these numbers together into a useful form will still not be trivial.

Then you've got the really hard issues: intellectual property, for one. If you do discover something by comparing all these tissues from different disease states, who gets to profit from it? Someone will want to, that's for sure, and if Sage itself isn't getting a cut, how will they keep their operation going? Once past that question (which is a whopper), and past all the operational questions, there's an even bigger one: is this approach going to tell us anything we can use at all?

At first thought, you'd figure that it has to. Gene sequences and gene expression are indeed linked to disease states, and if we're ever going to have a complete understanding of human biology, we're going to have to know how. But. . .we're an awful long way from that. Look at the money that's been poured into biomarker development by the drug industry. A reasonable amount of that has gone into gene expression studies, trying to find clear signs and correlations with disease, and it's been rough sledding.

So you can look at this two ways: you can say fine, that means that the correlations may well be there, but they're going to be hard to find, so we're going to have to pool as much data as possible to do it. Thus Sage, and good luck to them. Or the systems may be so complex that useful correlations may not even be apparent at all, at least at our current level of understanding. I'm not sure which camp I fall into, but we'll have to keep making the effort in order to find out who's right.

Comments (16) + TrackBacks (0) | Category: Biological News | Drug Development

November 11, 2008

Wash Your Tubes; Mess Up Your Data

Posted by Derek

I wrote a while back about the problem of compounds sticking to labware. That sort of thing happens more often than you’d think, and it can really hose up your assay data in ways that will send you running around in circles. Now there’s a report in Science of something that’s arguably even worse. (Here's a good report on it from Bloomberg, one of the few to appear in the popular press).

The authors were getting odd results in an assay with monoamine oxidase B enzyme, and tracked it down to two compounds leaching out of the disposable plasticware (pipette tips, assay plates, Eppendorf vials, and so on). Oleamide is used as a “slip agent” to keep the plastic units from sticking to each other, but it’s also a MAO-B inhibitor. Another problem was an ammonium salt called DiHEMDA, which is put in as a general biocide – and it appears to be another MAO-B inhibitor.

Neither of them are incredibly potent, but if you’re doing careful kinetic experiments or the like, it’s certainly enough to throw things off. The authors found that just rinsing water through various plastic vessels was enough to turn the solution into an enzyme inhibitor. Adding organic solvents (10% DMSO, methanol) made the problem much worse; presumably these extract more contaminants.

And it’s not just this one enzyme. They also saw effects on a radioligand binding assay to the GABA-A receptor, and they point out that the biocides used are known to show substantial protein and DNA binding. These things could be throwing assay data around all over the place – and as we work in smaller and smaller volumes, with more complex protocols, the chances of running into trouble increase.

What to do about all this? Well, at a minimum, people should be sure to run blank controls for all their assays. That’s good practice, but sometimes it gets skipped over. This effect has probably been noted many times before as some sort of background noise in such controls, and many times you should be able to just subtract it out. But there are still many experiments where you can’t get away from the problem so easily, and it’s going to make your error bars wider no matter what you do about it. There are glass inserts for 96-well plates, and there are different plastics from different manufacturers. But working your way through all that is no fun at all.
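
The simplest version of that blank control is just to ask how much activity you lose to buffer that has merely been through the plasticware, before any test compound shows up. A sketch, with invented signal values:

```python
# The "run blank controls" point in its simplest form: compare enzyme activity in
# clean buffer against buffer that has been rinsed through the plasticware.
# Signal values are made up for illustration.
def plasticware_effect(activity_clean_buffer, activity_plastic_rinsed_buffer):
    """Fraction of enzyme activity lost to leachables alone (no test compound present)."""
    return 1.0 - activity_plastic_rinsed_buffer / activity_clean_buffer

loss = plasticware_effect(activity_clean_buffer=10000, activity_plastic_rinsed_buffer=8200)
print(f"~{100 * loss:.0f}% of the signal window disappears before any compound is added")
```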

As an aside, this sort of thing might still make it into the newspapers, since there have been a lot of concerns about bisphenol A and other plastic contaminants. In this case, I think the problem is far greater for lab assays than it is for human exposures. I’m not so worried about things like oleamide, since these are found in the body anyway, and can easily be metabolized. The biocides might be a different case, but I assume that we’re loaded with all kinds of substances, almost all of them endogenous, that are better inhibitors of enzymes like MAO-B. And at any rate, we’re exposed to all kinds of wild stuff at low levels, just from the natural components of our diet. Our livers are there to deal with just that sort of thing, but that said, it’s always worth checking to make sure that they’re up to the job.

Comments (10) + TrackBacks (0) | Category: Biological News | Drug Assays

November 7, 2008

Systems Biology: Ready, or Not?

Posted by Derek

Systems biology – depending on your orientation, this may be a term that you haven’t heard yet, or one from the cutting edge of research, or something that’s already making you roll your eyes at its unfulfilled promise. There’s a good spread of possible reactions.

Broadly, I’d say that the field is concerned with trying to model the interactions of whole biological systems, in an attempt to come up with some explanatory power. It’s the sort of thing that you could only imagine trying to do with modern biological and computational techniques, but whether these are up to the job is still an open question. This gets back to a common theme that I stress around here, that biochemical networks are hideously, inhumanly complex. There’s really no everyday analogy that works to describe what they’re like, and if you think you really understand them, then you’re in the same position as all those financial people who thought they understood their exposure to mortgage-backed security risks.

You’ll have this enzyme, you see, that phosphorylates another enzyme, which increases its activity. But that product of that second enzyme inhibits another enzyme that acts to activate the first one, and each of them also interacts with fourteen (or forty-three) others, some of which are only expressed under certain conditions that we don’t quite understand, or are localized in the cell in patterns that aren’t yet clear, and then someone discovers a completely new enzyme in the middle of the pathway that makes hash out of what we thought we knew about. . .
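
If you want to see how quickly intuition gives out, here's a toy version of exactly that loop - the first enzyme activates the second, the second one's product shuts down a third, and the third is what activates the first - with every rate constant pulled out of the air. Even this cartoon has to be integrated numerically before you know what it does:

```python
# A toy three-enzyme feedback loop, integrated with plain Euler steps. All rate
# constants and the Hill exponent are invented; the only point is that even a loop
# this small can overshoot and relax in ways that are hard to predict from the arrows.
def simulate(steps=2000, dt=0.01):
    e1, e2, p, e3 = 0.1, 0.1, 0.0, 1.0
    history = []
    for i in range(steps):
        de1 = 1.0 * e3 - 0.5 * e1                        # E1 activated by E3
        de2 = 1.0 * e1 - 0.5 * e2                        # E2 activated (phosphorylated) by E1
        dp  = 1.0 * e2 - 0.3 * p                         # P is the product of E2
        de3 = 1.0 / (1.0 + (p / 0.5) ** 4) - 0.5 * e3    # E3 inhibited by P
        e1, e2, p, e3 = (e1 + dt * de1, e2 + dt * de2, p + dt * dp, e3 + dt * de3)
        if i % 400 == 0:
            history.append((round(i * dt, 1), round(e1, 2), round(p, 2)))
    return history

for t, e1, p in simulate():
    print(f"t={t:5}  E1 activity={e1:5}  product P={p:5}")
```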

So my first test for listening to systems biology people is whether they approach things with the proper humility. There’s a good article in Nature on the state of the field, which does point out that some of the early big-deal-big-noise articles in the field alienated many potential supporters through just this effect. But work continues, and a lot of drug companies are putting money into it, under the inarguable “we need all the help we can get” heading.

One of the biggest investors has been Merck, a big part of that being their purchase a few years ago of Rosetta Inpharmatics. That group published an interesting paper earlier this year (also in Nature) on some of the genetic underpinnings of metabolic disease. A phrase from the article's abstract emphasizes the difficulties of doing this work: "Our analysis provides direct experimental support that complex traits such as obesity are emergent properties of molecular networks that are modulated by complex genetic loci and environmental factors." Yes, indeed.

But here’s a worrisome thing that didn’t make the article: Merck recently closed the Seattle base of the Rosetta team, in its latest round of restructuring and layoffs. One assumes that many of them are being transitioned to the Merck mothership, and that the company is still putting money into this approach, but there is room to wonder. (Update: here's an article on this very subject). There is this quote from the recent overview:

Stephen Friend, Merck's vice-president for oncology, thinks that any hesitancy will be overcome when the modelling becomes so predictive that the toxicity and efficacy of a potential drug can be forecast very accurately even before an experimental animal is brought out of its cage. "The next three to five years will provide a couple such landmark predictions and wake everyone up," he says.

Well, we’ll see if he’s right about that timeframe, and I hope he is. I fear that the problem is one of those that appears large, and as you get closer to it, does nothing but get even larger. My opinion, for what it’s worth, is that it’s very likely too early to be able to come up with any big insights from the systems approach. But I can’t estimate the chances that I’m wrong about that, and the potential payoffs are large. For now, I think the best odds are in the smaller studies, narrowing down on single targets or signaling networks. That cuts down on the possibility that you’re going to find something revolutionary, but it increases the chance that anything you find is actually real. Talk of “virtual cells” and “virtual genomes” is, to my mind, way premature, and anyone who sells the technology in those terms should, I think, be regarded with caution.

But that said, any improvement is a big one. Our failure rates due to tox and efficacy problems are so horrendous that just taking some of these things down 10% (in real terms) would be a startling breakthrough. And we’re definitely not going to get this approach to work if we don’t plow money and effort into it; it’s not going to discover itself. So press on, systems people, and good luck. You’re going to need it; we all do.

Comments (33) + TrackBacks (0) | Category: Biological News

October 31, 2008

Fructose In The Brain?

Posted by Derek

Let’s talk sugar, and how you know if you’ve eaten enough of it. Just in time for Halloween! This is a field I’ve done drug discovery for in the past, and it’s a tricky business. But some of the signals are being worked out.

Blood glucose, as the usual circulating energy source in the body, is a good measure of whether you’ve eaten recently. If you skip a meal (or two), your body will start mobilizing fatty acids from your stored supplies and circulating them as fuel. But there’s one organ that runs almost entirely on sugar, no matter what the conditions: the brain. Even if you’re fasting, your liver will make sugar from scratch for your brain to use.

And as you’d expect, brain glucose levels are one mechanism the body uses to decide whether to keep eating or not. A cascade of enzyme signals has been worked out over the years, and the current consensus seems to be that high glucose in the brain inactivates AMP kinase (AMPK). (That’s a key enzyme for monitoring the energy balance in the brain – it senses differences in concentration between ATP, the energy currency inside every cell, and its product and precursor, AMP). Losing that AMPK enzyme activity then removes the brakes on the activity of another enzyme, acetyl CoA-carboxylase (ACC). (That one’s a key regulator of fatty acid synthesis – all this stuff is hooked together wonderfully). ACC produces malonyl-CoA, and that seems to be a signal to the hypothalamus of the brain that you’re full (several signaling proteins are released at that point to spread the news).

You can observe this sort of thing in lab rats – if you infuse extra glucose into their brains, they stop eating, even under conditions when they otherwise would keep going. A few years ago, an odd result was found when this experiment was tried with fructose: instead of lowering food intake, infusing fructose into the central nervous system made the animals actually eat more. That’s not what you’d expect, since in the end, fructose ends up metabolized to the same thing as glucose does (pyruvate), and used to make ATP. So why the difference in feeding signals?

A paper in PNAS (open access PDF) from a team at Johns Hopkins and Ibaraki University in Japan now has a possible explanation. Glucose metabolism is very tightly regulated, as you’d expect for the main fuel source of virtually every living cell. But fructose is a different matter. It bypasses the rate-limiting step of the glucose pathway, and is metabolized much more quickly than glucose is. It appears that this fast (and comparatively unregulated) process actually uses up ATP in the hypothalamus – you’re basically revving up the enzyme machinery early in the pathway (ketohexokinase in particular) so much that you’re burning off the local ATP supply to run it.

Glucose, on the other hand, causes ATP levels in the brain to rise – which turns down AMPK, which turns up ACC, which allows malonyl-CoA to rise, and turns off appetite. But when ATP levels fall, AMPK is getting the message that energy supplies are low: eat, eat! Both the glucose and fructose effects on brain ATP can be seen at the ten-minute mark and are quite pronounced at twenty minutes. The paper went on to look at the activities of AMPK and ACC, the resulting levels of malonyl CoA, and everything was reversed for fructose (as opposed to glucose) right down the line. Even expression of the signaling peptides at the end of the process looks different.
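
Reduced to a cartoon, the logic of those last two paragraphs looks like this (arbitrary thresholds, and real AMPK signaling is of course nothing like this clean):

```python
# The qualitative signaling chain from the post: ATP up -> AMPK off -> ACC on ->
# malonyl-CoA up -> stop eating; ATP down runs the chain the other way.
# The baseline and the infusion values are arbitrary illustrations.
def satiety_signal(brain_atp, baseline=1.0):
    ampk_active = brain_atp < baseline          # AMPK senses a falling energy supply
    acc_active = not ampk_active                # AMPK inhibits ACC
    malonyl_coa_high = acc_active               # ACC produces malonyl-CoA
    return "stop eating" if malonyl_coa_high else "keep eating"

print("glucose infusion (ATP rises): ", satiety_signal(brain_atp=1.3))
print("fructose infusion (ATP falls):", satiety_signal(brain_atp=0.7))
```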

The implications for human metabolism are clear: many have suspected that fructose could in fact be doing us some harm. (This New York Times piece from 2006 is a good look at the field: it's important to remember that this is very much an open question). But metabolic signaling could be altered by using fructose as an energy source over glucose. The large amount of high-fructose corn syrup produced and used in the US and other industrialized countries makes this an issue with very large political, economic, and public health implications.

This paper is compelling story – so, what are its weak points? Well, for one thing, you’d want to make sure that those fructose-metabolizing enzymes are indeed present in the key cells in the hypothalamus. And an even more important point is that fructose has to get into the brain. These studies were dropping it in directly through the skull, but that’s not how most people drink sodas. For this whole appetite-signaling hypothesis to work in the real world, fructose taken in orally would have to find its way to the hypothalamus. There’s some evidence that this is the case, but that fructose would have to find its way past the liver first.

On the other hand, it could be that this ATP-lowering effect could also be taking place in liver cells, and causing some sort of metabolic disruption there. AMPK and ACC are tremendously important enzymes, with a wide range of effects on metabolism, so there's a lot of room for things to happen. I should note, though, that activation of AMPK out in the peripheral tissues is thought to be beneficial for diabetics and others - this may be one route by which Glucophage (metformin) works. (Now some people are saying that there may be more than one ACC isoform out there, bypassing the AMPK signaling entirely, so this clearly is a tangled question).

I’m sure that a great deal of effort is now going into working out these things, so stay tuned. It's going to take a while to make sure, but if things continue along this path, there could be reasons for a large change in the industrialized human diet. There are a lot of downstream issues - how much fructose people actually consume, for one, and the problem of portion size and total caloric intake, no matter what form it's in, for another. So I'm not prepared to offer odds on a big change, but the implications are large enough to warrant a thorough check.

Update: so far, no one has been able to demonstrate endocrine or satiety differences in humans consuming high-fructose corn syrup vs. the equivalent amount of sucrose. See here, here, and here.

Comments (22) + TrackBacks (0) | Category: Biological News | Diabetes and Obesity | The Central Nervous System

October 9, 2008

More Glowing Cells: Chemistry Comes Through Again

Posted by Derek

I’ve spoken before about the acetylene-azide “click” reaction popularized by Barry Sharpless and his co-workers out at Scripps. This has been taken up by the chemical biology field in a big way, and all sorts of ingenious applications are starting to emerge. The tight, specific ligation reaction that forms the triazole lets you modify biomolecules with minimal disruption (by hanging an azide or acetylene from them, both rather small groups), and tag them later on in a very controlled way.

Adrian Salic and co-worker Cindy Yao have just reported an impressive example. They’ve been looking at ethynyluracil (EU), the acetylene-modified form of the ubiquitous nucleotide found in RNA. If you feed this to living organisms, they take it up just as if it were uracil, and incorporate it into their RNA. (It’s uracil-like enough to not be taken up into DNA, as they’ve shown by control experiments). Exposing cells or tissue samples later on to a fluorescent-tagged azide (and the copper catalyst needed for quick triazole formation) lets you light up all the RNA in sight. You can choose the timing, the tissue, and your other parameters as you wish.

For example, Salic and Yao have exposed cultured cells to EU for varying lengths of time, and watched the time course of transcription. Even ten minutes of EU exposure is enough to see the nuclei start to light up, and a half hour clearly shows plenty of incorporation into RNA, with the cytoplasm starting to show as well. (The signal increases strongly over the first three hours or so, and then more slowly).

Isolating the RNA and looking at it with LC/MS lets you calibrate your fluorescence assays, and also check to see just how much EU is getting taken up. Overall, after a 24-hour exposure to the acetylene uracil, it looks like about one out of every 35 uracils in the total RNA content has been replaced with the label. There’s a bit less in the RNA species produced by the RNAPol1 enzyme as compared to the others, interestingly.

There are some other tricks you can run with this system. If you expose the cells for 3 hours, then wash the EU out of the medium and let them continue growing under normal conditions, you can watch the labeled RNA disappear as it turns over. As it turns out, most of it drops out of the nucleus during the first hour, while the cytoplasmic RNA seems to have a longer lifetime. If you expose the cells to EU for 24 hours, though, the nuclear fluorescence is still visible – barely – after 24 hours of washout, but the cytoplasmic RNA fluorescence never really goes away at all. There seems to be some stable RNA species out there – what exactly that is, we don’t know yet.
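
The simplest way to frame those washout results is as two labeled RNA pools decaying with very different half-lives. The half-lives in this sketch are placeholders chosen to mimic the qualitative picture (fast nuclear loss, a very slow cytoplasmic pool), not fitted numbers from the paper:

```python
# Two-pool exponential decay of the labeled RNA after washout. Half-lives are
# illustrative placeholders, not measured values.
import math

def remaining(t_hours, half_life_hours):
    return math.exp(-math.log(2) * t_hours / half_life_hours)

for t in (1, 4, 24):
    nuc = remaining(t, half_life_hours=1.0)     # most nuclear signal gone within an hour
    cyt = remaining(t, half_life_hours=40.0)    # cytoplasmic pool barely budges
    print(f"t = {t:2d} h   nuclear label left: {nuc:4.0%}   cytoplasmic: {cyt:4.0%}")
```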

Finally, the authors tried this out on whole animals. Injecting a mouse with EU and harvesting organs five hours later gave some very interesting results. It worked wonderfully - whole tissue slices could be examined, as well as individual cells. Every organ they checked showed nuclear staining, at the very least. Some of the really transcriptionally active populations (hepatocytes, kidney tubules, and the crypt cells in the small intestine) were lit up very brightly indeed. Oddly, the most intense staining was in the spleen. What appear to be lymphocytes glowed powerfully, but other areas next to them were almost completely dark. The reason for this is unknown, and that’s very good news indeed.

That’s because when you come up with a new technique, you want it to tell you things that you didn’t know before. If it just does a better or more convenient job of telling you what you could have found out, that’s still OK, but it’s definitely second best. (And, naturally, if it just tells you what you already knew with the same amount of work, you’ve wasted your time). Clearly, this click-RNA method is telling us a lot of things that we don’t understand yet, and the variety of experiments that can be done with it has barely been sampled.

Closely related to this work is what’s going on in Carolyn Bertozzi’s lab in Berkeley. She’s gone a step further, getting rid of the copper catalyst for the triazole-forming reaction by ingeniously making strained, reactive acetylenes. They’ll spontaneously react if they see a nearby azide, but they’re still inert enough to be compatible with biomolecules. In a recent Science paper, her group reports feeding azide-substituted galactosamine to developing zebrafish. That amino sugar is well known to be used in the synthesis of glycoproteins, and the zebrafish embryos seemed to have no problem accepting the azide variant as a building block.

And they were able to run these same sorts of experiments – exposing the embryos to different concentrations of azido sugar, for different times, with different washout periods before labeling – and all of this gave a wealth of information about the development of mucin-type glycans. Using differently labeled fluorescent acetylene reagents, they could stain different populations of glycan, and watch time courses and developmental trafficking – that’s the source of the spectacular images shown.

[Image: fluorescently labeled glycans in developing zebrafish embryos, from the Bertozzi group]

Losing the copper step is convenient, and also opens up possibilities for doing these reactions inside living cells (which is definitely something that Bertozzi’s lab is working on). The number of experiments you can imagine is staggering – here, I’ll do one off the top of my head to give you the idea. Azide-containing amino acids can be incorporated at specific places in bacterial proteins – here’s one where they replaced a phenylalanine in urate oxidase with para-azidophenylalanine. Can that be done in larger, more tractable cells? If so, why not try that on some proteins of interest – there are thousands of possibilities – then micro-inject one of the Bertozzi acetylene fluorescence reagents? Watching that diffuse through the cell, lighting things up as it found azide to react with would surely be of interest – wouldn’t it?

I’m writing about this the day after the green fluorescent protein Nobel for a reason, of course. This is a similar approach, but taken down to the size of individual molecules – you can’t label uracil with GFP and expect it to be taken up into RNA, that’s for sure. Advances in labeling and detection are one of the main things driving biology these days, and this will just accelerate things. (It’s also killing off a lot of traditional radioactive isotope labeling work, too, not that anyone’s going to miss it). For the foreseeable future, we’re going to be bombarded with more information than we know what to do with. It’ll be great – enjoy it!

Comments (7) + TrackBacks (0) | Category: Analytical Chemistry | Biological News

October 8, 2008

A Green Fluorescent Nobel Prize

Email This Entry

Posted by Derek

So it was green fluorescent protein after all! We can argue about whether this was a pure chemistry prize or another quasi-biology one, but either way, the award is a strong one. So, what is the stuff and what’s it do?

Osamu Shimomura discovered the actual protein back in 1962, isolating it from the jellyfish Aequorea victoria. These were known to be luminescent creatures, but when the light-emitting protein was found (named aequorin), it turned out to give off blue light. That was strange, since the jellyfish were known for their green color. Shimomura then isolated another protein from the same jellyfish cells, which turned out to absorb the blue light from aequorin very efficiently and then fluoresce in the green: green fluorescent protein. The two proteins are a coupled system, an excellent example of a phenomenon known as FRET (fluorescence resonance energy transfer), which has been engineered into many other useful applications over the years.
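(A quick quantitative aside, not from the GFP papers themselves but from the standard textbook treatment of FRET: the efficiency of the energy transfer falls off with the sixth power of the distance between donor and acceptor,

$$E = \frac{1}{1 + (r/R_0)^6}$$

where $R_0$, the Förster radius, is typically only a few nanometers. That steep distance dependence is why the two jellyfish proteins have to be in close contact to hand off their energy, and why engineered FRET pairs make such good molecular-scale proximity sensors.)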

Fluorescence is much more common in inorganic salts and small organic molecules, and at first it was a puzzle how a protein could emit light in the same way. As it turns out, there’s a three-amino-acid sequence right in the middle of its structure (serine-tyrosine-glycine) that condenses with itself when the protein is folded properly and makes a new fluorescent species. (The last step of the process is reaction with ambient oxygen). The protein has a very pronounced barrel shape to it, and lines up these key amino acids in just the orientation needed for the reaction to go at a reasonable rate (on a time scale of tens of minutes at room temperature). This is well worked out now, but it was definitely not obvious at the time.

In the late 1980s, for example, the gene for GFP was cloned by Doug Prasher, but he and his co-workers believed that they might well be expressing a non-fluorescent protein that would need activation by some other system. He had the idea that this could be used as a tag for other proteins, but was never able to get to the point of demonstrating it, and will join the list of people who were on the trail of a Nobel discovery but never quite got there. Update: Here's what Prasher is doing now - this is a hard-luck story if I've ever heard one. Prasher furnished some of the clone to Martin Chalfie at Columbia, who got it to express in E. coli and found that the bacteria indeed glowed bright green. (Other groups were trying the same thing, but the expression was a bit tricky at the time). The next step was to express it in the roundworm C. elegans (naturally enough, since Chalfie had worked with Sydney Brenner). Splicing it in behind a specific promoter caused the GFP to express in definite patterns in the worms, just as expected. This all suggested that the protein was fluorescing on its own, and could do the same in all sorts of organisms under all sorts of conditions.

And so it’s proved. GFP is wonderful stuff for marking proteins in living systems. Its sequence can be fused on to many other proteins without disturbing their function, it folds up to its active form just fine with no help, and it’s bright and very photoefficient. Where Roger Tsien enters the picture is in extending this idea to a whole family of proteins. Tsien worked out the last details of the fluorescent structure, showing that oxygen is needed for the last step. He and his group then set out to make mutant forms of the protein, changing the color of its fluorescence and other properties. He’s done the same thing with a red fluorescent protein from coral, and this work (which continues in labs all over the world) has led to a wide variety of in vivo fluorescent tags, which can be made to perform a huge number of useful tricks. They can sense calcium levels or the presence of various metabolites, fluoresce only when they come into contact with another specifically labeled protein, be used in various time-resolved techniques to monitor the speed of protein trafficking, and who knows what else. A lot of what we’ve learned in the last fifteen years about the behavior of real proteins in living cells has come out of this work – the prize is well deserved.

I want to close with a bit of an interview with Martin Chalfie, which is an excellent insight into how things like this get discovered (or don't!)

Considering how significant GFP has been, why do you think no one else came up with it, while you were waiting for Doug Prasher to clone it?

"That’s a very important point. In hindsight, you wonder why 50 billion people weren’t working on this. But I think the field of bioluminescence or, in general, the research done on organisms and biological problems that have no immediate medical implications, was not viewed as being important science. People were working on this, but it was slow and tedious work, and getting enough protein from jellyfish required rather long hours at the lab. They had to devise ways of isolating the cells that were bioluminescent and then grinding them up and doing the extraction on them. It’s not like ordering a bunch of mice and getting livers out and doing an experiment. It was all rather arduous. It’s quite remarkable that it was done at all. It was mostly biochemists doing it, and they were not getting a lot of support. In fact, as I remember it, Doug Prasher had some funding initially from the American Cancer Society, and when that dried up he could not get grants to pursue the work. I never applied for a grant to do the original GFP research. Granting agencies would have wanted to see preliminary data and the work was outside my main research program. GFP is really an example of something very useful coming from a far-outside-the-mainstream source. And because this was coming from a non-model-organism system, these jellyfish found off the west coast of the U.S., people were not jumping at the chance to go out and isolate RNAs and make cDNAs from them. So we’re not talking about a field that was highly populated. It was not something that was widely talked about. At the time, there was a lot of excitement about molecular biology, but this was biochemistry. The discovery really was somewhat orthogonal to the mainstream of biological research."

Here's an entire site dedicated to the GFP story, full of illustrations and details. That interview with Chalfie is here, with some background on his part in the discovery. Science background from the Nobel Foundation is here (PDF), for those who want even more.

Comments (34) + TrackBacks (0) | Category: Biological News | Current Events

September 9, 2008

Antipsychotics: Do They Work For A Completely Different Reason?

Email This Entry

Posted by Derek

As I’ve noted here, and many others have elsewhere, we have very little idea how many important central nervous system drugs actually work. Antidepressants, antipsychotics, antiseizure medications for epilepsy – the real workings of these drugs are quite obscure. The standard explanation for this state of things is that the human brain is extremely complicated and difficult to study, and that’s absolutely right.

But there’s an interesting paper on antipsychotics that’s just come out from a group at Duke, suggesting that there’s an important common mechanism that has been missed up until now. One thing that everyone can agree on is that dopamine receptors are important in this area. Which ones, and how they should be affected (agonist, antagonist, inverse partial what-have-you) – now that’s a subject for argument, but I don’t think you’ll find anyone who says that the dopaminergic system isn’t a big factor. Helping to keep the argument going is the fact that the existing drugs have a rather wide spectrum of activity against the main dopamine receptors.

But for some years now, the D2 subtype has been considered first among equals in this area. Binding affinity to D2 correlates as well as anything does to clinical efficacy, but when you look closer, the various drugs have different profiles as inverse agonists and antagonists of the receptor. What this latest study shows, though, is that a completely different signaling pathway – other than the classic GPCR signaling one – might well be involved. A protein called beta-arrestin has long been known to be important in receptor trafficking – movement of the receptor protein to and from the cell surface. A few years ago, it was shown that beta-arrestin isn’t just some sort of cellular tugboat in these systems, but can participate in another signaling pathway entirely.

Dopamine receptors were already complicated when I worked on them, but they’ve gotten a lot hairier since then. The beta-arrestin work makes things even trickier: who would have thought that these GPCRs, with all of their well-established and subtle signaling modes, also participated in a totally different signaling network at the same time? It’s like finding out that all your hammers can also drive screws, using some gizmo hidden in their handles that you didn’t even know was there.

When this latest team looked at the various clinical antipsychotics, what they found was that no matter what their profile in the traditional D2 signaling assays, they all are very good at disrupting the D2/beta-arrestin pathway. Since some of the downstream targets in that pathway (a protein called Akt and a kinase, GSK-3) have already been associated with schizophrenia, this may well be a big factor behind antipsychotic efficacy, and one that no one in the drug discovery business has paid much attention to. As soon as someone gets this formatted for a high-throughput assay, though, that will change – and it could lead to entirely new compound classes in this area.

Of course, there’s still a lot that we don’t know. What, for example, does beta-arrestin signaling actually do in schizophrenia? Akt and GSK-3 are powerful signaling players, involved in all sorts of pathways. Untangling their roles, or the roles of other yet-unknown beta-arrestin driven processes, will keep the biologists busy for a good long while. And the existing antipsychotics hit quite a few other receptors as well – what’s the role of the beta-arrestin system in those interactions? The brain will keep us busy for a good long while, and so will the signaling receptors.

Comments (6) + TrackBacks (0) | Category: Biological News | The Central Nervous System

August 26, 2008

New, Improved DNA?

Email This Entry

Posted by Derek

As all organic chemists who follow the literature know, over the last few years there’s been a strong swell of papers using Barry Sharpless’s “click chemistry” triazole-forming reactions. These reactions let you form five-membered triazole rings from two not-very-reactive partners, an azide and an acetylene, and people have been putting them to all kinds of uses, from the trivial to the very interesting indeed.

In the former category are papers that boil down to “We made triazoles from some acetylenes and azides that no one else has gotten around to using yet, and here they are, for some reason”. There are fewer of those publications than there were a couple of years ago, but they’re still out there. For its part, the latter (interesting) category is really all over the place, from in vivo biological applications to nanotechnology and materials science.

One recent paper in Organic Letters which was called to my attention starts off looking as if it’s going to be another bit of flotsam from the first group, but by the end it’s a very different thing indeed. The authors (from the Isobe group at Tohoku University in Japan, with collaborators from Tokyo) have made an analog of thymine, the T in the genetic code, where the 2-deoxyribose part has both an azide and an acetylene built onto it.

So far, so good, and at one point you probably could have gotten a paper out of things right there – let ‘em rip to make a few poly-triazole things and send off the manuscript. But this is a more complete piece of work. For one thing, they’ve made sure that their acetylenes can have removable silyl groups on them. That lets you turn their click reactivity on and off, since the copper-catalyzed reaction needs a free alkyne out there. So starting from a resin-supported sugar, they did one triazole click reaction after another in a controlled fashion – it took some messing around with the conditions, but they worked it out pretty smoothly.

And since the acetylene was at the 5 position of the sugar, and the azide was at the 3, they built a sort of poly-T oligonucleotide – but one that’s linked together by triazoles instead of the phosphate groups found in DNA. People have, of course, made all sorts of DNA analogs, with all sorts of replacements for the phosphates, but they vary in how well they mimic the real thing. Startlingly, when they took a 10-mer of their “TL-DNA” (triazole-linked) and exposed it to a complementary 10-residue strand of good ol' poly-A DNA, the two zipped right up. In fact, the resulting helix seems to be significantly stronger than native DNA, as measured by a large increase in melting point. (Their paper includes a molecular model of the complex.)

Well, after reading this paper, my first thought was that it might eventually make me eat some of my other words. Because just last week I was saying things about the prospects for nucleic acid therapies (RNAi, antisense) - mean, horrible, nasty things, according to a few of the comments that piled up, about how these might be rather hard to implement. But when I saw the end of this paper, the first thing that popped into my head was "stable high-affinity antisense DNA backbone. Holy cow". I assume that this also crossed the minds of the authors, and of some of the paper's other readers. Given the potential of the field, I would also assume that eventually we'll see that idea put to a test. It's a long way from being something that works, but it sure looks like a good thing to take a look at, doesn't it?

Comments (10) + TrackBacks (0) | Category: Biological News

July 16, 2008

Receptors: Can't Live With 'Em, Can't Understand 'Em

Email This Entry

Posted by Derek

At various points in my drug discovery career, I’ve worked on G-protein-coupled receptor (GPCR) targets. Most everyone in the drug industry has at some point – a significant fraction of the known drugs work through them, even though we have a heck of a time knowing what their structures are like.

For those outside the field, GPCRs are a ubiquitous mode of signaling between the interior of a cell and what’s going on outside it, which accounts for the hundreds of different types of the things. They’re all large proteins that sit in the cell membrane, looped around so that some of their surfaces are on the outside and some poke through to the inside. The outside folds have a defined binding site for some particular ligand - a small molecule or protein – and the inside surfaces interact with a variety of other signaling proteins, first among them being the G-proteins of the name. When a receptor’s ligand binds from the outside, that sets off some sort of big shape change. The protein’s coils slide and shift around in response, which changes its exposed surfaces and binding patterns on the inside face. Suddenly different proteins are bound and released there, which sets off the various chemical signaling cascades inside the cell.

The reason we like GPCRs is that many of them have binding sites for small molecules, like the neurotransmitters. Dopamine, serotonin, acetylcholine – these are molecules that medicinal chemists can really get their hands around. The receptors that bind whole other proteins as external ligands are definitely a tougher bunch to work with, but we’ve still found many small molecules that will interact with some of them.

Naturally, there are at least two modes of signaling a GPCR can engage in: on and off. A ligand that comes in and sets off the intracellular signaling is called an agonist, and one that binds but doesn’t set off those signals is called an antagonist. Antagonist molecules will also gum up the works and block agonists from doing their thing. We have an easier time making those, naturally, since there are dozens of ways to mess up a process for every way of running it correctly!

Now, when I was first working in the GPCR field almost twenty years ago, it was reasonably straightforward. You had your agonists and you had your antagonists – well, OK, there were those irritating partial agonists, true. Those things set off the desired cellular signal, but never at the levels that a full agonist would, for some reason. And there were a lot of odd behaviors that no one quite knew how to explain, but we tried to not let those bother us.

These days, it’s become clear that GPCRs are not so simple. There appear to be some, for example, whose default setting is “on”, with no agonist needed. People are still arguing about how many receptors do this in the wild, but there seems little doubt that it does go on. These constitutively active receptors can be turned off, though, by the binding of some ligands, which are known as inverse agonists, and there are others, good old antagonists, that can block the action of the inverse agonists. Figuring out which receptors do this sort of thing - and which drugs - is a full-time job for a lot of people.
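To make those terms a little more concrete, here's a back-of-the-envelope sketch of the textbook two-state receptor model (R flipping to R* on its own). This is just to illustrate the vocabulary – the constants and concentrations are invented for the example, not taken from any real receptor:

```python
# Two-state receptor model: R (inactive) <-> R* (active), even with no ligand around.
# L_iso : [R*]/[R] with no ligand present (this sets the constitutive activity)
# conc  : ligand concentration, as a multiple of its Kd for the inactive state R
# alpha : how much the ligand prefers R* over R
#         alpha > 1  -> agonist (pushes the equilibrium toward R*)
#         alpha == 1 -> neutral antagonist (sits in the site, shifts nothing)
#         alpha < 1  -> inverse agonist (pulls the equilibrium back toward R)

def fraction_active(conc: float, alpha: float, L_iso: float = 0.05) -> float:
    """Fraction of receptors in the active state (free R* plus ligand-bound R*)."""
    active = L_iso * (1 + alpha * conc)
    inactive = 1 + conc
    return active / (active + inactive)

print(fraction_active(0.0, 1.0))    # basal (constitutive) activity: ~0.048
print(fraction_active(10.0, 50.0))  # agonist:                       ~0.69
print(fraction_active(10.0, 1.0))   # neutral antagonist:            ~0.048
print(fraction_active(10.0, 0.02))  # inverse agonist:               ~0.005
```

The neutral antagonist leaves the basal signal right where it was, but it still occupies the binding site – which is why it can block an inverse agonist just as readily as it blocks an agonist.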

It’s also been appreciated in recent years that GPCRs don’t just float around by themselves on the cell surface. Many of them interact with other nearby receptors, binding side-by-side with them, and their activities can vary depending on the environment they’re in. The search is on for compounds that will recognize receptor dimers over the good ol’ monomeric forms, and the search is also on for figuring out what those will do once we have them. To add to the fun, these various dimers can be with other receptors of their own kind (homodimers) or with totally different ones, some from different families entirely (heterodimers). This area of research is definitely heating up.

And recently, I came across a paper which looked at how a standard GPCR can respond differently to an agonist depending on where it's located in the membrane. We're starting to understand how heterogeneous the lipids in that membrane are, and that receptors can move from one domain to another depending on what's binding to them (either on their outside or inside faces). The techniques to study this kind of thing are not trivial, to put it mildly, and we're only just getting started on figuring out what's going on out there in the real world in real time. Doubtless many bizarre surprises await.

So, once again, the "nothing is simple" rule prevails. This kind of thing is why I can't completely succumb to the gloom that sometimes spreads over the industry. There's just so much that we don't know, and so much to work on, and so many people that need what we're trying to discover, that I can't believe that the whole enterprise is in as much trouble as (sometimes) it seems. . .

Comments (20) + TrackBacks (0) | Category: Biological News | Drug Assays

May 22, 2008

Killing Proteins Wholesale

Email This Entry

Posted by Derek

Benjamin Cravatt at Scripps has another interesting paper out this week – by my standards, he hasn’t published very many dull ones. I spoke about some earlier work of his here, where his group tried to profile enzymes in living cells and found that the results they got were much different than the ones seen in their model systems.

This latest paper is in the same vein, but addresses some more general questions. One of his group members (Eranthie Weerapana, who certainly seems to have put in some lab time) started by synthesizing five simple test compounds. Each of them had a reactive group at one end and an acetylene at the far end. The idea was to see what sorts of proteins combined with the reactive head group. After labeling, a click-type triazole reaction stuck a fluorescent tag on via the acetylene group, allowing the labeled proteins to be detected.

All this is similar to the previous paper I blogged about, but in this case they were interested in profiling these varying head groups: a benzenesulfonate, an alpha-chloroamide, a terminal enone, and two epoxides – one terminal on a linear chain, and the other a spiro off a cyclohexane. All these have the potential to react with various nucleophilic groups on a protein – cysteines, lysines, histidines, and so on. Which reactive groups would react with which sorts of protein residues, and on which parts of the proteins, was unknown.

There have been only a few general studies of this sort. The most closely related work is from Daniel Liebler at Vanderbilt, who's looking at this issue from a toxicology perspective (try here, here, and here). And an earlier look at different reactive groups from the Sames lab at Columbia is here, but that was much less extensive.

Cravatt's study reacted these probes first with a soluble protein mix from mouse liver – containing who knows how many different proteins – and followed that up with similar experiments with protein brews from heart and kidney, along with the insoluble membrane fraction from the liver. A brutally efficient proteolysis/mass spectrometry technique, described by Cravatt in 2005, was used to simultaneously identify the labeled proteins and the sites at which they reacted. This is clearly the sort of experiment that would have been unthinkable not that many years ago, and it still gives me a turn to see only Cravatt, Weerapana, and a third co-author (Gabriel Simon) on this one instead of some lab-coated army.

Hundreds of proteins were found to react, as you might expect from such simple coupling partners. But this wasn’t just a blunderbuss scatter; some very interesting patterns showed up. For one thing, the two epoxides hardly reacted with anything, which is quite interesting considering that functional group’s reputation. I don’t think I’ve ever met a toxicologist who wouldn’t reject an epoxide-containing drug candidate outright, but these groups are clearly not as red-hot as they’re billed. The epoxide compounds were so unreactive, in fact, that they didn’t even make the cut after the initial mouse liver experiment. (Since Cravatt’s group has already shown that more elaborate and tighter-binding spiro-epoxides can react with an active-site lysine, I’m willing to bet that they were surprised by this result, too).

The next trend to emerge was that the chloroamide and the enone, while they labeled all sorts of proteins, almost invariably did so on their cysteine (SH) residues. Again, I think if you took a survey of organic chemists or enzymologists, you’d have found cysteines at the top of the expected list, but plenty of other things would have been predicted to react as well. The selectivity is quite striking. What’s even more interesting, and as yet unexplained, is that over half the cysteine residues that were hit only reacted with one of the two reagents, not the other. (Liebler has seen similar effects in his work).

Meanwhile, the sulfonate went for several different sorts of amino acid residues – it liked glutamates especially, but also aspartate, cysteine, tyrosine, and some histidines. One of the things I found striking about these results is how few lysines got in on the act with any of the electrophiles. Cravatt's finely tuned epoxide/lysine interaction that I linked to above turns out, apparently, to be a rather rare bird. I’ve always had lysine in my mind as a potentially reactive group, but I can see that I’m going to have to adjust my thinking.

Another trend that I found thought-provoking was that the labeled residues were disproportionately taken from the list of important ones, amino acids that are involved in the various active sites or in regulatory domains. The former may be intrinsically more reactive, in an environment that has been selected to increase their nucleophilicity. And as for the latter, I’d think that’s because they’re well exposed on the surfaces of the proteins, for one thing, although they may also be juiced up in reactivity compared to their run-of-the-mill counterparts.

Finally, there’s another result that reminded me of the model-system problems in Cravatt’s last paper. When they took these probes and reacted them with mixtures of amino acid derivatives in solution, the results were very different than what they saw in real protein samples. The chloroamide looked roughly the same, attacking mostly cysteines. But the sulfonate, for some reason, looked just like it, completely losing its real-world preference for carboxylate side chains. Meanwhile, the enone went after cysteine, lysine, and histidine in the model system, but largely ignored the last two in the real world. The reasons for these differences are, to say the least, unclear – but what’s clear, from this paper and the previous ones, is that there is (once again!) no substitute for the real world in chemical biology. (In fact, in that last paper, even cell lysates weren’t real enough. This one has a bit of whole-cell data, which looks similar to the lysate stuff this time, but I’d be interested to know if more experiments were done on living systems, and how close they were to the other data sets).

So there are a lot of lessons here - at least, if you really get into this chemical biology stuff, and I obviously do. But even if you don't, remember that last one: run the real system if you're doing anything complicated. And if you're in drug discovery, brother, you're doing something complicated.

Comments (6) + TrackBacks (0) | Category: Biological News | Toxicology

May 19, 2008

Empty As Can Be

Email This Entry

Posted by Derek

OK, drugs generally bind to some sort of cavity in a protein. So what’s in that cavity when the drug isn’t there? Well, sometimes it’s the substance that the drug is trying to mimic or block, the body’s own ligand doing what it’s supposed to be doing. But what about when that isn’t occupying the space – what is?

A moment’s thought, and most chemists and biologists will say “water”. That’s mostly true, although it can give a false impression. When you get X-ray crystal structures of enzymes, there’s always water hanging around the protein. But at this scale, any thoughts of bulk water as we know it are extremely misleading. Those are individual water molecules down there, a very different thing.

There seem to be several different sorts of them, for one thing. Some of those waters are essential to the structure of the protein itself – they form hydrogen bonds between key residues of its backbone, and you mess with them at your peril. Others are adventitious, showing up in your X-ray structure in the same way that pedestrians show up in a snapshot of a building’s lobby. (That’s a good metaphor, if I do say so myself, but to work that first set of water molecules into it, you’d have to imagine people stuck against the walls with their arms spread, helping to hold up the building).

And in between those two categories are waters that can interact with both the protein and your drug candidate. They can form bridges between them, or they can be kicked out so that your drug interacts directly. Which is better? Unfortunately, it’s hard to generalize. There are potent compounds that sit in a web of water molecules, and there are others that cozy right up to the protein at every turn.

But there's one oddity that just came out in the literature. This one's weird enough to deserve its own paper: the protein beta-lactoglobulin appears to have a large binding site that's completely empty of water molecules. It's a site for large lipids to bind, so it makes sense that it would be a greasy environment that wouldn't be friendly to a lot of water, but completely empty? That's a first, as far as I know. When you think about it, that's quite weird: inside that protein is a small zone that's a harder vacuum than anything ever seen in the lab – there's nothing there at all. It's a small bit of interstellar space, sitting inside a protein from cow's milk. Nature abhors a vacuum, but apparently not this one.

Comments (14) + TrackBacks (0) | Category: Biological News

May 16, 2008

Nanotech Stem Cells, Order Now!

Email This Entry

Posted by Derek

A good rule to follow: hold onto your wallet when two exciting, complicated fields of research are combined. Nature reported earlier this spring on a good example of this, the announcement by a small biotech called PrimeGen that they'd used carbon nanotubes to reprogram stem cells. (Here's a good article from VentureBeat on the same announcement, and there's an excellent piece on the announcement and the company in Forbes).

Stem cells and nanostructures are two undeniably hot areas of research. And also undeniable is the fact that they're both in their very early days - the amount of important information we don't know about both of these topics must be really impressive, which is why so many people are beavering away at them. So what are the odds of getting them to work together? Not as good as the odds that someone thought the combination would make a good press release, I'm afraid.

The PrimeGen web site, though a bit better than that VentureBeat article describes it, still has some odd notes to it. I particularly like this phrase: "PrimeGen’s broad intellectual property portfolio is founded on groundbreaking platform technologies invented by our team of dedicated and visionary scientists." Yep, we talk that way all the time in this business. You also have to raise an eyebrow at this part: "Disease and injury applications of PrimeCell™ include Alzheimer’s Disease, Cardiac Disease, Diabetes, Lupus, Multiple Sclerosis, Leukemia, Muscular Dystrophy, Parkinson’s Disease, Rheumatoid Arthritis, Spinal Cord Injury, Autoimmune Disease, Stroke, Skin Regeneration and Wound Healing." It'll mow your yard, too, if you're willing to participate in the next funding round.

The next sentence is the key one: "The extent to which stem cells can be used to treat injury and illness has yet to be fully evaluated. . ." You can say that again! In fact, I wouldn't mind seeing that in 36-point bold across the top of every stem cell company web page and press release. But what are the chances of that? As good as the chance that nanotechnology will suddenly provide us with a way to make the stem cells do what we want, I'm afraid. . .

Comments (11) + TrackBacks (0) | Category: Biological News | Press Coverage

March 28, 2008

RNA Interference: Even Trickier Than You Thought

Email This Entry

Posted by Derek

It’s been a while since I talked about RNA interference here. It’s still one of those tremendously promising therapeutic ideas, and it’s still having a tremendously hard time proving itself. Small RNA molecules can do all sorts of interesting and surprising things inside cells, but the trick is getting them there. Living systems are not inclined to let a lot of little nucleic acid sequences run around unmolested through the bloodstream.

The RNA folks can at least build on the experience (long, difficult, expensive) of the antisense DNA people, who have been trying to dose their compounds for years now and have tried out all sorts of ingenious schemes. But even if all these micro-RNAs could be dosed, would we still know what they’re going to do?

A report in the latest Nature suggests that the answer is “not at all”. This large multi-university group was looking at macular degeneration, a natural target for this sort of technology. It’s a serious disease, and it occurs in a privileged compartment of the body, the inside of the eye. You can inject your new therapy directly in there, for example (I know, it gives me the shivers, too, but it sure beats going blind). That bypasses the gut, the liver, and the bloodstream, and that humoral fluid of the eye is comparatively free of hostile enzymes. (It’s no coincidence that the antisense and aptamer people have gone after this and other eye diseases as well).

Angiogenesis is a common molecular target for macular degeneration, since uncontrolled formation of new capillaries is a proximate cause of blindness in such conditions. (That target has the added benefit of giving your therapy a possible entry into the oncology world, should you figure out how to get it to work well here). VEGF is the prototype angiogenesis target, so you’d figure that RNA interference targeting VEGF production or signaling would work as well as anything could, as a first guess.

And so it does, as this team found out. But here comes the surprise: when the researchers checked their control group, using a similar RNA that should have been ineffective, they found that it was working just fine, too – just as well as the VEGF-targeted ones, actually. Baffled, they went on to try a host of other RNAs. Reading the paper, you can just see the disbelief mounting as they tried various sequences against other angiogenic targets (success!), nonangiogenic proteins (success!?), proangiogenic ones that should make the disease worse (success??), genes for proteins that aren’t even expressed in the eye (success!), sequences against RNAs from plants and microbes that don’t even exist in humans at all (oh God, success again), totally random RNAs (success, damnit), and RNAs that shouldn’t be able to silence anything because they’ve got completely the wrong sort of sequence (oh the hell with it, success). Some of these even worked when injected i.p., into the gut cavity, instead of into the eye at all, suggesting that this was a general mechanism that had nothing to do with the retina.

As it turns out, these things are acting by hitting a cell surface receptor, TLR3. And all you need, apparently, is a stretch of RNA that’s at least 21 units long. Doesn’t seem to matter much what the sequence is – thus all that darn success with whatever they tried. Downstream of TLR3 comes induction of gamma-interferon and IL-12, and those are what do the job of shutting down angiogenesis. (Off-target effects involving these have been noted before with siRNA, but now I think we’re finally figuring out why).

What does this all mean? Good news and bad news. The companies that are already dosing RNAi therapies for macular degeneration have just discovered that there's an awful lot that they don't know about what they're doing, for one thing. On the flip side, there are a lot of human cell types with TLR3 receptors on them, and a lot of angiogenic disorders that could potentially be treated, at least partially, by targeting them in this manner. That’s some good news. The bad news is that most of these receptors are present in more demanding environments than the inside of the eye, so the whole problem of turning siRNAs into drugs still looms large.

And the other bad news is that if you do figure out a way to dose these things, you may well set off TLR3 effects whether you want them or not. Immune system effects on the vasculature are not the answer to everything, but that may be one of the answers you always get. And this sort of thing makes you wonder what other surprising things systemic RNA therapies might set off. We will, in due course, no doubt find out. More here from John Timmer at Nobel Intent, who correctly tags this as a perfect example of why you want to run a lot of good control experiments. . .

Comments (4) + TrackBacks (0) | Category: Biological News | Drug Development

February 14, 2008

Getting Real With Real Cells

Email This Entry

Posted by Derek

I’ve been reading an interesting paper from JACS with the catchy title of “Optimization of Activity-Based Probes for Proteomic Profiling of Histone Deacetylase Complexes”. This is work from Benjamin Cravatt's lab at Scripps, and it says something about me, I suppose, that I found that title of such interest that I immediately printed off a copy to study more closely. Now I’ll see if I can interest anyone who wasn’t already intrigued! First off, some discussion of protein tagging, so if you’re into that stuff already, you may want to skip ahead.

So, let’s say you have a molecule that has some interesting biological effect, but you’re not sure how it works. You have suspicions that it’s binding to some protein and altering its effects (always a good guess), but which protein? Protein folks love fluorescent assays, so if you could hang some fluorescent molecule off one end of yours, perhaps you could start the hunt: expose your cells to the tagged molecule, break them open, look for the proteins that glow. There are complications, though. You’d have to staple the fluorescent part on in a way that didn’t totally mess up that biological activity you care about, which isn’t always easy (or even possible). The fact that most of the good fluorescent tags are rather large and ugly doesn’t help. But there’s more trouble: even if you manage to do that, what’s to keep your molecule from drifting right back off of the protein while you’re cleaning things up for a look at the system? Odds are it will, unless it has a really amazing binding constant, and that’s not the way to bet.

One way around that problem is sticking yet another appendage on to the molecule, a so-called photoaffinity label. These groups turn into highly reactive species on exposure to particular wavelengths of light, ready to form a bond with the first thing they see. If your molecule is carrying one when it’s bound to your mystery protein, shining light on the system will likely cause a permanent bond to form between the two. Then you can do all your purifications and separations, and look at your leisure for which proteins fluoresce.

This is “activity-based protein profiling”, and it’s a hot field. There are a lot of different photoaffinity labels, and a lot of ways to attach them, and likewise with the fluorescent groups. The big problem, as mentioned above, is that it’s very hard to get both of those on your molecule of interest and still keep its biological activity – that’s an awful lot of tinsel to carry around. One slick solution is to use a small placeholder for the big fluorescent part. This, ideally, would be some little group that will hide out innocently during the whole protein-binding and photoaffinity-labeling steps, then react with a suitably decorated fluorescent partner once everything’s in place. This assembles your glowing tag after the fact.

A favorite way to do that step is through an azide-acetylene cycloaddition reaction, the best known of Barry Sharpless’s “click” reactions. Acetylenes are small and relatively unreactive, and at the end of the process, after you’ve lysed the cells and released all their proteins, you can flood your system with azide-substituted fluorescent reagent. The two groups react irreversibly under mild catalytic conditions to make a triazole ring linker, which is a nearly ideal solution that’s getting a lot of use these days (more on this another day).

So, now to this paper. What this group did was label a known compound (from Ron Breslow's group at Columbia) that targets histone deacetylase (HDAC) enzymes, SAHA, now on the market as Vorinostat. There are a lot of different subtypes of HDAC, and they do a lot of important but obscure things that haven’t been worked out yet. It’s a good field to discover protein function in.

When they modified SAHA in just the way described above, with an acetylene and a photoaffinity group, it maintained its activity on the known enzymes, so things looked good. They then exposed it to cell lysate, the whole protein soup, and found that while it did label HDAC enzymes, it seemed to label a lot of other things in the background. That kind of nonspecific activity can kill an assay, but they tried the label out on living cells anyway, just to see what would happen.

Very much to their surprise, that experiment led to much cleaner and more specific labeling of HDACs. The living system was much nicer than the surrogate, which (believe me) is not how things generally go. Some HDACs were labeled much more than others, though, and my first thought on reading that was “Well, yeah, sure, your molecule is a more potent binder to some of them”.

But that wasn’t the case, either. When they profiled their probe molecule’s activity versus a panel of HDAC enzymes, they did indeed find different levels of binding – but those didn’t match up with which ones were labeled more in the cells. (One explanation might be that the photoaffinity label found some of the proteins easier to react with than others, perhaps due to what was nearby in each case when the reactive species formed).

Their next step was to make a series of modified SAHA scaffolds and rig them up with the whole probe apparatus. Exposing these to cell lysate showed that many of them performed fine, labeling HDAC subtypes as they should, and with different selectivities than the original. But when they put these into cells, none of them worked as well as the plain SAHA probe – again, rather to their surprise. (A lot of work went into making and profiling those variations, so I suspect that this wasn’t exactly the result the team had hoped for - my sympathies to Cravatt and especially to his co-author Cleo Salisbury). The paper sums the situation up dryly: "These results demonstrate that in vitro labeling is not necessarily predictive of in situ labeling for activity-based protein profiling probes".

And that matches up perfectly with my own prejudices, so it must be right. I've come to think, over the years, that the way to go is to run your ideas against the most complex system you think that they can stand up to - in fact, maybe one step beyond that, because you may have underestimated them. A strict reductionist might have stopped after the cell lysate experiments in this case - clearly, this probe was too nonspecific, no need to waste time on the real system, eh? But the real system, the living cell, is real in complex ways that we don't understand well at all, and that makes this inference invalid.

The same goes for medicinal chemistry and drug development. If you say "in vitro", I say "whole cells". If you've got it working in cells, I'll call for mice. Then I'll see your mice and raise you some dogs. Get your compounds as close to reality as you can before you pass judgment on them.

Comments (5) + TrackBacks (0) | Category: Biological News | Drug Assays | Drug Development

January 8, 2008

Rainbows and Fishing Expeditions

Email This Entry

Posted by Derek

I came across a neat article in Nature from a group working on a new technique in neuroscience imaging. They expressed an array of four differently colored fluorescent proteins in developing neurons in vivo, and placed them so that recombination events would scramble the relative expression of the multiple transgenes as the cell population expands. That leads to what they’re calling a “brainbow”: a striking array of about a hundred different shades of fluorescent neurons, tangled into what looks like a close-up of a Seurat painting.

The good part is that the entire neuron fluoresces, not just a particular structure inside it. Being able to see all those axons opens up the possibility of tracking how the cells interact in the developing brain – where synapses form and when. That should keep everyone in this research group occupied for a good long while.

What I particularly enjoyed, though, was the attitude of the lab head, Jeff Lichtman of Harvard. He states that he doesn’t really know exactly what they’re looking for, but that this technique will allow them to just sit back and see what there is to see. That’s a scientific mode with a long history, basically good old Francis-Bacon style induction, but we don’t actually get a chance to do it as much as you’d think.

That varies with the area under investigation. In general, the more complex and poorly understood the object of study, the more appropriate it is to sit back and take notes, rather than go in trying to prove some particular hypothesis. (Neuroscience, then, is a natural!) In a chemistry setting, though, I wouldn’t recommend setting up five thousand sulfonamide formations just to see what happens, because we already have a pretty good idea of what’ll happen. But if you’re working on new metal-catalyzed reactions, a big screen of every variety of metal complex you can find might not be such a bad idea, if you’ve got the time and material. There’s a lot that we don’t know about those things, and you could come across an interesting lead.

Some people get uncomfortable with “fishing expedition” work like this, though. In the med-chem labs, I’ve seen some fishy glances directed at people who just made a bunch of compounds in a series because no one else had made them and they just wanted to see what would happen. While I agree that you don’t want to run a whole project like that, I think that the suspicion is often misplaced, considering how many projects start from high-throughput screening. We don’t, a priori, usually have any good idea of what molecules should bind to a new drug target. Going in with an advanced hypothesis-driven approach often isn’t as productive as just saying “OK, let’s run everything we’ve got past the thing, see what sticks, and take it from there”.

But the feeling seems to be that a drug project (and its team members) should somehow outgrow the random approach as more knowledge comes in. Ideally, that would be the case. I’m not convinced, though, that enough med-chem projects generate enough detailed knowledge about what will work and what won’t to be able to do that. (There’s no percentage in beating against structural trends that you have evidence for, but trying out things that no one’s tried yet is another story). It’s true that a project has to narrow down in order to deliver a lead compound to the clinic, but getting to the narrowing-down stage doesn’t have to be (and usually isn’t) a very orderly process.

Comments (8) + TrackBacks (0) | Category: Biological News | Drug Development | The Central Nervous System | Who Discovers and Why

December 5, 2007

Avandia: Going Under for the Third Time?

Email This Entry

Posted by Derek

How many hits can a drug – or a whole class of drugs – take? Avandia (rosiglitazone) has been the subject of much wrangling about cardiovascular risk in its patient population of Type II diabetics. But there have also been scattered reports of increases in fractures among people taking it or Actos (pioglitazone), the other drug with the same mechanism of action.

Now Ron Evans and his co-workers at Salk, who know about as much PPAR-gamma biology as there is to know, have completed a difficult series of experiments that provides some worrying data about what might be going on. Studying PPAR-gamma’s function in mice is tricky, since you can’t just step in and knock it out (that’s embryonic lethal), and its function varies depending on the tissue where it’s expressed. (That latter effect is seen across many other nuclear receptors, which is just one of the things that make their biology so nightmarishly complex).

So tissue-specific knockouts are the way to go, but bone is an interesting organ. The body is constantly laying down new bone tissue and reabsorbing the old. Evans and his team managed to knock out the system in osteoclasts (the bone-destroying cells), but not osteoblasts (the bone-forming ones). It’s been known for years that PPAR-gamma has effects on the development of the latter cells, which makes sense, because it also affects adipocytes (fat cells), and those two come from the same lineage. But no one’s been able to get a handle on what it does in osteoclasts, until now.

It turns out that without PPAR-gamma, the bones of the mice came out larger and much more dense than in wild-type mice. (That's called osteopetrosis, a word that you don't hear very much compared to its opposite). Examining the tissue confirmed that there seemed to be normal numbers of osteoblasts, but far fewer osteoclasts to reabsorb the bone that was being produced. Does PPAR stimulation do the opposite? Unfortunately, yes – there had already been concern about possible effects on bone formation because of the known effects on osteoblasts, but it turned out that dosing rosiglitazone in mice actually stimulates their osteoclasts. This double mode of action, which was unexpected, speeds up the destruction of bone and at the same time slows down its formation. Not a good combination.

So there’s a real possibility that long-term PPAR-gamma agonist use might lead to osteoporosis in humans. If this is confirmed by studies of human osteoclast activity, that may be it for the glitazones. They seem to have real benefit in the treatment of diabetes, but not with these consequences. Suspicion of cardiovascular trouble, evidence of osteoporosis – diabetic patients have enough problems already.

As I’ve mentioned here before, I think that PPAR biology is a clear example of something that has turned out to be (thus far) too complex for us to deal with. (Want a taste? Try this on for size, and let me assure you that this is a painfully oversimplified diagram). We don’t understand enough of the biology to know what to target, how to target it, and what else might happen when we do. And we've just proven that again. I spent several years working in this field, and I have to say, I feel safer watching it from a distance.

Comments (8) + TrackBacks (0) | Category: Biological News | Diabetes and Obesity | Toxicology

November 11, 2007

A Real Genetic Headscratcher

Email This Entry

Posted by Derek

As you root through genomic sequences - and there are more and more of them to root through these days - you come across some stretches of DNA that hardly seem to vary at all. The hard-core "ultraconserved" parts, first identified in 2004, are absolutely identical between mice, rats, and humans. Our last common ancestor was rather a long time ago (I know, I know - everyone works with some people who seem to be exceptions, but bear with me), so these things are rather well-preserved.
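(To make the definition concrete, here's a toy sketch of how you'd pull such stretches out of a three-way alignment. The 200-base-pair cutoff matches the original definition, but the function itself is purely illustrative – the real analyses work from whole-genome alignments, not tidy gap-free strings:)

```python
# Toy version of "ultraconserved": runs of perfect three-way identity in an
# already-aligned, gap-free stretch of human/mouse/rat sequence.

def ultraconserved_runs(human: str, mouse: str, rat: str, min_len: int = 200):
    """Return (start, end) indices of stretches where all three sequences match."""
    n = min(len(human), len(mouse), len(rat))
    runs, start = [], None
    for i in range(n):
        if human[i] == mouse[i] == rat[i]:
            if start is None:
                start = i                      # a run of perfect identity begins
        else:
            if start is not None and i - start >= min_len:
                runs.append((start, i))        # keep it only if it's long enough
            start = None
    if start is not None and n - start >= min_len:
        runs.append((start, n))
    return runs
```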

Even important enzyme sequences vary a bit among the three species, so what could these pristine stretches (some of which are hundreds of base pairs long) be used for? The assumption, naturally, has been that whatever it is, it must be mighty important, but if we're going to be scientists, we can't just go around assuming that what we think must be right. A team at Lawrence Berkeley and the DOE put things to the test recently by identifying four of the ultraconserved elements that all seem to be located next to critical genes - and deleting them.

The knockout mice turned out to do something very surprising indeed. They were born normally, but then they grew up normally. When they reached adulthood, though, they were completely normal. Exhaustive biochemical and behavioral tests finally uncovered the truth: they're basically indistinguishable from the wild type. Hey, I told you it was surprising. This must have been the last thing that the researchers expected.

Reaction to these results has been a series of raised eyebrows and furrowed foreheads. Deleting any of the known genes near the ultraconserved sequences confirms that they, anyway, are as important as they're billed to be. And these genes show the usual level of difference that you see among the three species. So what's this unchanged, untouchable, but apparently disposable stuff in there with them?

No one knows. And it's a real puzzle, the answer to which is going to be tangled up with a lot of our basic ideas about genes and evolution. To a good first approximation, it's hard to see how (or why) something like this should be going on. So what, exactly, are we missing? Something important? And if so, what else have we missed, too?

Comments (88) + TrackBacks (0) | Category: Biological News

October 29, 2007

What We Don't Know About Enzymes

Email This Entry

Posted by Derek

There was an intriguing paper published earlier this month from Manfred Reetz and co-workers at the Max Planck Institute. It's not only an interesting finding, but a good example of making lemonade from lemons.

They were looking at an enzyme called tHisF, a thermostable beast from a marine microorganism that's normally involved in histidine synthesis. It has an acid/base catalytic site, so Reetz's group, which has long been involved in pushing enzymes to do more than they usually do, was interested in seeing if this one would act as an esterase/hydrolase.

And so it did - not as efficiently as a real esterase, but not too shabby when given some generic nitrophenyl esters to chew on. There was some structure-activity trend at work: the larger the alkyl portion of the ester, the less the enzyme liked it. Given a racemic starting material, it did a good job of resolution, spitting out the R alcohol in strong preference to the S isomer. All just the sort of thing you'd expect from a normal enzyme.

Next, they used the crystal structure of the protein and previous work on the active site to see which amino acids were important for the esterase activity. And here's where the wheels came off. They did a series of amputations to all the active side chains, hacking aspartic acids and cysteines down to plain old alanine. And none of it did a thing. To what was no doubt a room full of shocked expressions, the enzyme kept rolling along exactly as before, even with what were supposed to be its key parts missing.

Further experiments confirmed that the active site actually seems to have nothing at all to do with the hydrolase activity. So what's doing it? They're not sure, but there must be some other non-obvious site that's capable of acting like a completely different enzyme. I'm sure that they're actively searching for it now, probably by working through a list of likely point mutations until they finally hit something that stops the thing.

So how often does this sort of thing happen? Are there other enzymes with "active sites" that no one's ever recognized? If so, do these have any physiological relevance? No one knows yet, but a whole new area of enzymology may have been opened up. I look forward to seeing more publications on this, and I'll enjoy them all the more knowing that they came from a series of frustrating, head-scratching "failed" experiments. Instead of pouring things into the waste can, Reetz and his co-workers stayed the course, and my hat's off to them.

Comments (10) + TrackBacks (0) | Category: Biological News

October 15, 2007

Checking The Numbers on the Alzheimer's Test

Email This Entry

Posted by Derek

The news of a possible diagnostic test for Alzheimer’s disease is very interesting, although there’s always room to wonder about the utility of a diagnosis of a disease for which there is little effective therapy. The sample size for this study is smaller than I’d like to see, but the protein markers that they’re finding seem pretty plausible, and I’m sure that many of them will turn out to have some association with the disease.

But let’s run some numbers. The test was 91% accurate when run on stored blood samples of people who were later checked for development of Alzheimer’s, which compared to the existing techniques is pretty good. Is it good enough for a diagnostic test, though? We’ll concentrate on the younger elderly, who would be most in the market for this test. The NIH estimates that about 5% of people from 65 to 74 have AD. According to the Census Bureau (pdf), we had 17.3 million people between those ages in 2000, and that’s expected to grow to almost 38 million in 2030. Let’s call it 20 million as a nice round number.

What if all 20 million had been tested with this new method? We’ll break that down into the two groups – the 1 million who are really going to get the disease and the 19 million who aren’t. When that latter group gets their results back, 17,290,000 people are going to be told, correctly, that they don’t seem to be on track to get Alzheimer’s. Unfortunately, because of that 91% accuracy rate, 1,710,000 people are going to be told, incorrectly, that they are. You can guess what this will do for their peace of mind. Note, also, that almost twice as many people have just been wrongly told that they’re getting Alzheimer’s than the total number of people who really will.

Meanwhile, the million people who really are in trouble are opening their envelopes, and 910,000 of them are getting the bad news. But 90,000 of them are being told, incorrectly, that they’re in good shape, and are in for a cruel time of it in the coming years.

The people who got the hard news are likely to want to know if that’s real or not, and many of them will take the test again just to be sure. But that’s not going to help; in fact, it’ll confuse things even more. If that whole cohort of 1.7 million people who were wrongly diagnosed as being at risk get re-tested, about 1.556 million of them will get a clean test this time. Now they have a dilemma – they’ve got one up and one down, and which one do you believe? Meanwhile, nearly 154,000 of them will get a second wrong diagnosis, and will be more sure than ever that they’re on the list for Alzheimer’s.

Meanwhile, if that list of 910,000 people who were correctly diagnosed as being at risk get re-tested, 828,000 of them will hear the bad news again and will (correctly) assume that they’re in trouble. But we’ve just added to the mixed-diagnosis crowd, because almost 82,000 people will be incorrectly given a clean result and won’t know what to believe.

I’ll assume that the people who got the clean test the first time will not be motivated to check again. So after two rounds of testing, we have 17.3 million people who’ve been correctly given a clean ticket, and 828,000 who’ve correctly been given the red flag. But we also have 154,000 people who aren’t going to get the disease but have been told twice that they will, 90,000 people who are going to get it but have been told that they aren’t, and over 1.6 million people who have been through a blender and don’t know anything more than when they started.
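
If you want to check that arithmetic yourself, here's a quick back-of-the-envelope script (a minimal sketch in Python; it assumes, as I have throughout, that the single 91% figure applies equally to false positives and false negatives - see the update below for why that may not hold):

accuracy = 0.91
population = 20_000_000                      # younger elderly, rounded
prevalence = 0.05                            # NIH estimate for ages 65 to 74

will_get_ad = int(population * prevalence)   # 1,000,000
wont_get_ad = population - will_get_ad       # 19,000,000

# First round of testing
true_neg  = wont_get_ad * accuracy           # 17,290,000 correctly cleared
false_pos = wont_get_ad * (1 - accuracy)     #  1,710,000 wrongly flagged
true_pos  = will_get_ad * accuracy           #    910,000 correctly flagged
false_neg = will_get_ad * (1 - accuracy)     #     90,000 wrongly cleared

# Second round: only the people flagged the first time go back
flagged_twice_wrongly = false_pos * (1 - accuracy)   # ~154,000
mixed_wrong           = false_pos * accuracy         # ~1,556,000 one flag, one clear
flagged_twice_rightly = true_pos * accuracy          # ~828,000
mixed_right           = true_pos * (1 - accuracy)    # ~82,000 one flag, one clear

print(f"Confused after two rounds: {mixed_wrong + mixed_right:,.0f}")    # ~1.64 million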

Sad but true: 91% is just not good enough for a diagnostic test. And getting back to that key point in the first paragraph, would 100% be enough for a disease that we can't do anything about? Wait for an effective therapy, is my advice, and for a better test.

Update: See the comments for more, because there's more to it than this. For one thing, are the false positive and false negative rates for this test the same? (That'll naturally make a big difference). And how about differential diagnosis, using other tests to rule out similar conditions? On the should-you-know question, what about the financial and estate planning implications of a positive test - shouldn't those be worth something? (And there's another topic that no one's brought up yet: suicide, which you'd have to think would be statistically noticeable. . .)

Comments (21) + TrackBacks (0) | Category: Alzheimer's Disease | Biological News

October 11, 2007

Let Us Now Turn To the Example of Yo' Mama

Email This Entry

Posted by Derek

Now we open the sedate, learned pages of Nature Methods, a fine journal that specializes in new techniques in molecular and chemical biology. In the August issue, the correspondence section features. . .well, a testy response to a paper that appeared last year in Nature Methods.

“Experimental challenge to a ‘rigorous’ BRET analysis of GPCR oligomerization” is the title. If you don’t know the acronyms, never mind – journals like this have acronyms like leopards have spots. The people doing the complaining, Ali Salahpour and Bernard Masri of Duke, are taking issue with a paper from Oxford by John James, Simon Davis, and co-workers. The original paper described a bioluminescence resonance energy transfer (BRET) method to see if G-protein coupled receptors (GPCRs) were associating with each other on cell surfaces. (GPCRs are hugely important signaling systems and drug targets – think serotonin, dopamine, opiates, adrenaline – and it’s become clear in recent years that they can hook up in various unsuspected combinations on the surfaces of cells in vivo).

Salahpour and Masri take strong exception to the Oxford paper’s self-characterization:

“Although the development of new approaches for BRET analysis is commendable, part of the authors’ methodological approach falls short of being ‘rigorous’. . .Some of the pitfalls of their type-1 and type-2 experiments have already been discussed elsewhere (footnote to another complaint about the same work, which also appeared earlier this year in the same journal - DBL). Here we focus on the type-2 experiments and report experimental data to refute some of the results and conclusions presented by James et al.”

That’s about an 8 out of 10 on the scale of nasty scientific language, translating as “You mean well but are lamentably incompetent.” The only way to ratchet things up further is to accuse someone of bad faith or fraud. I won’t go into the technical details of Salahpour and Masri’s complaints; they have to do with the mechanism of BRET, the effect on it of how much GPCR protein is expressed in the cells being studied, and the way James et al. interpreted their results versus standards. The language of these complaints, though, is openly exasperated, full of wording like “unfortunately”, “It seems unlikely”, “we can assume, at best”, “(does) not permit rigorous conclusions to be drawn”, “might be erroneous”, “inappropriate and a misinterpretation”, “This could explain why”, “careful examination also (raises) some concerns”, and so on. After the banderilleros and picadors have done their work in the preceding paragraphs, the communication finishes up with another flash of the sword:

“In summary, we agree with James and colleagues that type-2 experiments are useful and informative. . .Unfortunately, the experimental design proposed in James et al. to perform type-2 experiments seems incorrect and cannot be interpreted. . .”

James and Davis don’t take this with a smile, naturally. The journal gave them a space to reply to the criticisms, as is standard practice, and as they did for the earlier criticism. (At least the editors know that people are reading the papers they accept. . .) They take on many of the Salahpour/Masri points, claiming that the attempted refutations were done under completely inappropriate conditions, among other things. And they finish up with a flourish, too:

"As we have emphasized, we were not the first to attempt quantitative analysis of BRET data. Previously, however, resonance energy transfer theory was misinterpreted (for example, ref. 4) or applied incorrectly (for example, ref. 5). (Note - reference 4 is to a paper by the first people to question their paper earlier this year, and reference 5 is to the work of Salahpour himself, a nice touch - DBL). The only truly novel aspect of our experiments is that we verified our particular implementation of the theory by analyzing a set of very well-characterized. . .control proteins. (Note - "as opposed to you people" - DBL). . . .In this context, the technical concerns of Salahpour and Masri do not seem relevant."

It's probably safe to say that the air has not yet been cleared. I'm not enough of a BRET hand to say who's right here, but it looks like we're all going to have some more chances to make up our minds (and to appreciate the invective along the way).

Comments (21) + TrackBacks (0) | Category: Biological News | Drug Assays | The Scientific Literature

September 6, 2007

More Things Than Are Dreamt Of

Email This Entry

Posted by Derek

It’s useful to be reminded every so often of how much you don’t know. There’s a new paper in PNAS that’ll do that for a number of its readers. The authors report a new protein, one of the iron-sulfur binding ones. There are quite a few of these known already, so this wouldn’t be big news by itself. But this one is the first of its kind to be found in the outer mitochondrial membrane, which makes it a bit more interesting.

It also has a very odd structure – well, odd to us humans, anyway; for all we know, things like this are all over the place and we just haven’t stumbled across one until now. There’s a protein fold here which not only has never been seen in the 650 or so iron-sulfur proteins with solved structures, it’s never been seen in any protein at all. That’s worth a good publication, for sure.

The part that’ll really throw people, though, is that this protein (named mitoNEET, for the amino acids that make up its weird fold) binds a known drug whose target we all thought we already knew. Actos (pioglitazone) turns out to associate with it, which is a very interesting surprise. We already knew the glitazones as PPAR-gamma ligands. We didn’t understand them as PPAR ligands (no one understands them very well, despite many years and many, many scores of millions of dollars), but that was generally accepted as their site of action.

And now there’s another one, which is going to make the pioglitazone story even more complex. Reading between the lines of the paper, I get the strong impression that the authors were fishing for another pioglitazone binding site, using modified versions of the drug to label proteins, and hit the jackpot with this one. (And good for them - that's a hard technique to get to work). There’s been some speculation that the compound might have effects on mitochondria that wouldn’t necessarily be PPAR-mediated, and this is strong circumstantial evidence for it.

What’s more, I can’t think of any other iron-sulfur proteins that are targets of small molecules. Just last week, I was talking about the diversity of binding sites and interactions that we haven’t explored in medicinal chemistry, and here’s an example for you.

This paper raises a pile of questions: what does mitoNEET do? Shuttle iron-sulfur complexes around? (If so, to where, and to what purpose?) Is it involved in diabetes, or other diseases of metabolism? Does pioglitazone modify its activity in vivo, whatever that activity is? How well does it bind the drug, anyway, and what does the structure of that complex look like? Does Avandia (rosiglitazone) bind, too, and if not, why not? Are there other proteins in this family, and do they also have drug interactions that we don’t know about? Ah, we’ll all be employed forever in this business, for as long as people can stand it.

Comments (3) + TrackBacks (0) | Category: Biological News | Diabetes and Obesity

July 17, 2007

Visfatin: Real Or Not?

Email This Entry

Posted by Derek

A commenter on my Proteomics 101 post the other day brought up an important point: that before you can have a chance to figure out what a protein is doing, you have to know that it exists. Finding the darn things is no small job, since you're digging through piles of chemically similar stuff to unearth them. What's more, we can't just ignore 'em: some of the proteins present at the lowest concentrations are also among the most important and powerful.

Nasty arguments can erupt over whether a given protein and its proposed functions even exist. Crockery is flying over one of those right now, an insulin-like protein hormone dubbed "visfatin" by its discoverers in Osaka a couple of years ago. Well, in this case the protein probably exists, but does it do what it's advertised to do? An insulin mimic secreted by fat cells would be worth knowing about, but there doesn't seem to be enough of it present in the blood to do much of anything, given how well it binds to its putative targets. There are also reports that some of the data in the Osaka paper are hard to reproduce.

Complicating things even more is the (apparently well-founded) contention that visfatin is a re-discovery of a protein already known as PBEF, which is identical to another protein named Nampt. (Each "discovering" group assigned their own name, a situation that happens so often in biology that people don't even notice it any more).

The whipped topping on the whole thing is an accusation of misconduct by someone in Japan, which led to an investigation by Osaka University, which has now recommended that the original paper be retracted. Its lead author, Iichiro Shimomura, does not agree, as you might well imagine. The points of contention are many: whether the misconduct was real at all, or whether it describes real events that don't rise to the level of misconduct, or whether the conclusions of the paper are invalidated or not by them, and so on.

An early solution appears unlikely. And we still don't know what exactly visfatin/PBEF/Nampt is doing. Next time you wonder how things are going over in the proteome, consider this one.

Comments (4) + TrackBacks (0) | Category: Biological News | Diabetes and Obesity | The Dark Side

November 6, 2006

It Went Up Instead of Down

Email This Entry

Posted by Derek

One of the things I like most about science is that you really don't know what's going to happen next. That's especially true in the areas where things have just barely settled down. Before that, when a field is new, no one knows what to expect, so in a way there aren't really any surprising results: everything's a surprise. A much more settled area, by contrast, is far less likely to produce surprises, although when one shows up it really stands out. But a field where people are just starting to exhale and think that maybe they've finally figured out what's going on - that has the best combination of high contrast and a real likelihood for craziness.

Here's a perfect example, since I was just expressing some doubts about the immediate commercial potentials of RNA interference the other day. In a paper coming out in PNAS, a group at UCSF was investigating the use of some small double-stranded RNAs, just the sort of thing that can be used for RNAi experiments. But they found (to their great surprise) that their experiments were stimulating the transcription of their targeted genes, rather than shutting them down. Needless to say, this was not what anyone expected, and I'll bet the folks involved repeated these things many, many times before they could trust their own eyes. There are plenty of other people who won't believe it until they've seen it with theirs.

On a molecular biology level, it's hard to say just what's going on. The authors, according to this news item from Science (probably subscriber-only) say that they've found some rules about which genes will be susceptible to the technique and which won't, which will be released soon. (Translation: as soon as they can be reasonably sure that they won't make fools of themselves - this paper took enough nerve as it is).

The Science article includes a good deal of if-this-holds-up language, which is appropriate for such a weird discovery. (Are the editors there wondering why they didn't get a chance to publish the article themselves, or did they have the chance and turn it down?) At any rate, if-it-holds-up this effect will simultaneously complicate the RNAi field a great deal (it was gnarly enough already, thanks) and also open a door to some really unusual experiments. Upregulating genes isn't very easy, and there are no doubt many ideas that have been waiting on a way to do it. There are therapeutic possibilities, too, naturally - but they'll have to wait on the same difficulties as the other RNA therapies.

Anyway, I'm happy to see this. It opens up some completely new biology, and it opens a door to a potential Nobel for the discoverers should everything work out. And it always cheers me up when something totally unexpected flies down like this and lands on the lawn.

Comments (0) + TrackBacks (0) | Category: Biological News

October 18, 2006

Peptides as Texts

Email This Entry

Posted by Derek

There's a curious paper (subscriber-only link) in the latest Nature that's getting some attention, titled "A linguistic model for the rational design of antimicrobial peptides". For non-subscribers, here's a synopsis of the work from the magazine's news site.

A group at MIT headed by Gregory Stephanopoulos has been studying various antimicrobial peptides, which are secreted by all kinds of organisms as antibiotics. Taking the amino acid sequences of several hundred of these and feeding them into a linguistic pattern-analysing program suggested some common features, which they then used to synthesize 42 new unnatural candidates. The hit rate for these was about 50%, which is far, far more than you'd expect if you weren't tuning in to some sort of useful rules.

It's the concept of "peptide grammar" that seems to be the news hook here. But I'm quite puzzled by all the fuss, because looking for homology among protein sequences is one of the basic bioinformatics tools. I have to wonder what the MIT group found with their linguistics program that they wouldn't have found with biology software. What they're doing is good old structure-activity relationship work, the lifeblood of every medicinal chemist. Well, it's perhaps better described as sequence-activity relationships, but sequence is just a code for structure. There's nothing here that any drug company's bioinformatics people wouldn't be able to do for you, as far as I can see.
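
To make the point concrete, here's a toy version of the sort of pattern-counting involved (just a sketch: the peptide sequences below are invented for illustration, and the published methods are considerably more elaborate, but the core operation is tallying short motifs that recur across active sequences):

from collections import Counter

def shared_motifs(peptides, k=3):
    # Count each length-k stretch of residues across a set of peptide sequences,
    # scoring a motif only once per sequence so that long peptides don't dominate.
    counts = Counter()
    for seq in peptides:
        counts.update({seq[i:i + k] for i in range(len(seq) - k + 1)})
    return counts

# Invented "active" sequences, purely for illustration
active_peptides = ["KWKLFKKIGAVLKVL", "GLFKKFAKKFAKFA", "ILPWKWKFKKIEK"]

for motif, n in shared_motifs(active_peptides).most_common(5):
    print(motif, n)

Motifs that keep turning up in the active peptides (and not in the inactive ones) become your design rules - which is sequence-activity analysis by another name.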

So why haven't they? Well, despite the article's mention of a potential 50,000 further peptides of this type, the reason is probably because not many people care. After all, we're talking about small peptides here, of the sort that are typically just awful candidates for real-world drugs. And I'm not just babbling theory here - many people have actually tried for many years now to commercialize various antimicrobial peptides and landed flat on their faces.

You won't see a mention of that history in the Nature news story, unfortunately. They do, to their credit, mention (albeit in the fourth paragraph from the end) that peptides are troublesome development candidates. That's where it also says that there are reports that bacteria can become resistant even to these proteins, which prompts me to remind everyone that bacteria can become resistant to everything short of freshly extruded magma. It's in the very last paragraph of the story, though, that Robert Hancock of UBC in Vancouver says just what I was thinking when I started reading:

(Hancock) questions how different the linguistics technique is from other computational methods used to find similarities between protein sequences. "What's new is the catchy title," he says.

Comments (10) + TrackBacks (0) | Category: Biological News | Drug Development | Infectious Diseases

March 14, 2006

Neowater Replies

Email This Entry

Posted by Derek

I received (some time ago) an answer from Miguel Cizin and the folks at DoCoop, makers of Neowater. (If you haven't seen the first parts of this story, they're here and here). In that last post, I had a number of look-under-the-hood physical chemistry questions about the stuff, in an attempt to figure out if there's anything to it or not. Here they are in order, with the provided answers:

1. How much of Neowater's characteristics can be explained under the usual framework of colligative properties? That is, by how much is the boiling point of Neowater elevated, and by how much is its freezing point depressed?

The company provided some differential scanning and isothermal titration calorimetry data in response to this, which I appreciate. I'm no expert in this area, but to my eye the ITC plots look broadly similar, though with a noticeably longer half-life to thermal equilibrium in the Neowater runs. (It's not noted what substance was being injected in these experiments).

2. Similarly, what's its vapor pressure at STP? Does it show a negative deviation from Raoult's Law (as you'd expect from the descriptions in the patent of Neowater's structure), and is this deviation much greater than expected given the low levels of particulate matter contained? The literature on the DoCoop web site, I should note, mentions that Neowater evaporates more slowly than regular water.

DoCoop replies: "Neowater indeed evaporates more slowly than regular water, since the water molecules are less available as they are attracted to the charged nanoparticles, hence it takes more energy to dislodge them. The difference in the vapor pressure is one of the mechanisms of action that we use to alter the dynamics of reactions to benefit our customers. You are right, there is a difference in the vapor pressure indeed. We will not enter here into the metrics or actual values, since it is proprietary for use by customers, so we focused our answer on the claim itself only, rather than the detail, and hope you understand us." This isn't as complete an answer as I'd wish for - in fact, it doesn't add anything at all to what we've already been told, and I have a hard time believing that a deviation from Raoult's Law is proprietary information. But we'll let that go for now.
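
For reference, the textbook relations behind questions 1 and 2 are nothing exotic (standard colligative behavior and Raoult's Law, with nothing specific to Neowater assumed):

\Delta T_b = K_b\,m   (boiling point elevation)
\Delta T_f = K_f\,m   (freezing point depression)
p = x_{solvent}\,p^{\circ}_{solvent}   (Raoult's Law)

At the low levels of particulate matter in question, the effective molality m is tiny and the solvent mole fraction is essentially 1, so classical colligative effects should be close to unmeasurable. That's exactly why a large, reproducible deviation would be worth reporting in numbers: it would mean that something beyond ordinary solution behavior is going on.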

3. In the same vein, what's the surface tension of Neowater as compared to the water it's produced from? I could imagine it going either way - if large clusters of water molecules are tied up around the nanoparticles, the surface layer of water may not form in as ordered a fashion, leading to lower surface tension. On the other hand, if Neowater is better thought of as a collection of larger polar "balls" of hydrated particles, perhaps the value could end up higher.

Answer: "Exactly as you stated above. This is another mechanism of action in Neowater that we use to the benefit of our customers for the enhancement of their reaction. In Neowater, the dynamic range of surface tension is +15% to -15% around 72 dyn."

Actually, I think that should be dyn/cm, and that value is smack on top of the normal values for water (between 72 and 73 dyn/cm). We're left to wonder what could cause it to vary higher and lower, though, and to wonder which of my explanations was correct. The DoCoop website has a picture of the stuff on a hydrophilic surface, showing a higher surface tension. I should note that if you want lower values, a drop of detergent will do the job nicely.

4. What's the conductivity of Neowater as compared to its untreated form? How does it change in the presence of small amounts of electrolytes as compared to regular water?

Answer: "Neowater's conductivity is like that in RO or distilled water. Neowater has no ions. It will change if (they're added). We are in the process of starting a research project with a NJ-based University on this application for batteries."

5. Have the rates of standard nucleophilic displacement reactions and/or cycloadditions been measured in Neowater? The presence or absence of a polar transition state and the resultant effect on reaction rate would make an interesting test of its properties. (Neowater is stated to be a "more hydrophobic" form of the liquid). Which reminds me: have Neowater's dipole moment and dielectric constant been determined?

Answer: " Neowater is an irregular media from the point of view of nucleophillic and cycloadditions. We did not find the right method to characterize this irregularity. We are open to suggestions because one of our business opportunities is in crystallization of proteins, where this issue is central. We do see irregularities of the nucleophilic behavior in Neowater with our university partners that are developing this application at the Weizman Institute in Israel. Regarding the dielectric constant measurement, there is a change in it in Neowater vs. regular water. We could not conclude yet the correlation b/w the shift in the structure of the "spinnor network" within Neowater if this is what you are trying to understand."

I would think that if you have a system that shows that Neowater is an "irregular" medium, then you'd have a method to begin characterizing it right there. But I'll wait to see if something comes out of the Weizmann work. For cycloadditions, I'd suggest looking at some of the aqueous Diels-Alder work from the 1980s.

And as for my question #6, about whether deuterated Neowater had been prepared, the company indicates that it hasn't done anything in that direction yet, although they are looking into the idea of using Neowater as an MRI contrast agent.

So, where does this leave us? While I appreciate the company taking time to answer my queries, I can't say that I'm all that much more informed compared to what I'd been able to find out from their press releases. That's clearly the way that they'd like to keep it, which is naturally their right from a business standpoint.

But from the scientific end, I have trouble buying into this "It's all proprietary for the use of our customers and the enhancement of shareholder value" explanation. Because if Neowater were really the sort of breakthrough that DoCoop's material makes it sound like, it would be worth a slew of research papers which would give it more scientific credibility. And since the company has already worked to secure its patent rights, such papers would certainly be feasible - desirable, even, considering the publicity that would follow.

And besides, if you want to know about the effects of nanoparticles in water, you can turn to the people who actually do publish their results. Perhaps any rate enhancement in PCR runs with Neowater is due to enhanced thermal conductivity - after all, temperature cycling is an essential part of the technique. How did I come to this conclusion? By reading this paper on the effects of aqueous nanoparticles on PCR reactions. It's a perfectly reasonable paper, and contains, as far as I can see, more data than DoCoop has ever released.

While we're on that subject, here's a site that will tell you so much about the effect of nanoparticles on thermal conductivity that you'll wish you'd never asked. Similarly, if you'd like to know more about the effect that nanoparticles have on water's surface tension, you could go here. If you wanted to learn more about the properties of water confined to nanoscale environments, you'd get a lot more out of this guy or this one than you would out of DoCoop's literature and patent filings, not that that would be very difficult.

So, all in all, I continue to be not very impressed. If Neowater were the kind of wild breakthrough that the company claims it to be, it would be worth more than its current use as a sort of STP-oil-treatment for PCR reactions. The company can, of course, have the last laugh on me over the next few years, and I wish them luck in doing so. But I'm betting that any breakthroughs in the aqueous nanoparticle area will find their way into the scientific literature in a more convincing fashion.

Comments (10) + TrackBacks (0) | Category: Biological News

February 2, 2006

Nanotech Wonder Water?

Email This Entry

Posted by Derek

Genetic Engineering News is sort of an odd publication. Primarily a vehicle for big, glossy color ads, it publishes some articles of its own: guest editorials, roundups of news from conferences and trade shows, that sort of thing. And it also publishes plenty of things that are (that have to be) slightly rewritten press releases - the sort of articles that start off:

"InterCap Corp. and SynaDynaGen say that their research collaboration on biosecurity proteomics through RNA interference and four-dimensional mass spectrometry, now with the great taste of fish, is yielding results that will make customers roll over on their backs and pant. Speaking at the Weaseltech Investor's Conference, company spokescreatures vowed to. . ."

One of these in the December issue, though, is weird enough that you can hear the editorial staff wrestling with their better selves. Phrases like "The company claims. . ." and "Company spokesmen maintain. . ." keep running through the whole article. It's titled "Water-Based Nanotech for the Life Sciences", and profiles a small Israeli company called (oddly) DoCoop. What DoCoop is selling is water.

But not just any water. . .Neowater! (Trademarked, natch). This is "a stable system of highly hydrated, inert nanoparticles", which supposedly have thousands of ordered hydration shells around them. This, the company says, modifies the bulk properties of the water. And what does that buy you?

Well, according to the company (there, I'm doing it, too), it will do pretty much everything except change the cat's litter box for you. It makes reactions run faster, at lower concentrations. It improves all biochemical assays and molecular biology techniques - PCR, RNA interference, ELISAs, you name it. Brief mentions are made of delivering molecules directly into cells with the stuff. It has applications in diagnostic kits, in drug delivery, in protein purification, and Cthulhu only knows what else.

Some of these claims would seem to directly clash with each other. In the space of a few paragraphs, we hear that Neowater behaves "like a strong detergent", but somehow accelerates the growth of bacteria in culture. But at the same time it also prevents the formation of biofilms. And it increases the potency of antibiotics against bacteria, too. How it manages to do these things simultaneously is left, apparently, as an exercise for the reader.

The company claims that it has plenty of customers, and that it's working with several pharmaceutical companies to develop some of these applications. A search through the literature turned up one European molecular biology paper that mentioned using their PCR enhancing kit, so they've sold some Neowater for sure. But I'd like to turn this one over to the readers: have any of you seen this stuff? Know anyone who uses it?

And is everyone else's crank radar pinging as loudly as mine is? The thing is, unless a superior variety has up and evolved on us, cranks don't usually go out and form their own molecular biology reagent companies and place press releases in Genetic Engineering News. I'm profoundly sceptical of the claims this company makes, but I have the feeling that they're sincere in making them. Very odd, very odd indeed.

Comments (31) + TrackBacks (0) | Category: Biological News

January 9, 2006

Stem Cell Disaster

Email This Entry

Posted by Derek

Update: Since the site was down most of Tuesday, I'm leaving this post up another day. Things have only worsened since I put it up, though. . .

I've been withholding my comments on the South Korean stem cell controversy, waiting to see how the story finally settled out. Well, it's good and settled now: the entire enterprise was a fraud. Here's a timeline of the whole sorry business, for people who need a recap. Start at the bottom of that page to experience it in the most painfully realistic way.

My first impulse, in the manner of anyone belonging to a group (biomedical researchers) whose reputations have been dented by such a case, is to point out that, yes, "the system worked". The fraudulent research was discovered and rooted out, papers were retracted, funding lost, brows slapped, all of it. And it hasn't taken that long, either. It's useful to point these things out to people who would like to throw mud on the whole enterprise of science.

See, for example, this blog review of a recent book on scientific fraud. Contrary to its repeated assertions, scientists do indeed realize that fraud happens, because every working scientist has seen it. For starters, most large academic departments have tales of grad students or post-docs whose work could never be trusted. And all of us in research have run into papers in the literature whose techniques won't reproduce, no matter what, and the suspicions naturally grow that they were the product of exaggeration or wishful thinking. The number of possible publication sins alone is huge: yields of chemical reactions get padded, claims of novelty and generality get inflated, invalidating research from other labs doesn't get cited.

It's painful for me to admit it, but this kind of thing goes on all the time. And as long as the labs are staffed with humans, we're not going to be able to get rid of it. The best we can do is discourage it and correct it when we can.

But that takes me to the second standard impulse that strikes in these situations, which is to ask what in the world these people were thinking. That's what's always puzzled me about major scientific fraud. The more interesting your work is, the more fame you stand to gain from your results, the more certain you are to be found out if you fake it. There are obscure areas that you could forge and fake around in for years, and journals in which you could publish your phony results without anyone ever being the wiser. Of course, by definition those won't do you much good - heck, you might as well do real work by that point.

But faking the big ones, the worldwide-headline national-hero stuff - you can't get away with that for long, and Professor Hwang didn't. The closest parallels I can think of are the recent Jan Hendrik Schoen case and the thirty-year-old Summerlin mouse scandal. (These and several other infamous cases are summarized here and here.) I honestly find it hard to believe that there are others of that magnitude that anyone got away with.

I've never been able to imagine the state of mind of someone involved in this kind of thing. There you are, famous for something you've completely made up. In front of you are the cameras and reporters, while behind you, off in the distance, are hundreds of other scientists around the world busily trying to reproduce your amazing results. Every minute, they get closer to finding you out. How can anyone smile for the television crews under such conditions?

It's tempting to speculate about the state of the Korean scientific establishment and the role of Korean culture itself in this latest blowup. But such things have happened everywhere. The Korean factor certainly led to Hwang being an instant national figure with his face on every magazine and a dozen microphones trained on him wherever he turned. But it's not a Korean failing that did him in, it's a human one.

Comments (13) + TrackBacks (1) | Category: Biological News

December 6, 2005

Grand Rounds Today, and Next Week

Email This Entry

Posted by Derek

The medical-blog roundup known as Grand Rounds is up today at Dr. Charles, with a wide selection of good reading.

And this is a good time to announce that I'm going to be hosting the next installment a week from now. Please feel free to send along links to any good blog posts on medical topics - your own, or ones you've come across when you're supposed to be working.

Comments (0) + TrackBacks (0) | Category: Biological News

November 17, 2004

RNAi: The Awkward Age

Email This Entry

Posted by Derek

A notable feature of 21st century molecular biology (so far!) is the emphasis on RNA. I've written before about RNA interference, a hugely popular (and hugely researched) way to silence the expression of proteins in living cells. Wide swaths of academia and industry are now devoted to figuring out all the details of these pathways, key parts of which are built into the cellular machinery. They turn out to regulate gene expression in ways that weren't even thought of before the late 1990s, and I've said for several years now that this field is the most obvious handful of tickets to Stockholm that I've ever seen. (Naturally, there are some worries that the whole field has perhaps been a bit over-promoted. . .)

Shutting off the production of targeted proteins is a wonderful thing, both from the basic research viewpoint and the clinical one. The more control you can have over the process, the better, and RNAi has been extremely promising. But as we're learning more about the system, complications are creeping in. Don't they always. . .

It turns out that the small interfering RNAs that are used, and are supposed to be the most efficacious and the most specific, aren't always what they seem. A disturbing recent study used one targeting luciferase, a firefly protein with no close relatives in the human genome. But applying it to the human-derived HeLa cell line showed effects on over 1800 genes - some of which only showed up at high concentrations, true, but none of these would have shown up at all in the ideal world we might have been living in for a while. There have also been experiments with RNAs that have been deliberately made with slight mismatches for their intended target, and some of them work rather too well.

Finally, as I mentioned about a year ago, there are reports that these small RNAs can set off an interferon response, suggesting that the technique can cause cells to respond as if they're under infectious attack. As you'd imagine, this can also complicate the interpretation of an experiment, especially if you're already targeting something that might interact with any of these pathways (and plenty of things do.)

None of these yellow flags are particularly large, but there are several of them now and probably more waiting to be noticed. (A good brief roundup of the situation can be found in the November issue of Trends in Genetics, for those with access.) Perhaps as we learn more we'll find ways to obviate these problems. If there's one thing for sure, it's that we haven't figured out all the tricks that RNA is capable of. But the companies that are racing to get RNAi therapies into the clinic are watching all this a bit nervously, hoping that they're not going to be those fools that you always hear about rushing in.

Comments (3) + TrackBacks (0) | Category: Biological News

August 17, 2004

Kinases and Their Komplications

Email This Entry

Posted by Derek

I'm going to take off from another comment, this one from Ron, who asks (in reference to the post two days ago): "would it not be fair to say that cellular biochemistry gets even more complicated the more we learn about it?"

It would indeed be fair. I think that as a scientific field matures it goes through several stages. Brute-force collection of facts and observations comes early on, as you'd figure. Then the theorizing starts, with better and better theories being honed by more targeted experiments. This phase can be mighty lengthy, depending on the depth of the field and the number of outstanding problems it contains. A zillion inconsistent semi-trivialities can take a long time to sort out (think of the mathematical proof of the Four-Color Theorem), as can a smaller number of profound headscratchers (like, say, a reconciliation of quantum mechanics with relativity as they deal with gravity.)

If the general principles discovered are powerful enough, things can get simpler to understand. Think of the host of problems that early 20th-century physics had, many of which resolved themselves as applications of quantum mechanics. Chemistry went through something similar earlier, on a smaller scale, with the adoption of the stereochemical principles of van't Hoff. Suddenly, what seemed to be several separate problems turned out to be facets of one explanation: that atoms had regular three-dimensional patterns of bonding to other atoms. (If that sounds too obvious for such emphasis, keep in mind that this notion was fiercely ridiculed and resisted at the time.)

Cell biology is up to its pith helmet in hypotheses, and is nowhere near out of the swamps of fact collection. As in all molecular biology, the sheer number of different systems is making for a real fiesta. Your average cell is a morass of interlocking positive and negative feedback loops, many of which only show up fleetingly, under certain conditions, and in very defined locations. Some general principles have been established, but the number of things that have to be dealt with is still increasing, and I'm not sure when it's going to level out.

For example, the other day a group at Sugen (now Pfizer) published a paper establishing just how many genes there are in mice that code for protein kinase enzymes. By adding phosphoryl groups, these enzymes are extremely important actors in the activation, transport, and modulation of the activities of thousands upon thousands of other proteins, and it turns out that there are exactly 540 of them. (Doubtless there are some variations as they get turned into proteins, but that's how many genes there are.)

Now, that earlier discovery of protein phosphorylation as a signaling mechanism was a huge advance, and it has been appropriately rewarded. And knowing just how many different kinase enzymes there are is a step forward, too. But figuring out all the proteins they interact with, and when, and where, and what happens when they do - well, that's first cousin to hard work.

Comments (0) + TrackBacks (0) | Category: Biological News | In Silico

April 22, 2004

The Vapor Trail I Referred To

Email This Entry

Posted by Derek

I mentioned the other day that not everything in that Stuart Schreiber interview sounded sane to me (although more of it did than I'd expected). The interviewer, Joanna Owens, asks him to expand on a