Corante

About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com. Twitter: Dereklowe

Chemistry and Drug Data:
Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigillata
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

Monthly Archives

July 31, 2013

Evolving Enzymes: Let 'Em Rip

Posted by Derek

Evolutionary and genetic processes fascinate many organic chemists, and with good reason. They've provided us with the greatest set of chemical catalysts we know of: enzymes, which are a working example of molecular-level nanotechnology, right in front of us. A billion years of random tinkering have accomplished a great deal, but (being human) we look at the results and wonder if we couldn't do things a bit differently, with other aims in mind than "survive or die".

This has been a big field over the years, and it's getting bigger all the time. There are companies out there that will try to evolve enzymes for you (here's one of the most famous examples), and many academic labs have tried their hands at it as well. The two main routes are random mutations and structure-based directed changes - and at this point, I think it's safe to say that any successful directed-enzyme project has to take advantage of both. There can be just too many possible changes to let random mutations do all the work for you (20 to the Xth power gets out of hand pretty quickly, and that's just the natural amino acids), and we're usually not smart enough to step in and purposefully tweak things for the better every time.
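To get a sense of why random mutation alone can't cover the space, here's a quick back-of-the-envelope sketch (my own illustration of the 20-to-the-Xth-power arithmetic, not anything from a specific paper):

```python
# Back-of-the-envelope only: with the 20 natural amino acids, the number of
# possible sequences for a stretch of N positions is 20**N, which swamps any
# realistic mutagenesis library long before N reaches the size of an active site.
def sequence_space(n_positions, n_residues=20):
    """Number of distinct sequences for n_positions, each with n_residues choices."""
    return n_residues ** n_positions

for n in (5, 10, 25, 100):
    print(f"{n:>3} positions: {float(sequence_space(n)):.2e} possible sequences")
```

Even a very large library samples a vanishingly small fraction of that space once more than a handful of positions are in play - hence the need to combine random mutation with directed, structure-based changes.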

Here's a new paper that illustrates why the field is so interesting, and so tricky. The team (a collaboration between the University of Washington and the ETH in Zürich) has been trying to design a better retro-aldolase enzyme, with earlier results reported here. That was already quite an advance (15,000x rate enhancement over background), but that's still nowhere near natural enzymes of this class. So they took that species as a starting point and did more random mutations around the active site, with rounds of screening in between, which is how we mere humans have to exert selection pressure. This gave a new variant with another lysine in the active site, which some aldolases have already. Further mutational rounds (error-prone PCR and DNA shuffling) and screening led to a further variant that was over 4000x faster than the original enzyme.

But when the team obtained X-ray structures of this enzyme in complex with an inhibitor, they got a surprise. The active site, which had already changed around quite a bit with the addition of that extra lysine, was now a completely different place. A new substrate-binding pocket had formed, and the new lysine was now the catalytic residue all by itself. The paper proposes that the mechanistic competition between the possible active-site residues was a key factor, and they theorize that many natural enzymes may have evolved through similar paths. But given this, there are other questions:

The dramatic changes observed during RA95 evolution naturally prompt the question of whether generation of a highly active retro-aldolase required a computational design step. Whereas productive evolutionary trajectories might have been initiated from random libraries, recent experiments with the same scaffold demonstrate that chemical instruction conferred by computation greatly increases the probability of identifying catalysts. Although the programmed mechanisms of other computationally designed enzymes have been generally reinforced and refined by directed evolution, the molecular acrobatics observed with RA95 attest to the functional leaps that unanticipated, innovative mutations—here, replacement of Thr83 by lysine—can initiate.

So they're not ready to turn off the software just yet. But you have to wonder - if there were some way to run the random-mutation process more quickly, and reduce the time and effort of the mutation/screening/selection loop, computational design might well end up playing a much smaller role. (See here for more thoughts on this). Enzymes are capable of things that we would never think of ourselves, and we should always give them the chance to surprise us when we can.

Comments (14) + TrackBacks (0) | Category: Chemical Biology | In Silico

July 30, 2013

Apparently, Ads Make Antihistamines Work Better

Posted by Derek

This PNAS paper's title certainly caught my attention: "Advertisements impact the physiological efficacy of a branded drug". The authors, from the University of Chicago, are digging into the business end of the placebo effect. After giving a set of subjects a skin-test panel to common allergens, here's what happened:

We conducted two randomized clinical trials to measure the impact of direct-to-consumer advertising on the objective, physiological effect of Claritin (Merck & Co.), a leading antihistamine drug. A pilot study assessed the efficacy of Claritin across subjects exposed to advertisements for Claritin, advertisements for Zyrtec (McNeil), or control advertisements. . .Among subjects with allergies, the efficacy was the same across the three advertisement conditions, but among subjects without allergies, efficacy was significantly greater in the Claritin advertisements condition than in the Zyrtec advertisements condition.

The heterogeneity of the treatment effect based on the allergy status was discovered only ex post facto, so we conducted a follow-up trial to replicate these initial findings. To maximize statistical power, the follow-up trial used a larger sample, assigned subjects only to Claritin advertisements or Zyrtec advertisements, and block-randomized subjects based on their allergy status. In addition, we elicited subjects’ beliefs about the efficacy of Claritin to examine whether any difference in impact of the advertisements across the two subpopulations is driven by the relative malleability of their beliefs. . .

This reminds me of the various experiences that people have had with blind taste testing of wines. In the follow-up trial, they used a histamine challenge in the skin test, which will give a red reaction no matter what you're allergic to. The effect repeated:

In the subpopulation without allergies, we find that the efficacy of Claritin at 120 min is substantially higher for subjects who were exposed to Claritin advertisements. Claritin advertisements have no significant impact on efficacy 60 min after the drug is taken. This pattern is consistent with the observed changes in the subjects’ beliefs. Exposure to Claritin advertisements in this subpopulation greatly increases the belief in the efficacy of Claritin. At the same time, the realized efficacy of Claritin at 120 min (but not at 60 min) is strongly correlated with the change in beliefs.
In the subpopulation with allergies, we find no relationship between exposure to Claritin advertisements and the change in beliefs. Moreover, the advertisements have no impact on the efficacy of Claritin at 120 min. We do find a curious negative impact of Claritin advertisements on Claritin’s efficacy at 60 min in this subpopulation, but this effect cannot be mediated by the (nonexistent) impact of advertisements on beliefs.

Oh, boy. I truly wonder why this experiment hasn't been run before, but look for a lot of follow-ups now that it's out. As the authors themselves detail, there are several unanswered questions that could be addressed: does seeing the Claritin advertisements make the Claritin work better, or does seeing the Zyrtec ads make Claritin work more poorly? Why does this seem to work only in people without specific allergies in the first place? What's the physiological pathway at work here, in any case?

Here's the big one: does direct-to-consumer advertising actually increase the efficacy of the drugs it advertises? That is, does the effect shown in this experiment translate to real-world conditions? For how many compounds is this the case, and in what therapeutic classes is the effect most likely to occur? Is there an actual economic or public health benefit to this effect, should it prove to be robust? If so, how large is it compared to the money spent on the advertising itself? And if people internalize the idea that advertisements make a drug work better, will advertisements continue to do that at all?

Comments (20) + TrackBacks (0) | Category: Business and Markets | Clinical Trials

July 29, 2013

Pfizer Rearranges

Posted by Derek

Ian Read at Pfizer has announced that the company will be divided into three parts. Here's the press release, and let's see what sense we can make of it:

Today, Pfizer is announcing plans to move forward to internally separate its commercial operations into three business segments, two of which will include Innovative business lines and a third which will include the Value business line. . .

One of the Innovative business segments. . .will generally include products across multiple therapeutic areas that are expected to have market exclusivity beyond 2015. The therapeutic areas include Inflammation and Immunology, CV/Metabolic, Neuroscience and Pain, Rare Diseases and Women's/Men's Health. . .the other Innovative business segment will include Vaccines, Oncology and Consumer Healthcare. . .The Value business segment. . .will include products that generate strong, consistent cash flow, and will be positioned to provide patients access to effective, lower-cost, high-value treatments. In addition to products that have lost market exclusivity, it will generally include mature, patent-protected products that are expected to lose exclusivity through 2015 in most major markets, biosimilars and current and future established products collaborations. . .

I'm not at all sure that I understand this yet. I can see why the "Value" business segment exists separately, although I think that it's unfortunately named. (You can either get the impression that the other two don't have value, or make the connection with "cheap/generic" as in some store's "Value Line" of products). But I'm not getting the distinction between the other two so well. It's not broken down by biologics/small molecules, or by specialty marketing/wider market, not from what I can see. And putting vaccines, oncology, and consumer health into one bunch sounds like a random draw of tiles out of a bag.

No doubt there will be many, many more explanations to come, and I look forward to seeing how many of them are coherent. For now, it looks like more uncertainty and disruption, which is not quite what Pfizer seems to need.

Note: for those of you wondering where the obvious Latin joke is, Chemjobber already got the Julius Caesar quote off on Twitter!

Comments (49) + TrackBacks (0) | Category: Business and Markets

More Whitesides on Ligand Binding

Posted by Derek

George Whitesides and his lab have another paper out on the details of how ligands bind to proteins. They're still using the favorite model enzyme of all time (carbonic anhydrase), the fruit fly and nematode of the protein world. Last time around, they compared a series of ligands with analogs carrying an extra phenyl ring in their structures. The benzo-ligands had increased affinity, and this seemed to be mostly an enthalpic effect. After a good deal of calorimetry, etc., they concluded that the balancing act between enthalpy and entropy they saw across the series was different for ligand binding than it was for logP partitioning, and that means that it doesn't really match up with the accepted definition of a "hydrophobic effect".

In this study, they're looking at fluorinated analogs of the same compounds to see what that might do to the binding process. That makes the whole thing interesting for a medicinal chemist, because we make an awful lot of fluorinated analogs. You can start some interesting discussions about whether these are more hydrophobic than their non-F analogs, though, and this paper lands right in the middle of that issue.

The first result was that the fluorinated analogs bound to the enzyme (in their X-ray structures) with almost identical geometry. That makes the rest of the discussion easier to draw conclusions from (and more relevant). It's worth remembering, though, that very small changes can still add up. There was a bit of a shift in the binding pocket, actually, which they attribute to an unfavorable interaction between the fluorines and the carbonyl of a threonine residue. But the carbonic anhydrase pocket is pretty accommodating - the overall affinity of the compounds did not really change. That led to this conclusion:

Values of ΔG°bind, combined with an overall conserved binding geometry of each set of benzo- and fluorobenzo-extended ligands suggest that binding depends on a fine balance of interactions between HCA, the ligand, and molecules of water filling the pocket and surrounding the ligand, and that a simple analysis of interactions between the protein and ligand (Figure 1E) is insufficient to understand (or more importantly, predict) the free energy of binding.

But although the overall free energy didn't change, the enthalpic and entropic components did (but arrived at the same place, another example to add to the long list of systems that do this). The differences seem to be in the Coulombic interaction with the binding pocket (worse enthalpy term - is that what shifted the structure over a bit in the X-ray?) and changes in energy of solvation as the ligand binds (better entropy term). Matched pairs of compounds didn't really show a difference in how many waters they displaced from the binding site.
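For reference, the bookkeeping behind that kind of compensation is just the standard free-energy decomposition (generic thermodynamics, not specific numbers from the Whitesides paper):

```latex
% Illustrative only - the usual decomposition of binding free energy:
\Delta G_{\mathrm{bind}} = \Delta H_{\mathrm{bind}} - T\,\Delta S_{\mathrm{bind}}
% Enthalpy-entropy compensation between a matched pair of ligands means
\Delta\Delta H \approx T\,\Delta\Delta S
\quad\Longrightarrow\quad
\Delta\Delta G \approx 0
% so the measured affinities barely move even when the underlying terms shift a lot.
```

That's why the calorimetry matters: the binding constant alone, which only reports ΔG, would have hidden the whole story.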

So the take-home is that the hydrophobic effect is not all about releasing waters from protein binding surfaces, as has been proposed by some. It's a mixture of stuff, and especially depends on the structure of the water in the binding pocket and around the ligands, and the changes in these as the compounds leave bulk solvent and find their way into the binding site.

That makes things tricky for many compounds. Hydrophobic effects seem to be a big part of the binding energy of a lot of drug molecules (despite various efforts to cut back on this), and these Whitesides studies would say that modeling and predicting these energetic changes are going to be hard. Computationally, we'd have an easier time figuring out direct interactions between the protein and the ligand, the way we do with enthalpic interactions like hydrogen bonds. Keeping track of all those water molecules is more painful - but necessary.

Comments (10) + TrackBacks (0) | Category: Drug Assays | In Silico

July 26, 2013

Instrument Nostalgia

Posted by Derek

Andre the Chemist is talking Lab Instrument Nostalgia at his blog. I know what he means, but mostly, when I think of old equipment, I'm just glad that I'm not using it any more. I remember, for example, the JEOL NMR machines with the blue screen and light pen, and a water-cooled 80 MHz NMR made by IBM, of all people. But if I saw either of them today, I would react with a sort of interested horror.

Update: a little searching around brought me this picture of the IBM machine. Check out the cool 1980 tech!

Comments (60) + TrackBacks (0) | Category: Life in the Drug Labs

How Rapamycin Extends Lifespan: Not By Slowing Down Aging

Posted by Derek

A few years ago, there came the interesting news that rapamycin looked as if it prolonged lifespan in mice. That result is robust; it's been replicated. Now a large multicenter effort in Germany has looked closely at this effect, and they have many more details about what's going on.

The big question is: does rapamycin extend lifespan through some general effect on aging, or does it work through a non-aging mechanism (by perhaps suppressing tumor formation)? Now, many people wouldn't find that much of a distinction - would you like a drug that makes you age more slowly, or would you like one that keeps you from getting cancer? The answer would probably be "Yes". But it's a question that very much matters biochemically.

And it turns out that it's the latter. This new paper does a very careful examination of many phenotypes of aging, on both whole-animal and tissue levels, and finds that rapamycin treatment does not really seem to affect age-related changes. What changes they did see on rapamycin treatment were also present in young mice as well as older ones, making them less likely to be an underlying cause of the effect. They now believe that the compound's effect on lifespan is entirely, or almost entirely, due to the lower rate of fatal neoplasms.

Comments (8) + TrackBacks (0) | Category: Aging and Lifespan

More Behind-the-Scenes Maneuvering. How Wonderful.

Posted by Derek

This is exactly the kind of headline the drug industry does not need. Via FierceBiotech, here's a story in The Guardian on the recent efforts to get companies to disclose more about the clinical trial results for investigational drugs. GSK is the company that seems to have done the most in this regard, but the European Medicines Agency (EMA) is proposing mandatory disclosure of trial results into a public database. That's a lot further than most companies are willing to go - so what to do?

The strategy was drawn up by two large trade groups, the Pharmaceutical Research and Manufacturers of America (PhRMA) and the European Federation of Pharmaceutical Industries and Associations (EFPIA), and outlined in a memo to senior industry figures this month, according to an email seen by the Guardian.

The memo, from Richard Bergström, director general of EFPIA, went to directors and legal counsel at Roche, Merck, Pfizer, GSK, AstraZeneca, Eli Lilly, Novartis and many smaller companies. It was leaked by a drugs company employee.

The email describes a four-pronged campaign that starts with "mobilising patient groups to express concern about the risk to public health by non-scientific re-use of data". Translated, that means patient groups go into bat for the industry by raising fears that if full results from drug trials are published, the information might be misinterpreted and cause a health scare.

That's what. Other parts of the strategy include "discussions with scientific associations" about the risks of data sharing and getting other companies in other industries that might be affected by similar proposals to lobby against this as well. None of this is to be done, it seems, under the banner of "Here's why the drug industry opposes this idea". It's all a spontaneous upwelling.

Now, I don't want to seem too shocked: this sort of thing is done all the time in politics. Every time some big regulatory or legislative idea comes along that might cramp some large group's style, you'll see all kinds of organizations pop up with serious-sounding names: "Public Coalition For XYZ" "United Citizens For QRS" and so on. Use of these "instant grassroots" fronts has earned the term "astroturfing" (which also means that any time some actual group of people comes together for real, their political opponents will always accuse them of being an astroturfed gang of shills).

Some of the patient advocacy groups the Guardian talks about are probably in this category. But many of them are real organizations that have been around for some time. There's an evolutionary dance going on, though: while the advocacy groups want to get enough influence with the drug companies to steer their decisions, the drug companies want to get enough influence with the advocacy groups to steer theirs, for just the reasons we're seeing now. And in that second half of the process, the pharma industry has a powerful offer to make: we'll fund you. At that point, every advocacy group (in any industry) has some big decisions to make about what they're trying to do and how best to do it. Will taking the money compromise them? Or will that be outweighed by what they can do with the funding?

But just because this is a common practice doesn't mean that it's right. Or a good idea. Or, at the very least, the sort of thing that the industry should be seen to be doing. Secret memos detailing a behind-the-scenes campaign of influence to avoid disclosing data? The people at PhRMA and EFPIA should apply a simple test to ideas like this: if it sounds like a bad movie plot, if it sounds like something made up by people who hate you. . .maybe it's not such a good plan.

Update: here's more on an effort to pull out unpublished clinical trial data. "Publish or be published" is their motto. The editors of the British Medical Journal and PLoS Medicine have endorsed the idea.

Comments (12) + TrackBacks (0) | Category: Clinical Trials | Why Everyone Loves Us

July 25, 2013

Ben Cravatt At The Challenges In Chemical Biology Conference

Posted by Derek

Ben Cravatt is talking about his work on activity-based protein profiling of serine hydrolase enzymes. That's quite a class to work on - as he says, up to 2% of all the proteins in the body fall into this group, but only half of them have had even the most cursory bit of characterization. Even among the "known" ones, most of their activities are still dark, and only 10% of them have useful pharmacological tools.

He's detailed a compound (PF-3845) that Pfizer found as a screening hit for FAAH, which although it looked benign, turned out to be a covalent inhibitor due to a reactive arylurea. Pfizer, he says, backed off when this mechanism was uncovered - they weren't ready at the time for covalency, but he says that they've loosened up since then. Studying the compound in various tissues, including the brain, showed that it was extremely selective for FAAH.

Another reactive compound, JZL184, is an inhibitor of monoacylglycerol hydrolase (MAGL). Turns out that its carbamate group also reacts with FAAH, but there's a 300-fold window in the potency. The problem is, that's not enough. In mouse models, hitting both enzymes at the same time leads to behavioral problems. Changing the leaving group to a slightly less reactive (and nonaromatic) hexafluoroisopropanol, though, made the compound selective again. I found this quite interesting - most of the time, you'd think that 300x is plenty of room, but apparently not. That doesn't make things any easier, does it?

In response to a question (from me), he says that covalency is what makes this tricky. The half-life of the brain enzymes is some 12 to 14 hours, so by the time the next once-a-day dose comes in, 20 or 30% of the enzyme is still shut down, and things get out of hand pretty soon. For a covalent mechanism, he recommends 2000-fold or 5000-fold. On the other hand, he says that when they've had a serine hydrolase-targeted compound, they've never seen it react outside that class (targeting cysteine residues, though, is a very different story). And the covalent mechanism gives you some unique opportunities - for example, deliberately engineering a short half-life, because that might be all you need.
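For what it's worth, that 20-to-30% figure is just first-order protein-resynthesis arithmetic. Here's a quick sketch of it (my own illustration, assuming complete inactivation at the time of dosing and using the 12-to-14-hour half-life quoted in the talk):

```python
# Illustrative only: if a covalent inhibitor shuts down essentially all of the
# enzyme and active protein recovers by first-order turnover/resynthesis, the
# fraction still inhibited t hours later is 0.5 ** (t / half_life).
def fraction_still_inhibited(hours_since_dose, enzyme_half_life_h):
    return 0.5 ** (hours_since_dose / enzyme_half_life_h)

for t_half in (12, 14):
    f = fraction_still_inhibited(24, t_half)
    print(f"enzyme half-life {t_half} h: {f:.0%} still shut down at the next daily dose")
```

Run the numbers and you get roughly 25 to 30% of the enzyme still covalently inactivated at the 24-hour mark, which is exactly the accumulation problem he's describing.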

Comments (8) + TrackBacks (0) | Category: Chemical Biology | The Central Nervous System

Kurt Deshayes At The Challenges in Chemical Biology Conference

Posted by Derek

Kurt Deshayes of Genentech has been speaking at the Challenges in Chemical Biology meeting, on protein-protein interaction inhibitor work. And he's raised a number of issues that I think we in drug discovery are going to have to deal with. For one thing, given the size of PPI clinical molecules like ABT-199, what does that tell us about what makes an orally available molecule? (And what does that tell us about what we think we know about the subject?) You'd think that many (most?) protein-protein inhibitors will be on the large side, and if you were to be doctrinaire about biophysical properties, you wouldn't go there at all. But it can be done - the question is, how often? And how do you increase your chances of success? I don't think that anyone doubts that more molecules with molecular weights of 1000 will have PK trouble than those with molecular weights of 300. So how do you improve the odds?

Another point he emphasized is that Genentech's work on XIAP led them to activities that they never would have guessed up front. The system, he points out, is just too complicated to make useful predictions. You have to go in and perturb it and see what happens (and small molecules are a great way to do that). I'd say that this same principle applies to most everything in biochemistry: get in and mess with the system, and let it tell you what's going on.

Comments (7) + TrackBacks (0) | Category: Chemical Biology

DNA Can Be Messed With More Than You'd Think

Posted by Derek

Ali Tavassoli has just given a very interesting talk at the Challenges in Chemical Biology conference on his SICLOPPS method for generating huge numbers of cyclic peptides to screen for inhibitors of protein-protein interactions. I'll do a post in detail on that soon; it's one of those topics I've been wanting to tackle. His lab is applying this to a wide range of PPI systems.

But he had a neat update on another topic, as well. His group has made triazole-linked DNA sequences, and investigated how they behave in bacteria. He now reports that these things are biocompatible in mammalian cells (MCF-7).

This opens up some very interesting artificial-gene ideas, and I look forward to seeing what people can make of it. The extent to which DNA can be modified by things like triazole linkages is remarkable (see here and here for other examples). What else is possible?

Comments (3) + TrackBacks (0) | Category: Chemical Biology

Biogen Idec Goes Open-Office

Posted by Derek

Here's a new development in the office/lab architecture topic, which has been the subject of lively discussion around here over the years. Biogen Idec has been putting up a new building (I've been following its progress as I go past it), and they're getting ready to move in. According to the Boston Globe, the entire thing is a completely office-less and cubicle-less space.

Building 9 has no private offices, just individually designed workstations called “I spaces” and common “huddle rooms” for private phone calls or spontaneous meetings. Each floor has two “walk stations” where employees can work while walking on treadmills. The company has scrapped telephone landlines for Building 9 employees, who are issued laptops and headsets.

“This whole idea of no offices is a little controversial,” admitted chief executive George Scangos. “It’s a new way of working. The idea is to foster more collaboration. People can talk to each other now. A lot of ideas can come out of these informal discussions.”

. . .But will some Biogen Idec recruits be pining for their own private offices?

“There may be some people who say, ‘I don’t want this, I want an office,’ ” Scangos acknowledged. After pausing, he said quietly, “Then they don’t come here.”

Problem is, like all other big-culture-change ideas, it takes years before you find out if it's working or not. But Biogen seems to be very big on the idea, and it'll be quite interesting to hear reports about how it's working (or not).

Thanks to Lisa Jarvis at C&E News for the tip, via Twitter.

Comments (49) + TrackBacks (0) | Category: Life in the Drug Labs

Kevan Shokat At The Challenges in Chemical Biology Conference

Posted by Derek

Kevan Shokat is now talking about his lab's work on using Drosophila models for kinase inhibitor discovery in oncology. I always like hearing about this sort of thing; very small living models have a lot of appeal for drug discovery.

You'd think that screening in fruit flies would be problematic for understanding human efficacy, but if you pick your targets carefully, you can get it to work. In Shokat's case, he's looking at a kinase called Ret, which is a target in thyroid cancer and is quite highly conserved across species. They set up a screen where active compounds would rescue a lethal phenotype (which gives you a nice high signal-to-noise), and screened about a thousand likely kinase inhibitor molecules.

Here's the paper that discusses much of what Shokat's group found. It turned out that Ret kinase inhibition alone was not the answer - closely related compounds with very similar Ret activity had totally different phenotypes in the flies. The key was realizing that some of them were hitting and missing other kinases in the pathways (specifically Raf and TOR) that could cancel out (or enhance) the effects. This was a very nice job of direct discovery of the right sort of kinase fingerprint needed for a desired effect. We need more tiny critters for screens like these.

Comments (6) + TrackBacks (0) | Category: Cancer | Chemical Biology

Those Fortunate Onyx Option Traders

Posted by Derek

Surprisingly, two people have come forward saying that they were among the people who bought options in Onyx Pharmaceuticals just before Amgen's bid for the company. This wildly profitable trade has attracted plenty of regulatory attention, and the SEC has already filed a civil lawsuit (and is hunting for defendants).

Dhia Jafar and Omar Nabulsi, both of Dubai, said a court-ordered freeze should be lifted on the $2.53 million profit that they made lawfully from buying Onyx call options in the last week of June, according to filings late Tuesday in U.S. District Court in Manhattan.

The defendants said that when they bought the call options, they had no material, non-public information that biotechnology company Amgen Inc was trying to buy its smaller rival for $10 billion, a hefty premium at the time.

Who knows? People do get lucky. And the fact that these two have come forward to defend their trades is unusual, and suggests that they think that they have a case to make. But I'll bet that not everyone who did that trade will be persuasive about their reasons.

Comments (4) + TrackBacks (0) | Category: Business and Markets

July 24, 2013

GSK Scandal Info

Posted by Derek

Several comments to the posts on the GSK China problems seem quite. . .knowledgeable. This has inspired a journalist to get in touch with me, but I've told her that (since these comments are anonymous) I have no way of verifying their contents or getting in touch with the people who left them. She is, however, interested in hearing from people with knowledge of the situation, and has left a comment to that effect on the most recent post. I'm standing out of the way on this one; I merely note that the request is out there. Readers and commenters are free to use their own best judgment about what, if anything, to do about it.

Comments (3) + TrackBacks (0) | Category: The Dark Side

Udo Oppermann At The Challenges in Chemical Biology Conference

Posted by Derek

Now Udo Oppermann is talking about histone modifications, which takes us into epigenetics. Whatever it is, epigenetics seems to be a big topic at this meeting - there are several talks and many posters addressing this area.

His efforts at Oxford and the Structural Genomics Consortium are towards generating chemical tools for all the histone-modifying enzymes (methylases/demethylases, acetylases/deacetylases, and so on). That covers a lot of ground, and a number of different mechanisms. To make things harder, they're going for tens of nanomolar in potency and high selectivity - but if these compounds are going to be really useful, that's the profile that they'll need.

One of the things that's coming up as these compounds become available is that these enzymes aren't necessarily confined to histones. Why shouldn't lysines, etc., on other proteins also be targets for regulation? Studies are just getting started on this, and it could well be that there are whole signaling networks out there that we haven't really appreciated.

Comments (0) + TrackBacks (0) | Category: Chemical Biology

Stuart Schreiber at the Challenges in Chemical Biology Conference

Posted by Derek

I'm listening to Stuart Schreiber make his case for diversity-oriented synthesis (DOS) as a way to interrogate biochemistry. I've written about this idea a number of times here, but I'm always glad to hear the pitch right from the source.

Schreiber's team has about 100,000 compounds from DOS now, all of which are searchable at PubChem. He says that they have about 15 mg of each of them in the archives, which is a pretty solid collection. They've been trying to maximize the biochemical diversity of their screening (see here and here for examples), and they're also (as noted here) building up a collection of fragments, which he says will be used for high-concentration screening.

He's also updating some efforts with the Gates Foundation to do cell-based antimalarial screening with the DOS compounds. They have 468 compounds that they're now concentrating on, and checking these against resistant strains indicates that some of them may well be working through unusual mechanisms (others, of course, are apparently hitting the known ones). He's showing structures, and they are very DOSsy indeed - macrocycles, spiro rings, chirality all over. But since these assays are done in cells, some large hoops have already been jumped through.

He's also talking about the Broad Institute's efforts to profile small-molecule behavior in numerous tumor cell lines. Here's a new public portal site on this, and there's apparently a paper accepted at Cell on it as well. They have hundreds of cell lines, from all sorts of sources, and are testing those against an "informer set" of small-molecule probes and known drugs. They're trying to make this a collection of very selective compounds, targeting a wide variety of different targets throughout the cell. There are kinase inhibitors, epigenetic compounds, and a long list of known oncology candidates, as well as many other compounds that don't hit obvious cancer targets.

They're finding out a lot of interesting things about target ID with this set. Schreiber says that this work has made him more interested in gene expression profiles than in mutations per se. Here, he says, is an example of what he's talking about. Another example is the recent report of the natural product austocystin, which seems to be activated by CYP metabolism. The Broad platform has identified CYP2J2 as the likely candidate.

There's an awful lot of work on these slides (and an awful lot of funding is apparent, too). I think that the "Cancer Therapeutics Response Portal" mentioned above is well worth checking out - I'll be rooting through it after the meeting.

Comments (23) + TrackBacks (0) | Category: Cancer | Chemical Biology | Infectious Diseases

More Details on T-Cell Leukemia Therapy

Posted by Derek

There's an excellent overview at Science of the work of David Porter and Carl June at the University of Pennsylvania on T-cell-based cancer therapy. It turns out that when the dramatic reports came out on their first three patients, the team was out of funding and trying to see if they could get someone interested. They did:

. . .Porter and June weighed their next step. They were itching to test the cell therapy in more people with leukemia, and to do that they needed money that they didn’t have. “We basically decided that we would just publish with three patients,” June says. Getting the word out, he hoped, could shift the dynamic in their favor. Porter was game to try, but skeptical that any reputable journal would accept a paper with an n of 3.

He turned out to be wrong. The New England Journal of Medicine welcomed a report about Olson and his mouse dose of T cells. Science Translational Medicine, Science’s sister journal, snapped up a manuscript detailing all three patients. The papers were published simultaneously on 10 August 2011. . .Porter was en route to vacation in western Maryland with his family when the embargo lifted. His phone started ringing. “I was in the car for 8 hours that day,” he says. “I spent 8 hours straight on my phone, answering e-mail, answering phone calls. It was a story that took us all by surprise. It kind of went viral.” June fielded 5000 requests from patients and their families for the therapy. Eight hundred media outlets worldwide covered the story.

And the funding reappeared, as well it might. Now the problem is turning this into something that can be used routinely, and that is nontrivial, as we technical types say. T-cell therapy is patient-specific. You don't just start treating everyone with injections out of the vials that you keep in the fridge - every patient is a new experiment, and the process starts from scratch. That means that many sources of error and variability that are ironed out with a traditional drug therapy are still going to be present, every time, for every person, and it also means that the cost is going to be high. But it may well be worth every bit of the trouble and expense.

The article gives a good look at how hard it is for a discovery like this to be born. The first person to try modifying T cells as an anticancer agent was probably Zelig Eshhar at the Weizmann Institute, back in the 1980s. Then a few other labs picked up the idea, notably Michel Sadelain at Sloan-Kettering, Steven Rosenberg at NCI, and Malcolm Brenner at Baylor, but technical difficulties slowed things down at every turn. Isolating the T cells reproducibly, inserting new genes into them, figuring out what genes to insert, getting everything successfully back into a patient - each of these steps took years of work and frustration.

Success came as everyone narrowed down on the CD19 protein on the surface of B cells. Those were attractive targets, because you can actually survive without them - which cleared a key hurdle, because once you unleash the T cells, they're probably going to kill off everything they're targeted for. It turns out that the CD19 marker is basically universal in B-cell leukemias, so this looked like the best target on several grounds. There were actually four other trials (using very similar approaches) running at other centers when Porter and June got going.

But the combination of stimulatory signals and the choice of vector in the Penn trials set off the extraordinary clinical effects. There was no way to know this - in fact, some other approaches looked a bit more promising. But that's clinical research, and that's oncology, for sure.

Unfortunately, but predictably, there have been legal problems. St. Jude and Penn are involved in lawsuits about prior research agreements, and whether the current therapies are covered under them. I assume that this will be worked out, to the enrichment of a phalanx of lawyers, but it's unfortunate. It doesn't seem to be slowing anyone down much, though, which is the good news. Trials are underway all over the place on variations of this idea, and the Penn group is about as busy as they could possibly be:

Still, physicians like Porter and Grupp are mindful that this isn’t life-changing for everyone. “When I’m doing informed consent with these families, the first thing I say is, ‘Forget everything you’ve read about this,’ ” Grupp says. “Nothing could possibly be as promising as the various articles about this make it seem.” Only four people, including Emily, have been followed for more than a year. A looming question is whether CAR therapy can work in solid tumors, and June and others are opening clinical trials to try and find out.

Nearly 3 years after the summer that changed everything, the Penn group is still working flat out to keep up: enrolling as many patients on the trials as they can, working with drug regulators to discuss how best to study the cells with an eye toward approval, collaborating with Novartis to train their employees and streamline the cell-generating process.

This all should be seen in a larger context of immunotherapy, too. People have been trying to recruit the immune system for years in the fight against tumor cells, with mixed success. But we may be just on the verge of knowing enough about what we're doing to get more of these to work. At this point, it would not surprise me if immune system approaches become the dominant form of treatment for several types of cancer over the next 25 years. The next few years will tell us.

Comments (7) + TrackBacks (0) | Category: Cancer | Clinical Trials

July 23, 2013

A Chemical Biology Conference

Posted by Derek

I wanted to mention that I'll be attending this conference ("Challenges in Chemical Biology") over the next few days, and hope to blog some from the sessions. It's not much of a trip - MIT is just down the street from my office - but it looks to be a good meeting. Any readers who might also be attending, feel free to get in touch!

Comments (1) + TrackBacks (0) | Category: Blog Housekeeping

Resveratrol: What's It Do For Mitochondria?

Posted by Derek

I've been meaning to blog about this new paper in PLOS Biology on resveratrol's effects on mitochondria. It's suggesting that the results previously reported in this area cannot be reproduced, namely the idea that resveratrol increases mitochondrial biogenesis and running endurance. In fact, says this new paper, the whole mechanistic story advanced in this field (resveratrol activates SIRT1, which activates the coactivator PGC1, which cranks up the mitochondria) is wrong. SIRT1 has, they say, the opposite effect: it decreases PGC1 activity, and downregulates mitochondria.

That's an interesting dispute, and leads to all kinds of questions about who's wrong (because someone certainly appears to be). But there's another issue peculiar to this new paper. It now says that there are no reader comments, but for a couple of days there was one, which went into detail about how various Western blots appeared to have been performed sloppily and with confusing control lanes. I have no idea how well substantiated these objections were, and I have no idea why they have disappeared from the paper. It's all quite peculiar.

Comments (15) + TrackBacks (0) | Category: Aging and Lifespan

One GSK China Scandal Blends Into Another

Posted by Derek

According to the New York Times, the problems with GSK's China operations have been going on for a while. It's worth distinguishing two types of trouble, though: there's the bribery scandal, where the company's representatives have been paying off people up and down the Chinese health system, and there's the scientific scandal at the Shanghai R&D site, which has led to a very public retraction and dismissal of employees. I make this distinction because the research end and the commercial end of a given drug company are usually quite far apart from each other; you have to go very high up the chain to find someone who's in charge of both.

What the Times has bears on the R&D problems, and it's not good. They've obtained a confidential document dated November 2011:

Executives at the British drug maker GlaxoSmithKline were warned nearly two years ago about critical problems with the way the company conducted research at its drug development center in China, exposing it to potential financial risk and regulatory action, an internal audit found. . .

Auditors found that researchers did not report the results of animal studies in a drug that was already being tested in humans, a breach that one medical ethicist described as a “mortal sin” in the world of drug research. They also concluded that workers at the research center did not properly monitor clinical trials and paid hospitals in ways that could be seen as bribery.

That last part refers to a practice of paying clinical trial coordinators a flat fee for their services, regardless of how many people were enrolled at their site. This could be a way of paying someone for supposedly doing a full-time job when they're actually doing nothing of the kind. And that, I have to say, sort of mixes the paint together for all these stories: if even the clinical development group was paying people off, where does it end? Now we have a scientific scandal, a bribery scandal, and a scientific bribery scandal - if this goes on, I'm going to have to make a chart to keep it all straight.

I've been saying unkind and cynical things about the Chinese government while writing about the bribery scandal, and I don't plan on taking any of that back. But there are unkind things to say about GlaxoSmithKline, too. With all the information that's coming out, you have to wonder how well GSK was keeping an eye on things. The Chinese market is so huge, and so potentially lucrative, that some companies might just be tempted to say "OK, you folks are the XYZ Corporation's Chinese branch. Do what you need to do to stay competitive over here, but don't tell us about it, OK?" But I don't think that's something you can get away with, not forever. It catches up with you, especially when dealing with a government like China's that has no problem pitching high and inside when they feel the need.

GSK is a big company, full of people who understand how the world works. The Times document shows that they were aware of what was going on, and what could happen. And here it is, happening. Anyone on the inside who was sounding the alarm probably isn't getting much satisfaction about saying "I told you so", though.

Comments (31) + TrackBacks (0) | Category: Clinical Trials | The Dark Side

July 22, 2013

Update on the Bribery Scandal

Posted by Derek

This Reuters story may be telling people everything they need to know in the first paragraph:

British drugmaker GlaxoSmithKline said on Monday some of its executives in China appeared to have broken the law in a bribery scandal, as it promised changes in its business model that would lower the cost of medicine in the country.

GSK is the latest in a string of multinationals to be targeted by Chinese authorities over alleged corruption, price-fixing and quality controls.

Chinese police visited the Shanghai office of another British drugmaker, AstraZeneca, a company spokeswoman said on Monday. They arrived on Friday and took away a sales representative for questioning, she said.

Will AstraZeneca cut its prices for China as well? In case some readers may think I'm drawing conclusions too quickly, well, the Reuters story draws them for you:

GSK's intention to cut the price of its medicines in China would be in line with how other foreign companies have responded to pressure from Beijing.

European food groups Nestle and Danone said they would cut infant milk formula prices in China after Beijing launched an inquiry into the industry.

"In China, when the government criticises people, they tend to bow down and apologise very quickly because they are scared of the authority of the central government to do tremendous harm to their business - whether it be for arresting executives very quickly or through auditing," said Shaun Rein, managing director of the Shanghai-based China Market Research Group.

Comments (11) + TrackBacks (0) | Category: Business and Markets

The NIH's Drug Repurposing Program Gets Going

Posted by Derek

Here's an update on the NIH's NCATS program to repurpose failed clinical candidates from the drug industry. I wrote about this effort here last year, and expressed some skepticism. It's not that I think that trying drugs (or near-drugs) for other purposes is a bad idea prima facie, because it isn't. I just wonder about the way the NIH is talking about this, versus its chances for success.

As was pointed out last time this topic came up, the number of failed clinical candidates involved in this effort is dwarfed by the number of approved compounds that could also be repurposed - and have, in fact, been looked at for years for just that purpose. The success rate is not zero, but it has not been a four-lane shortcut to the promised land, either. And the money involved here ($12.7 million split between nine grants) is, as that Nature piece correctly says, "not much". Especially when you're going after something like Alzheimer's:

Strittmatter’s team is one of nine that won funding last month from the NIH’s National Center for Advancing Translational Sciences (NCATS) in Bethesda, Maryland, to see whether abandoned drugs can be aimed at new targets. Strittmatter, a neurobiologist at Yale University in New Haven, Connecticut, hopes that a failed cancer drug called saracatinib can block an enzyme implicated in Alzheimer’s. . .

. . .Saracatinib inhibits the Src family kinases (SFKs), enzymes that are commonly activated in cancer cells, and was first developed by London-based pharmaceutical company AstraZeneca. But the drug proved only marginally effective against cancer, and the company abandoned it — after spending millions of dollars to develop it through early human trials that proved that it was safe. With that work already done, Strittmatter’s group will be able to move the drug quickly into testing in people with early-stage Alzheimer’s disease.

The team plans to begin a 24-person safety and dosing trial in August. If the results are good, NCATS will fund the effort for two more years, during which the scientists will launch a double-blind, randomized, placebo-controlled trial with 159 participants. Over a year, the team will measure declines in glucose metabolism — a marker for progression of Alzheimer’s disease — in key brain regions, hoping to find that they have slowed.

If you want some saracatinib, you can buy some, by the way (that's just one of the suppliers). And since AZ has already taken this through Phase I, the chances of it passing another Phase I are very good indeed. I will not be impressed by any press releases at that point. The next step, the Phase IIa with 159 people, is as far as this program is mandated to go. But how far is that? One year is not very long in a population of Alzheimer's patients, and 159 patients is not all that many in a disease that heterogeneous. And the whole trial is looking at a secondary marker (glucose metabolism) which (to the best of my knowledge) has not demonstrated any clinical utility as a measure of efficacy for the disease. From what I know about the field, getting someone at that point to put up the big money for larger trials will not be an easy sell.

I understand the impulse to go after Alzheimer's - who dares, wins, eh? But given the amount of money available here, I think the chances for success would be better against almost any other disease. It is very possible to take a promising-looking Alzheimer's candidate all the way through a multi-thousand-patient multiyear Phase III and still wipe out - ask Eli Lilly, among many others. You'd hope that at least a few of them are in areas where there's a shorter, more definitive clinical readout.

Here's the list, and here's the list of all the compounds that have been made available to the whole effort so far. Update: structures here. The press conference announcing the first nine awards is here. The NIH has not announced what the exact compounds are for all the grants, but I'm willing to piece it together myself. Here's what I have:

One of them is saracatinib again, this time for lymphangioleiomyomatosis. There's also an ER-beta agonist being looked at for schizophrenia, a J&J/Janssen nicotinic allosteric modulator for smoking cessation, and a Pfizer ghrelin antagonist for alcoholism (maybe from this series?). There's a Sanofi compound for Duchenne muscular dystrophy, which the NIH has studiously avoided naming, although it's tempting to speculate that it's riferminogene pecaplasmide, a gene-therapy vector for FGF1. But Genetic Engineering News says that there are only seven compounds, with a Sanofi one doubling up as well as the AZ kinase inhibitor, so maybe this one is the ACAT inhibitor below. Makes more sense than a small amount of money trying to advance a gene therapy approach, for sure.

There's an endothelin antagonist for peripheral artery disease. Another unnamed Sanofi compound is being studied for calcific aortic valve stenosis, and my guess is that it's canosimibe, an ACAT inhibitor, since that enzyme has recently been linked to stenosis and heart disease. Finally, there's a Pfizer glycine transport inhibitor being looked at for schizophrenia, which seems a bit odd, because I was under the impression that this compound had already failed in the clinic for that indication. They appear to have some other angle.

So there you have it. I look forward to seeing what comes of this effort, and also to hearing what the NIH will have to say at that point. We'll check in when the time comes!

Update: here's more from Collaborative Chemistry. And here's a paper they published on the problems of identifying compounds for initiatives like this:

In particular, it is notable that NCATS provides on its website [31] only the code number, selected international non-proprietary names (INN) and links to more information including mechanism of action, original development indication, route of administration and formulation availability. However, the molecular structures corresponding to the company code numbers were not included. Although we are highly supportive of the efforts of NCATS to promote drug repurposing in the context of facilitating and funding proposals, we find this omission difficult to understand for a number of reasons. . .

They're calling for the NIH (and the UK initiative in this area as well) to provide real structures and IDs for the compounds they're working with. It's hard to argue against it!

Comments (8) + TrackBacks (0) | Category: Academia (vs. Industry) | Clinical Trials | Drug Development

July 19, 2013

Salary Freeze at Lilly

Posted by Derek

We now return to our regularly scheduled program around here - or at least, Eli Lilly is now returning to theirs. The company announced that they're freezing salaries for most of the work force, in an attempt to save hundreds of millions of dollars in advance of their big patent expirations. Some bonuses will be reduced as well, they say, but that leaves a lot of room. Higher-ups don't look for increases in base pay as much as they look for bonuses, options, and restricted shares (although, to be fair, these are often awarded as a per cent of salary).

‘‘This action is necessary to withstand the impact of upcoming patent expirations and to support the launch of our large phase III pipeline,’’ Chief Executive Officer John Lechleiter, 59, said in a letter to employees today, a copy of which was obtained by Bloomberg. ‘‘The current situation requires us to take the appropriate action now to secure our company’s future. We can’t allow ourselves to let up and fail to make the tough choices.”

Lechleiter himself has not had a raise since 2010, it appears, although I'm not sure if his non-salary compensation follows the same trend. If anyone has the time to dig through the company's last few proxy statements, feel free, but actually figuring out what a chief executive is really paid is surprisingly difficult. (I remember an article a few years ago where several accountants and analysts were handed the same batch of SEC filings and all of them came out with different compensation numbers).

But there's no doubt that Lilly is in for it, something that has been clear for some time now. The company's attempts to shore up its clinical pipeline haven't gone well, and it looks like (more and more) they're putting a lot of their hopes on a success in Alzheimer's. If they see anything, that will definitely turn the whole situation around - between their diagnostic branch and a new therapeutic, they'll own the field, and a huge field it is. But the odds of this happening are quite low. The most likely outcome, it seems to me, is equivocal data that will be used to put pressure on the FDA, etc., to approve something, anything, for Alzheimer's.

It's worth remembering that it wasn't very long ago at all that the higher-ups at Lilly were telling everyone that all would be well, that they'd be cranking out two big new drugs a year by now. Hasn't happened. Since that 2010 article, they've had pretty much squat - well, Jentadueto, which is Boehringer Ingelheim's linagliptin, which Lilly is co-marketing, with metformin added. Earlier this year, they were talking up plans for five regulatory submissions in the near future, but that figure is off now that enzastaurin has already bombed in Phase III. Empagliflozin and ramucirumab are still very much alive, but will be entering crowded markets if they make it through. Dulaglutide is holding up well, though.

But will these be enough to keep Lilly from getting into trouble? That salary freeze is your answer: no, they will not. All the stops must be pulled out, and the ones after this will be even less enjoyable.

Comments (19) + TrackBacks (0) | Category: Business and Markets | Drug Development

Good Advice: Get Lost!

Email This Entry

Posted by Derek

I thought everyone could use something inspirational after the sorts of stories that have been in the news the last few days. Here's a piece at FierceBiotech on Regeneron, a company that's actually doing very well and expanding. And how have they done it?

Regeneron CEO Dr. Leonard "Len" Schleifer, who founded the company in 1988, says he takes pride in the fact that his team is known for doing "zero" acquisitions. All 11 drugs in the company's clinical-stage pipeline stem from in-house discoveries. He prefers a science-first approach to running a biotech company, hiring Yancopoulos to run R&D in 1989, and he endorsed a 2012 pay package for the chief scientist that was more than twice the size of his own compensation last year.

Scientists run Regeneron. Like Yancopoulos, Schleifer is an Ivy League academic scientist turned biotech executive. Regeneron gained early scientific credibility with a 1990 paper in the journal Science on cloning neurotrophin factor, a research area that was part of a partnership with industry giant Amgen. Schleifer has recruited three Nobel Prize-winning scientists to the board of directors, which is led by long-time company Chairman Dr. P. Roy Vagelos, who had a hand in discovering the first statin and delivering a breakthrough treatment for a parasitic cause of blindness to patients in Africa.

"I remember these people from Pfizer used to go around telling us, 'You know, blockbusters aren't discovered, they're made,' as though commercial people made the blockbuster," Schleifer said in an interview. "Well, get lost. Science, science, science--that's what this business is about."

I don't know about you, but that cheers me up. That kind of attitude always does!

Comments (10) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

July 18, 2013

BMS Moving Jobs to Florida

Email This Entry

Posted by Derek

The state of Florida is press-releasing that Bristol-Myers Squibb is opening a "capability center" in the Tampa area, which will bring 579 jobs. What they're not saying - but what I hear through the grapevine - is that some of these jobs were formerly somewhere else. I don't know how many of the total are actually new positions.

They seem mostly to be support staff (I have to say, I have no clear picture of what a capability center actually is), and the affected people were notified late this afternoon. At least it opens in January, when a move to Tampa will be somewhat more tolerable. . .

Comments (11) + TrackBacks (0) | Category: Business and Markets

The Junk DNA Wars Get Hotter

Email This Entry

Posted by Derek

Thanks to an alert reader, I was put on to this paper in PNAS. It's from a team at Washington U. in St. Louis, and my fellow Cardinals fans are definitely stirring things up in the debate over "junk DNA" function and the ENCODE results. (The most recent post here on the debate covered the "It's functional" point of view - for links to previous posts on some vigorous ENCODE-bashing publications, see here).

This new paper, blogged about here at Homologus and here by one of its authors, Mike White, is an attempt to run a null-hypothesis experiment on transcription factor function. There are a lot of transcription factor recognition sequences in the genome. They're short DNA sequences that serve as flags for the whole transcription machinery to land and start assembling at a particular spot. Transcription factors themselves are the proteins that do the primary recognition of these sequences, and that gives them plenty to do. With so many DNA motifs out there (and so many near-misses), some of their apparent targets are important and real and some of them may well be noise. TFs have their work cut out.

What this new paper did was look at a particular transcription factor, Crx. They took a set of 1,300 sequences that are (functionally) known to bind it - 865 of them with the canonical recognition motifs and 433 of them that are known to bind, but don't have the traditional motif. They compared that set to 3,000 control sequences, including 865 of them "specifically chosen to match the Crx motif content and chromosomal distribution" as compared to that first set. They also included a set of single-point mutations of the known binding sequences, along with sets of scrambled versions of both the known binding regions and the matched controls above, with dinucleotide ratios held constant - random but similar.
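To make that last control concrete - "dinucleotide ratios held constant" - here's a minimal Python sketch (toy sequences of my own, not anything from the paper) of the bookkeeping involved. A naive shuffle preserves overall base composition but not dinucleotide content, which is exactly what these scrambled controls are matched on:

```python
from collections import Counter

def dinucleotide_counts(seq: str) -> Counter:
    """Count overlapping dinucleotides (e.g. 'AC', 'CG') in a DNA sequence."""
    return Counter(seq[i:i + 2] for i in range(len(seq) - 1))

def same_dinucleotide_content(seq_a: str, seq_b: str) -> bool:
    """True if two sequences have identical dinucleotide composition."""
    return dinucleotide_counts(seq_a) == dinucleotide_counts(seq_b)

original = "AAAACCCC"        # hypothetical element, not from the study
naive_shuffle = "ACACACAC"   # same base composition, very different dinucleotide content

print(Counter(original) == Counter(naive_shuffle))         # True  - mononucleotide content matches
print(same_dinucleotide_content(original, naive_shuffle))  # False - dinucleotide ratios differ
```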

What they found, first, was that the known binding elements do indeed drive transcription, as advertised, while the controls don't. But the ENCODE camp has a broader definition of function than just this, and here's where the dinucleotides hit the fan. When they looked at gene repression activity, they found that the 865 binders and the 865 matched controls (with Crx recognition elements, but in unbound regions of the genome) both showed similar amounts of activity. As the paper says, "Overall, our results show that both bound and unbound Crx motifs, removed from their genomic context, can produce repression, whereas only bound regions can strongly activate".

So far, so good, and nothing that the ENCODE people might disagree with - I mean, there you are, unbound regions of the genome showing functional behavior and all. But the problem is, most of the scrambled versions of those sequences - effectively random DNA - also showed regulatory effects:

Our results demonstrate the importance of comparing the activity of candidate CREs (cis-regulatory elements - DBL) against distributions of control sequences, as well as the value of using multiple approaches to assess the function of CREs. Although scrambled DNA elements are unlikely to drive very strong levels of activation or repression, such sequences can produce distinct levels of enhancer activity within an intermediate range that overlaps with the activity of many functional sequences. Thus, function cannot be assessed solely by applying a threshold level of activity; additional approaches to characterize function are necessary, such as mutagenesis of TF binding sites.

In other words, to put it more bluntly than the paper does, one could generate ENCODE-like levels of functionality with nothing but random DNA. These results will not calm anyone down, but it's not time to calm down just yet. There are some important issues to be decided here - from theoretical biology all the way down to how many drug targets we can expect to have. I look forward to the responses to this work. Responses will most definitely be forthcoming.

Comments (12) + TrackBacks (0) | Category: Biological News

China's GlaxoSmithKline Crackdown

Email This Entry

Posted by Derek

Keeping up with the GlaxoSmithKline/China story has been hard - every day or two there's a new twist. But here's what's going on so far:

Four GSK executives have been arrested on charges of bribery. Hospitals, doctors, officials of all kinds - the accusations are that the GSK people jacked up prices and sales figures by greasing people everywhere they thought necessary. Reports have it that travel agencies (to inflate the costs of meetings and trips as a form of payment), high-priced consultation deals, and good ol' sexual favors were involved. In addition to the four executives who've been arrested, China has told GSK's financial director for that unit that he's not allowed to leave the country.

A mess indeed, and pretty much the last thing that GSK was in the market for, I'll bet. I am, sadly, not amazed at the idea of large organized bribery in the Chinese market. Nor, I'm sure, are the Chinese authorities. The country has a well-publicized problem with corruption, with high-level officials regularly being removed from their positions amid accusations of all sorts of malfeasance. Even if you mark some of that up to political maneuvering and score-settling (which I'm sure are factors, too), the country's current system of authoritarian capitalism is an invitation to such behavior on every level. Every country in the world has this sort of thing to some degree - who you know, who you're related to, who owes you favors, who you've paid off - but the combination of China's one-party system and its huge business boom of the last few decades makes it a particular problem there.

That combination also breeds conspiracy theories. You might wonder if GSK is in trouble because their behavior was particularly noticeable or on a large scale, or if there's some other reason that we're not seeing. It's impossible to say, and not very fruitful to speculate on, but it's not a line of thought that can be dismissed easily, either. Perhaps the idea was pour encourager les autres. This article is along those lines:

A Chinese bribery investigation into British drugmaker GlaxoSmithKline (GSK.L) has sent tremors through multinational pharmaceutical firms in China, prompting at least one to review how they do business in the country.

Experts said foreign companies across the spectrum were watching closely to see what happened to GSK and its four detained Chinese executives given bribery and business go hand-in-hand in the world's second biggest economy. . .

Pharmaceutical companies are at the mercy of Chinese regulators in getting products licensed for import or manufacture in China, or to get them listed on the national drug registry. They typically rely on hired distributors to get their drugs to market and into hospitals. . .

. . .According to sources with knowledge of the industry, China's sophisticated and thriving market for fake documents also allows local employees to provide forged paperwork to more senior or global managers.

Efforts made by drug firms at compliance training can even backfire, as some employees learn how to avoid detection.

That Reuters piece also mentions speculation that the Chinese government is leaning hard on drug companies for better pricing, as it faces mounting health care costs, and you can't rule that one out, either. That's the problem - you can't rule much of anything out at all.

Comments (23) + TrackBacks (0) | Category: Current Events | The Dark Side

July 17, 2013

The GSK Jackpot

Email This Entry

Posted by Derek

Well, this got my attention: according to the Sunday Times, GlaxoSmithKline is preparing to hand out hefty bonus payments to scientists if they have a compound approved for sale. Hefty, in this context, means up to several million dollars. The earlier (and much smaller) payouts for milestones along the way will disappear, apparently, to be replaced by this jackpot.

The article says that "The company will determine who is entitled to share in the payout by judging which staff were key to its discovery and development", and won't that be fun? In Germany, the law is that inventors on a corporate patent do get a share of the profits, which can be quite lucrative, but it means that there are some very pointed exchanges about just who gets to be an inventor. The prospect of million-dollar bonuses will be very welcome, but will not bring out the best in some people, either. (It's not clear to me, though, if these amounts are to be split up among people somehow, or if single individuals can possibly expect that much).

John LaMattina has some thoughts on this idea here. He's also wondering how to assign credit:

I am all for recognizing scientists in this way. After all, they must be successful in order for a company the size of GSK to have a sustaining pipeline. However, the drug R&D process is really a team effort and not driven by an individual. The inventor whose name is on the patent is generally the chemist or chemists who designed the molecule that had the necessary biological activity. Rarely, however, are chemists the major contributor to the program’s success. Oftentimes, it is a biologist who conceives the essence of the program by the scientific insight he or she might have. The discovery of Pfizer’s Xeljanz is such a case. There have been major classes of drugs that have been saved by toxicologists who ran insightful animal experiments to explain aberrant events in rats as was done by Merck with both the statins and proton-pump inhibitors – two of the biggest selling classes of drugs of all time.

On occasion, the key person in a drug program is the process chemist who has designed a synthesis of the drug that is amenable to the large scales of material needed to conduct clinical trials. Clinical trial design can also be crucial, particularly when studying a drug with a totally new mechanism of action. A faulty trial design can kill any program. Even a nurse involved in the testing of a drug can make the key discovery, as happened in Pfizer’s phase 1 program with Viagra, where the nurse monitoring the patients noticed that the drug was enhancing blood flow to an organ other than the heart. To paraphrase Hilary Clinton, it takes a village to discover and develop a drug.

You could end up with a situation where the battery is arguing with the drive shaft, both of whom are shouting at the fuel pump and refusing to speak to the tires, all because there was a reward for whichever one of them was the key to getting the car to go down the driveway.

There's another problem - getting a compound to go all the way to the market involves a lot of luck as well. No one likes to talk about that very much - it's in everyone's interest to show how it was really due to their hard work and intelligence - but equal amounts of hard work and brainpower go into projects that just don't make it. Those are necessary, but not sufficient. So if GSK is trying to put this up as an incentive, it's only partially coupled to factors that the people it's aimed at can influence.

And as LaMattina points out, the time delay in getting drugs approved is another factor. If I discover a great new compound today, I'll be lucky to see it on the market by, say, 2024 or so. I have no objection to someone paying me a million dollars on that date, but it won't have much to do with what I've been up to in the interim. And in many cases, some of the people you'd want to reward aren't even with the company by the time the drug makes it through, anyway. So while I cannot object to drug companies wanting to hand out big money to their scientists, I'm not sure what it will accomplish.

Comments (71) + TrackBacks (0) | Category: Business and Markets | Drug Development | Who Discovers and Why

More on the NIH and Its Indian Clinical Trials

Email This Entry

Posted by Derek

Steve Usdin of BioCentury sends along word that they've managed to get a tiny bit more out of the NIH on the Indian clinical trials business. As opposed to the happy-talk that they gave FiercePharma the day before, the agency was now willing to confirm that enrollment has been stopped in some Indian trials, while others have been postponed. No numbers, though. They said that they hoped that "future changes will enable studies to resume", which is a bit of a telling statement in itself, suggesting that the current situation will not allow that at all.

The most detailed account of the situation remains the report in the Live Mint newspaper, an Indian source affiliated with the Wall Street Journal. That article mentions the "unstable regulatory environment" as the big factor, but according to BioCentury, this might be the biggest problem:

The new regulations require clinical trial sponsors to provide compensation to patients who suffer injury or death during or as a result of the trial, including as a result of the "failure of investigational product to provide intended therapeutic effect"

Oh, boy. If companies find themselves having to compensate everyone - in unspecified amounts - who joins a Phase II trial where the compound turns out not to work, that'll mount up fast. We have high failure rates around here, as everyone knows, and everyone involved (investors, patients, clinical trial participants) should be aware of that going in and act accordingly. I believe that both companies and granting agencies feel as if they're paying quite enough money already for the way that many drugs don't work in the clinic.

Comments (3) + TrackBacks (0) | Category: Clinical Trials | Regulatory Affairs

MedChemica: When One Compound Collection Isn't Enough

Email This Entry

Posted by Derek

According to SciBx, here's another crack at computational solutions for drug discovery: MedChemica, a venture started by several ex-AstraZeneca scientists. They're going to be working with data from both AZ and Roche, using what sounds like a "matched molecular pair" approach:

Although other algorithms try to relate structure to biological function, most of the analyses look at modifications across a wide array of diverse structures. MedChemica's approach is to look at modifications in a set of similar structures and see how minor differences affect the compounds' biological activity.
Al Dossetter, managing director of MedChemica, said the advantage of the company's platform is the WizePairZ algorithm that looks at pairs of fragments that are similar in structure but differ by a chemical group, such as a change from chlorine to fluorine or the addition of a methyl group.
This platform, he told SciBX, captures the chemical environment of the fragment change. For example, it incorporates the fact that the effect of changing chlorine to fluorine on a molecule will depend on the surrounding structure. The result is a rule that is context dependent.
The MedChemica approach applies to small molecules and uses only partial chemical structures, thus keeping compound identities out of the picture.
Because the platform does not reveal compound identities, AstraZeneca and Roche can share knowledge without disclosing proprietary information.

The belief is that neither company's database on its own gives quite enough statistical power for this approach to work, so they're trying it on the pooled data:

smaller databases only allow researchers to extract one to five matched pairs, which have a low fidelity of prediction. Ten matched pairs are sufficient to draw a prediction, but reliability increases significantly with 20 matched pairs.
The MedChemica database contains 1.2 million datapoints, each of which represents a single molecule fragment in a single assay. It includes 31 different assays, although more are likely to be added in the future, and not all molecules have been tested in all assays.
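For what it's worth, here's a minimal sketch of the matched-pair bookkeeping as I understand it from the article - made-up cores, R-groups, and numbers, and certainly not MedChemica's actual WizePairZ code - just to show how a "rule" emerges only once enough pairs of the same transformation have been seen:

```python
from collections import defaultdict
from statistics import median

# Hypothetical measurements: (shared core, R-group) -> assay value (e.g. pIC50).
measurements = {
    ("core-A", "Cl"): 5.1, ("core-A", "F"): 5.6,
    ("core-B", "Cl"): 6.0, ("core-B", "F"): 6.3,
    ("core-C", "Cl"): 4.2, ("core-C", "F"): 4.9,
}

def transform_deltas(data, r_from, r_to):
    """For every core measured with both R-groups, record the change on swapping them."""
    by_core = defaultdict(dict)
    for (core, r_group), value in data.items():
        by_core[core][r_group] = value
    return [v[r_to] - v[r_from] for v in by_core.values() if r_from in v and r_to in v]

deltas = transform_deltas(measurements, "Cl", "F")
MIN_PAIRS = 10  # per the article: ~10 pairs to draw a prediction, ~20 for real reliability
if len(deltas) >= MIN_PAIRS:
    print(f"Cl->F rule: median shift {median(deltas):+.2f} (n={len(deltas)})")
else:
    print(f"Only {len(deltas)} matched pairs - not enough to call this a rule yet")
```

With only three made-up pairs, the sketch refuses to declare a rule - which is the whole argument for pooling the two companies' data in the first place.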

The article says that AZ and Roche are in discussions with other companies about joining the collaboration. Everyone who joins will get a copy of the pooled database, in addition to being able to share in whatever insights MedChemica comes up with. A limitation is mentioned as well: this is all in vitro data, and its translation to animals or to the clinic provides room to argue.

That's a real concern, I'd say, although I can certainly see why they're doing things the way that they are. It's probably hard enough coming up with in vitro assays across the two companies that are run under similar enough conditions to be usefully paired. In vivo protocols are more varied still, and are notoriously tricky to compare across projects even inside the same company. Just off the top of my head, you have the dosing method (i.v., p.o., etc.), the level of compound given, the vehicle and formulation (a vast source of variability all in itself), the species and strain of animal, the presence of any underlying disease model (versus control animals), what time of day they were dosed and whether they were fed or fasted, whether they were male or female, how old the animals were, and so on and so on. And these factors would be needed just to compare things like PK data, blood levels and so on. If you're talking about toxicology or other effects, there's yet another list of stuff to consider. So yes, the earlier assays will be enough to handle for now.

But will they be enough to provide useful information? Here's where the arguing starts. Limitations of working with only in vitro data aside, you could also say that any trends that are subtle enough to need multi-company-sized pools of data might be too subtle to affect drug discovery very much. The counterargument to that is that some of these rules might still be quite real, but lost in the wilds of chemical diversity space due to lack of effective comparisons. (And the counterargument to that is that if you don't have very many examples, how are you so sure that it's a rule?) I'm not sure which side of that one I come down on - "skeptical but willing to listen to data" probably describes me here - but this is the key question that MedChemica will presumably answer, one way or another.

Even so, that in vitro focus is going to be a long-term concern. One of the founders is quoted in the article as saying that the goal is to learn how to predict which compounds shouldn't be made. Fine, but "shouldn't have been made" is a characteristic that's often assigned only after a compound has been dosed in vivo. In the nastier cases, the ones you want to avoid the most, it's only realized after a compound has been in hundreds or thousands of humans in the clinic. The kinds of rules that MedChemica will come up with won't have any bearing on efficacy failures (nor are they meant to), but efficacy failures - failures of biological understanding - are depressingly common. Perhaps they've got a better chance at cutting down the number of "unexplained tox" failures, but that's still a very tall order as well as a very worthy goal.

Falling short of that, I worry, will mean that the MedChemica approach might end up - even if it works - only modestly optimizing the shortest and cheapest part of the whole drug discovery process, preclinical med-chem. I sympathize - most of my own big ideas, when I get them, bear only on that part of the business, too. But is it the part that needs to be fixed the most? The hope is that there's a connection, but it takes quite a while to prove if one exists.

Comments (8) + TrackBacks (0) | Category: In Silico

July 16, 2013

Touching Up the Spectra

Email This Entry

Posted by Derek

Organic chemists have been taking NMR spectra for quite a while now. Routine use came on in the 1960s, and higher-field instruments went from exotic big-ticket items in the 1970s to ordinary equipment in the 1980s. But NMR can tell you more about your sample than you wanted to know (good analytical techniques are annoying that way). So what to do when you have those little peaks showing up where no peaks should be?

The correct answer is "Live with 'em or clean up your sample", but wouldn't it be so much easier and faster to just clean up the spectrum? After all, that's all that most people are ever going to see - right? This little line of thought has occurred to countless chemists over the years. Back In The Day, the technology needed to remove solvent peaks, evidence of isomers, and other pesky impurities was little more than a bottle of white-out and a pen (to redraw the lovely flat baseline once the extra peaks were daubed away). Making a photocopy of the altered spectrum gave you publication-ready purity in one easy step.

NMR spectra are probably the most-doctored of the bunch, but LC/MS and HPLC traces are very capable of showing you peaks you didn't want to see, either. These days there are all sorts of digital means to accomplish this deception, although I've no doubt that the white-out bottle is still deployed. In case anyone had any doubt about that, last month Amos Smith, well-known synthetic organic chemist and editor of Organic Letters, had this to say in a special editorial comment in the journal:

Recently, with the addition of a Data Analyst to our staff, Organic Letters has begun checking the submitted Supporting Information more closely. As a result of this increased scrutiny, we have discovered several instances where reported spectra had been edited to remove evidence of impurities.

Such acts of data manipulation are unacceptable. Even if the experimental yields and conclusions of a study are not affected, ANY manipulation of research data casts doubts on the overall integrity and validity of the work reported.

That it does. He went on to serve notice on authors that the journal will be checking, and will be enforcing and penalizing. And you can tell that Smith and the Org Lett staff have followed up on some of these already, because they've already had a chance to hear the default excuse:

In some of the cases that we have investigated further, the Corresponding Author asserted that a student had edited the spectra without the Corresponding Author’s knowledge. This is not an acceptable excuse! The Corresponding Author (who is typically also the research supervisor of the work performed) is ultimately responsible for warranting the integrity of the content of the submitted manuscript. . .

As the editorial goes on to say, and quite rightly, if a student did indeed alter the spectrum before showing it to the boss, it's very likely because the boss was running a group whose unspoken rule was that only perfection was acceptable. And that's an invitation to fraud, large and small. I'm glad to see statements like Smith's - the only ways to keep down this sort of data manipulation are to make the rewards for it small, increase the chances of it being found out, and make the consequences for it real.

As for those, the editorial speaks only of "significant penalties". But I have some ideas for those that might help people think twice about the data clean-up process. How about a special correction in the journal, showing the altered spectra, with red circles around the parts that had been flattened out? And a copy of the same to the relevant granting agencies and department heads? That might help get the message out, you think?

As an aside, I wanted to mention that I have seen someone stand right up and take responsibility for extra peaks in an NMR. Sort of. I saw a person once presenting what was supposed to be the final product's spectrum, only there were several other singlet peaks scattered around. "What are those?" came the inevitable question. "Water" was the answer. "Umm. . .how many water peaks, exactly?" "Oh, this one is water in solution. And this one is water complexed with the compound. And this one is water adsorbed to the inside of the NMR tube. And this one is water adsorbed to the outside of the. . ." It took a little while for order to be restored at that point. . .

Comments (38) + TrackBacks (0) | Category: Analytical Chemistry | The Dark Side | The Scientific Literature

July 15, 2013

Does That Answer Your Question? Not Quite, No.

Email This Entry

Posted by Derek

John Carroll at Fierce Biotech contacted the NIH, wanting to know more about a newspaper report that the NIH had terminated dozens of clinical trials in India as new regulations come in. What sort of answer did he get? Authentic frontier gibberish, for sure, the sort of thing you'd expect from a UN press release, a State Department briefing, or other such sources of natural gas. He's trying to pressure Francis Collins and the agency to come up with something substantial, which would be most anything compared to what he's gotten so far, and I'm glad to help out in any way I can.

Comments (8) + TrackBacks (0) | Category: Clinical Trials

The Red Queen's Race For Funding

Email This Entry

Posted by Derek

Using Michael Kinsley's definition of a gaffe, as when a politician or spokesman accidentally tells the truth, I think we can put this one firmly in that category. Novogen, a small Australian company that's been through some ups and downs (mostly downs), recently raised several million dollars to continue operations. But this line in the CEO's letter to shareholders is, well. . .

". . .we need the funds to create the news flow in order to achieve market appreciation so that we can progressively raise the funds to support ongoing news flow."

There must have been a better way to phrase that, something about "making the investment community aware of the company's potential", etc. But there you have it: they're raising money to create publicity so that they can raise more money for more publicity. Their recent failure with phenoxodiol was not completely unexpected, given that the compound had been in the clinic before without success.

Thanks to this writer at the Motley Fool for picking up on this.

Comments (6) + TrackBacks (0) | Category: Business and Markets

An Update on the Wisconsin Lab Theft Case

Email This Entry

Posted by Derek

Back in April I mentioned this story, about a researcher at the Medical College of Wisconsin who'd been charged with economic espionage. The accusation was that Hua-jun Zhao had stolen an investigational oncology compound from the lab of Prof. Marshall Anderson, apparently to set up a research program with it back in China.

Now comes word that Zhao has pleaded guilty to a lesser charge, breaking into a university computer. But this is still not a good outcome for him:

Zhao, 41, initially pleaded not guilty to tampering with a private computer and lying to a federal agent. An additional charge of economic espionage was dropped but prosecutors maintained the right to renew it with a future indictment.

Instead, as part of a plea deal, Zhao pleaded guilty to a reduced charge of accessing a computer without authorization, thereby obtaining information worth at least $5,000. He faces up to five years in prison and a $250,000 fine and will be sentenced next month.

If this is what things got bargained down to, the situation must have been grim. The medical school says that it has no objection to the plea. The missing vials of compound have not been recovered, but it doesn't look like Zhao (update: fixed the name!) is going to be working on the stuff any time soon.

Comments (4) + TrackBacks (0) | Category: The Dark Side

Aveo: The Rubble Continues to Bounce Around

Email This Entry

Posted by Derek

Aveo Pharmaceuticals has, you'd think, enough problems already. Their failed attempt to get tivozanib through the FDA crashed their stock and led to a large number of their staff being laid off. But now they disclose that they've received a subpoena from the SEC. Given the sort of thing that went on during the run-up to the approval hearing, this shouldn't be too much of a surprise:

After finding out that the FDA had suggested a year ago that Aveo's late-stage work should be supplemented with a new trial, surprised analysts began to demand some answers of their own. Those questions grew more pointed as class action lawsuits began to pile up after the stock had been eviscerated in the subsequent rout.

Weeks after the review process, Aveo responded by laying off 140 staffers, 62% of its staff, including the commercial team that had been brought in after CEO Tuan Ha-Ngoc began to confidently assure investors that the company could explain the OS data and win approval.

And as that FierceBiotech piece notes, it took the company a week to disclose the SEC's inquiry. At this point, why anyone is holding this stock is something of a mystery - you have to be a very risk-tolerant investor with some long-shot money to spare. Or (more likely) you're stuck with a lot of sunk-cost Aveo stock, bought in a more hopeful era, and although you've probably written it off by now, you figure what the heck, you might be able to get a little of the money back if you let it ride. Good luck with that.

Comments (2) + TrackBacks (0) | Category: Business and Markets

July 12, 2013

Lilly Goes All In on Solanezumab

Email This Entry

Posted by Derek

So Eli Lilly is going to double down on solanezumab, their antibody treatment for Alzheimer's that did not show impressive results in earlier trials. The company is going into an even bigger Phase III, with a more carefully selected patient population, in hopes of showing a benefit.

Yikes. On one level, I sort of admire this - it's a decision that takes a lot of nerve to make, will cost a huge amount of money, and is attacking one of the most intractable clinical problems we have. But on that ever-present other hand, what are the odds? If I'm an investor in Lilly stock, am I happy about this move, or not? The only thing I can see to calm the nerves this time, if there's such a thing in an Alzheimer's clinical trial, is better diagnostic criteria from the start:

Eric Siemers, senior medical director of Lilly's Alzheimer's program, said an estimated 25 percent of patients in the two earlier Expedition trials might not actually have had beta-amyloid deposits or Alzheimer's disease, so solanezumab could not have helped them.

He said many patients were enrolled in those trials on the basis of symptoms, without undergoing sophisticated diagnostic procedures now available to confirm the presence of beta-amyloid deposits.

In the new study, Lilly's recently approved radioactive imaging agent, called Amyvid, will be used to screen patients, Siemers said. Biochemical measures in the spinal fluid can also help assess whether patients have Alzheimer's, he said.

I'll say this for them: this trial, you'd think, is going to be the answer. It's going to cost hundreds of millions by the time it's all over, but by gosh, Lilly (and the rest of us) should know if solanezumab is of any use in Alzheimer's. Unless, of course, another batch of equivocal coulda-maybe-worked numbers comes out of this one, too. But that's also an answer. Under these conditions, "sort of worked" is going to mean "did not work". I don't see what else is left.

And given Lilly's patent positions and sales forecasts, it looks like they are, to a significant extent, betting the company on this. Drama, this industry could do with less drama. But we seem to be stuck with it.

Comments (31) + TrackBacks (0) | Category: Alzheimer's Disease | Clinical Trials

Clinical Trial Fraud Uncovered

Email This Entry

Posted by Derek

Hmmm. This article from Bloomberg says that the BMS/Pfizer anticoagulant Eliquis (apixaban), a Factor Xa inhibitor approved late last year by the FDA, was delayed for months because of misconduct in its Chinese clinical trials. (Its clinical trials had not been without incident even before this). Documents posted by the FDA have the details. Says the article:

In the Eliquis trial, Bristol-Myers hired Pharmaceutical Product Development Inc., a closely held, Wilmington, North Carolina, company known as PPD, to help oversee it.

The Eliquis trial was questioned on two issues, according to the FDA documents first cited by the journal Pharmaceutical Approvals Monthly. One was the improper manipulation of records at a study site for 35 patients at the Shanghai 9th Peoples Hospital in China. The second involved the high percentage of the 9,000 patients who were supposed to be getting Eliquis, and instead were either given the wrong drug, or the wrong dose.

There was a broad list of issues at the Shanghai hospital, according to FDA documents. They included failure to report four potential adverse medical events, late reports on three others and three medical outcomes that weren’t included in the data. Additionally, some patient names and dates were wrong, and Chinese and English records didn’t match in some cases. The FDA also reported that some patient records disappeared just ahead of a site visit by agency inspectors.

I wonder if the Bloomberg reporter was tipped off to this himself, because you have to dig into this PDF (which is one of many) to find the goods (do a search for the words "Shanghai" and "fraud"). Here are some quotes from the document itself:

Although BMS contracted with a Contract Research Organization, PPD, to provide site monitoring for ARISTOTLE, PPD did not have a presence in the People’s Republic of China when the trial was initiated in PRC; BMS initially used its own employees for monitoring. One BMS employee along with at least one other individual altered subject records after being notified the site would be inspected by OSI. OSI inspected eight clinical sites worldwide after becoming aware of this action. Additionally, after errors in dispensing study drug became an issue, BMS and PPD, a CRO involved in conducting and monitoring ARISTOTLE, were inspected specifically to review the issue of trial oversight and monitoring. OSI concludes that the study appears to have been conducted and monitored adequately. They did recommend that data from sites in China be excluded because the employee who committed the GCP violation in China was involved in the conduct of the trial at all Chinese sites.

This came to light because a contract worker went to his or her supervisors with a problem: this person had been asked to change data and documentation on a hard drive before an FDA inspection, and the supervisor making the request (who was later fired) had worked at 18 other trial locations in China. This led the FDA, naturally enough, to say that it was worried about what else might have been going on, and to complain about broad problems with oversight.

As shown in the FDA documents, the agency went on to run the data with that specific site excluded, and then with all the other Chinese site data excluded, and the analysis still came out in favor of apixaban (although not as robustly in some categories). So the approval of the drug seems to have been the right call; the conclusions of the trial don't seem to have been switched by the misconduct. Still, you don't want this sort of thing.

Elliot Levy of BMS is quoted several times in the Bloomberg article, generally playing down the problems mentioned by the FDA: "not exceptional", "appropriately documented and reported", and so on. But if everything was normal, why did things stall for nine months? The lead outside investigator on the trial, Christopher Granger of Duke, has a different perspective:

“There is a greater likelihood of some of this impropriety in certain regions,” Granger said in a telephone interview. “We’ve had experiences in India and China where we’ve had more than we would have expected.”

Unfortunately, I think that's a fair assessment. But it doesn't have to be that way. There are vast numbers of ethical, hard-working scientists and staff in both India and China; it's not like these entire countries are full of cheaters and corner-cutters. But international companies go to these countries to get work done for lower cost, so the incentives are there to keep those costs down by whatever means come to hand. There are underhanded shortcutters in every country in the world, but some business environments give these people more scope to exercise their talents.

I'm actually glad when this sort of thing comes to light. Although it's not like Bristol-Myers Squibb or Pfizer were rushing to do that, were they? I think that the only way to clean up this kind of behavior is to make it public, so that it has as many consequences as possible. If a country's reputation for doing fast, cost-effective clinical trials is compromised by a reputation for regulatory trouble and unreliable data, well, that's another set of incentives at work, but this time in the right direction. Throwing a towel over these incidents does no one any good in the long run. Make it public; make it sting.

Comments (10) + TrackBacks (0) | Category: Cardiovascular Disease | Clinical Trials | The Dark Side

Comment Troubles

Email This Entry

Posted by Derek

As many of you have discovered, something is currently hosed up with the commenting function on the site. It's a glitch, and behind-the-scenes work is underway. So if you get some sort of error message when you try to leave a comment, hold those thoughts and try again later - thanks!

Comments (2) + TrackBacks (0) | Category: Blog Housekeeping

July 11, 2013

More From Warp Drive Bio (And Less From Aileron?)

Email This Entry

Posted by Derek

There hasn't been much news about Warp Drive Bio since their founding. And that founding was a bit of an unusual event all by itself, since the company was born with a Sanofi deal already in place (and an agreement for them to buy the company if targets were met). But now things seem to be happening. Greg Verdine, a founder, has announced that he's taking a three-year leave of absence from Harvard to become the company's CEO. They've also brought in some other big names, such as Julian Adams (Millennium/Infinity) to be on the board of directors.

The company has a very interesting research program: they're hoping to coax out cryptic natural products from bacteria and the like, molecules that aren't being found in regular screening efforts because the genes used in their biosynthetic pathways are rarely activated. Warp Drive's plan is to sequence heaps of prokaryotes, identify the biosynthesis genes, and activate them to produce rare and unusual natural products as drug candidates. (I'm reminded of this recent work on forcing fungi to produce odd products by messing with their epigenetic enzymes, although I'm not sure if that's what Warp Drive has in mind specifically). And the first part of that plan is what the company has been occupying itself with over the last few months:

“These are probably really just better molecules, and always were better,” he says. “The problems were that they took too long to discover and that one was often rediscovering the same things over and over again.”

Verdine explains the reason this happened is because many of the novel genes in the bacteria aren’t expressed, and remain “dark,” or turned off, and thus can’t be seen. By sequencing the microbes’ genetic material, however, Warp Drive can illuminate them, and find the roadmap needed to make a number of drugs.

“They’re there, hiding in plain sight,” Verdine says.

Over the past year and a half, Warp Drive has sequenced the entire genomes of more than 50,000 bacteria, most of which come from dirt. That library represents the largest collection of such data in existence, according to Verdine.

The entire genomes of 50,000 bacteria? I can well believe that this is the record. That is a lot of data, even considering that bacterial genomes don't run that large. My guess is that the rate-limiting step in all this is going to be a haystack problem. There are just so many things that one could potentially work on - how do you sort them out? Masses of funky natural product pathways (whose workings may not be transparent), producing masses of funky natural products, of unknown function: there's a lot to keep people busy here. But if there really is a dark-matter universe of natural products, it really could be worth exploring - the usual one certainly has been a good thing over the years, although (as noted above) it's been suffering from diminishing returns for a while.

But there's something else I wondered about when Warp Drive was founded: Verdine himself has been involved in founding several other companies, and there's another one going right here in Cambridge: Aileron Therapeutics, the flagship of the stapled-peptide business (an interesting and sometimes controversial field). How are they doing? They recently got their first compound through Phase I, after raising more money for that effort last year.

The thing is, I've heard from more than one person recently that all isn't well over there, that they're cutting back research. I don't know if that's the circle-the-wagons phase that many small companies go through when they're trying to take their first compound through the clinic, or a sign of something deeper. Anyone with knowledge, feel free to add it in the comments section. . .

Update: Prof. Verdine emails me to note that he officially parted ways with Aileron back in 2010, to avoid conflicts of interest with his other venture capital work. His lab has continued to investigate stapled peptides on its own, though.

Comments (14) + TrackBacks (0) | Category: Biological News | Business and Markets | Natural Products

The Last PPAR Compound?

Email This Entry

Posted by Derek

Roche has announced that they're halting trials of aleglitazar, a long-running investigational drug in their diabetes portfolio. I'm noting this because I think that this might be the absolute last of the PPAR ligands to fail in the clinic. And boy howdy, has it been a long list. Merck, Lilly, Kyorin, Bristol-Myers Squibb, Novo Nordisk, GlaxoSmithKline, and Bayer are just the companies I know right off the top of my head that have had clinical failures in this area, and I'm sure that there are plenty more. Some of those companies (GSK, for sure) have had multiple clinical candidates go down, so the damage is even worse than it appears.

That's why I nominated this class in the Clinical Futility Awards earlier this summer. Three PPAR compounds actually made it to market, but the record has not been happy there, either. Troglitazone was pulled early, Avandia (rosiglitazone) has (after a strong start) been famously troubled, and Actos (pioglitazone) has its problems, too.

The thing is, no one knows about all this, unless they follow biomedical research in some detail. Uncounted billions have been washed through the grates; years and years of work involving thousands of people has come to nothing. The opportunity costs, in retrospect, are staggering. So much time, effort, and money could have been spent on something else, but there was no way to know that without spending it all. There never really is.

I return to this theme around here every so often, because I think it's an important one. The general public hears about the drugs that we get approved, because we make a big deal out of them. But the failures, for the most part, are no louder than the leaves falling from the trees. They pass unnoticed. Most people never knew about them at all, and the people who did know would rather move on to something else. But if you don't realize how many of these failures there are, and how much they cost, you can get a completely mistaken view of drug discovery. Sure, look at the fruit on the branches, on those rare occasions when some appears. But spare a glance at that expensive layer of leaves on the ground.

Comments (31) + TrackBacks (0) | Category: Clinical Trials | Diabetes and Obesity | Drug Development

July 10, 2013

Good Fortune Smiles On Someone's Onyx Option Trades

Email This Entry

Posted by Derek

This will be interesting to follow: the recent offer by Amgen for Onyx Pharmaceuticals (which happened while I was traveling, and which I didn't get a chance to write about) has had a financial sidelight: someone got very, very lucky with some nearly-expired call options.

On Thursday and Friday, traders enacted a few small-sized trades on Onyx call options - which give the buyer the right to buy the stock at a given price by a certain date - hoping to catch a share price rally by mid-July.

Onyx averaged 715 calls per day over the past 22 trading days, according to options analytics firm Trade Alert.

Call volume was notable on Friday, when a total of 1,561 calls changed hands, more than double the normal level, against 488 puts. On Thursday, traders exchanged 1,374 calls and 664 puts on Onyx, data from Trade Alert showed.

"This flow looks a bit suspect to me. It's possible the buyers knew of the deal and put that knowledge to work," said Trade Alert President Henry Schwartz. "The odds of turning a few hundred thousand dollars into millions overnight are very small, yet that's exactly what happened in Onyx options last week."

Specifically, some of these were July call options, worth a dollar or two on the Friday before the announcement, which started off Monday at about $30. That's just the sort of thing that speculative options traders dream about, and it's also just the sort of trade that the SEC likes to investigate. I wish good luck to whoever it is that has to explain this activity; they're going to need it in order to persuade anyone that good luck was all that was involved.
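Just to put rough numbers on "a few hundred thousand dollars into millions" - these are illustrative figures based on the price ranges in the article, not the actual trades:

```python
# Illustrative arithmetic only; position size and prices are assumptions,
# loosely based on the ranges quoted above, not the real orders.
contracts = 1000            # hypothetical number of call contracts bought
shares_per_contract = 100   # standard US equity option multiplier
premium_paid = 1.50         # "a dollar or two" per share before the announcement
value_after = 30.00         # roughly where the calls opened on Monday

cost = contracts * shares_per_contract * premium_paid
value = contracts * shares_per_contract * value_after
print(f"Outlay ${cost:,.0f} -> value ${value:,.0f} ({value / cost:.0f}x overnight)")
# Outlay $150,000 -> value $3,000,000 (20x overnight)
```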

Comments (8) + TrackBacks (0) | Category: Business and Markets

On the Priority Breakthrough Accelerated Fast Track

Email This Entry

Posted by Derek

So the FDA has the good ol' drug approval process. And then there's Priority Review, and Fast Track, and Accelerated Approval, and now the Breakthrough designation. What, exactly, do all these things mean, and how are they different?

Matthew Herper has a good overview here at Forbes. In short, Priority Review is supposed to take a few months off the usual review period. Fast Track is for drugs that target some unmet medical need, and speeds up their review as well. Accelerated Approval is the process for approving important unmet-need drugs based on preliminary data, with a review once larger studies are completed. (That, for example, is what Avastin went through for its onetime breast cancer indication). Note that these categories aren't mutually exclusive; a drug can have more than one at a time.

And the new "Breakthrough" category is similar to Fast Track and Accelerated Approval, in that it's supposed to be for drugs whose early clinical evidence shows that they might be a real advance over existing treatments. According to Herper's interview with Richard Pazdur, of the FDA's Office of Oncology and Hematology Products, the big difference in this latest category is early and broad cooperation with the agency.

. . .(the designation) catalyzes communication between a company and the FDA. Traditionally, drug reviews take place through a series of scheduled meetings. A breakthrough designation means there are more times a company can expect to be able to pick up the phone and get an answer. The designation can lead to cleared calendars, and it also means that the senior management of the FDA division becomes involved, not just the reviewers who serve on the FDA’ s front lines.

“The true measure of success is going to be how active we are in working with the companies,” Pazdur says. “If someone gets a breakthrough therapy and it’s business as usual, than the breakthrough therapy is meaningless.”

And this communication isn't just about trial design, although that's one of the biggest issues. Manufacturing, naming, and all the other regulatory issues are addressed as well. Pazdur says that the biggest reason that some companies haven't achieved breakthrough status for their new compounds, though, is that they come in when it's still too early to make a decision. It sounds like this is something companies have to time pretty well: too early, and you'll be told to come back later. But if you wait too long, the designation might not be as much help as it could be.

Here's an overview from the FDA, and here's a more detailed guidance document. The FDA says that just over half the drugs approved in 2012 took advantage of one or more of these categories, and it looks like the trend will continue. If the majority of things get Special Expedited Priority Shipping, what does that say about the Regular Shipping option? An outside observer (or investor) should keep this in mind - you probably shouldn't celebrate when a drug gets a designation like this so much as worry about when it doesn't.

Comments (6) + TrackBacks (0) | Category: Regulatory Affairs

July 9, 2013

A Microwave Argument

Email This Entry

Posted by Derek

Since I was talking about microwave heating of reactions here the other week, I wanted to mention this correspondence in Angewandte Chemie. Oliver Kappe is the recognized expert on microwave heating in chemistry, and recently published an overview of the topic. One of the examples he cited was a report of some Friedel-Crafts reactions that were accelerated by microwave heating. The authors did not take this very well, and fired back with a correspondence in Ang. Chem., clearly feeling that their work had been mistreated in Kappe's article. They never claimed to be seeing some sort of nonthermal microwave effect, they say, and resent the implication that they were.

Kappe himself has replied now, and seems to feel that Dudley et al. are trying to have things both ways:

In their Correspondence, Dudley and co-workers have suggested that we attempt to impugn their credibility by associating their rationalization for the observed effect with the concept of nonthermal microwave effects. This is clearly not the case. On the contrary, we specifically state in the Essay that "The proposed effect perhaps can best be classified as a specific microwave effect involving selective heating of a strongly microwave-absorbing species in a homogeneous reaction mixture ('molecular radiators')." As we have already pointed out, our Essay was mainly intended to provide an overview on the current state-of-affairs regarding microwave chemistry and microwave effects research. Not surprisingly, therefore, out of the incriminated 22 uses of the word "nonthermal" in our Essay, this word was used only twice in reference to the Dudley chemistry, and in both of these instances in conjunction with the term "specific microwave effect".

The confusion perhaps arises since in the original publication by Dudley, the authors provide no clear-cut classification (thermal, specific, nonthermal) of the microwave effect that they have observed. In fact, they do not unequivocally state that they believe the effect is connected to a purely thermal phenomenon, but rather invoke arguments about molecular collisions and the pre-exponential factor A in the Arrhenius equation (for example: “Chemical reactions arise from specific molecular collisions, which typically increase as a function of temperature but also result from incident microwave irradiation”). Statements like this that appear to separate a thermal phenomenon from a microwave irradiation event clearly invite speculation by non-experts about the involvement of microwave effects that are not purely thermal in nature. This is very apparent by the news feature in Chemistry World following the publication of the Dudley article entitled: “Magical microwave effects revived. Microwaves can accelerate reactions without heating”

Based on his own group's study of the reaction, Kappe believes that what's going on is local superheating of the solvent, not something more involved and/or mysterious. His reply is a lengthy, detailed schooling in microwave techniques - why the stated power output of a microwave reactor is largely meaningless, the importance (and difficulty) of accurate temperature measurements, and the number of variables that can influence solvent superheating. The dispute here seems to be largely a result of the original paper trying to sound coy about microwave effects - if they'd played things down a bit, I don't think this whole affair would have blown up.
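As a rough sense of how far ordinary Arrhenius behavior can go toward explaining an apparent "microwave acceleration", here's a quick sketch. The activation energy and temperatures are assumptions of mine for illustration, not values from either paper:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_ratio(ea_j_per_mol: float, t1_k: float, t2_k: float) -> float:
    """Rate-constant ratio k(T2)/k(T1) for a fixed pre-exponential factor A."""
    return math.exp(ea_j_per_mol / R * (1.0 / t1_k - 1.0 / t2_k))

# Assume Ea = 80 kJ/mol, bulk solvent at 100 C, locally superheated pockets at 120 C.
print(round(arrhenius_ratio(80_000, 373.15, 393.15), 1))  # ~3.7x faster - no nonthermal magic required
```

On those assumptions, a twenty-degree pocket of superheated solvent is enough to make a reaction look several-fold faster than the bulk temperature would predict.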

But outside of this work, on the general topic of nonthermal microwave reaction effects, I side with Kappe (and, apparently, so do Dudley and co-authors). I haven't seen any convincing evidence for microwave enhancement of reactions that doesn't come down to heating (steep thermal gradients, localized superheating, and so on).

Comments (13) + TrackBacks (0) | Category: Chemical News

Non-Reproducible Science: A Survey

Posted by Derek

The topic of scientific reproducibility has come up around here before, as it deserves to. The literature is not always reliable, and it's unreliable for a lot of different reasons. Here's a new paper in PLOS ONE surveying academic scientists for their own experiences:

To examine a microcosm of the academic experience with data reproducibility, we surveyed the faculty and trainees at MD Anderson Cancer Center using an anonymous computerized questionnaire; we sought to ascertain the frequency and potential causes of non-reproducible data. We found that ~50% of respondents had experienced at least one episode of the inability to reproduce published data; many who pursued this issue with the original authors were never able to identify the reason for the lack of reproducibility; some were even met with a less than “collegial” interaction.

Yeah, I'll bet they were. It turns out that about half the authors who had been contacted about problems with a published paper responded "negatively or indifferently", according to the survey respondents. As to how these things make it into the literature in the first place, I don't think that anyone will be surprised by this part:

Our survey also provides insight regarding the pressure to publish in order to maintain a current position or to promote ones scientific career. Almost one third of all trainees felt pressure to prove a mentor's hypothesis even when data did not support it. This is an unfortunate dilemma, as not proving a hypothesis could be misinterpreted by the mentor as not knowing how to perform scientific experiments. Furthermore, many of these trainees are visiting scientists from outside the US who rely on their trainee positions to maintain visa status that affect themselves and their families in our country.

And some of these visiting scientists, it should be noted, come from backgrounds in authority-centered and/or shame-based cultures, where going to the boss with the news that his or her big idea didn't work is not a very appealing option. It never is, for anyone, but it's especially hard if you feel that you're contradicting the head of the lab and bringing shame on yourself in the process.

As for what to do about all this, the various calls for more details in papers and better reviewing are hard to complain about. But while I think that those would help, I don't see them completely solving the problem. This is a problem of human nature; as long as science is done by humans, we're going to have sloppy work all the way up to outright cheating. What we need to do is find ways to make it harder to cheat, and less rewarding - that will at least slow it down a bit.

There will always be car thieves, too, but we don't have to make it easy for them, either. Some of our publishing practices, though, are the equivalent of habitually walking away with the doors unlocked and the keys in the ignition. Rewarding academic scientists (at all levels) so directly for the number of their publications is one of the big ones. Letting big exciting results through without good statistical foundations is another.

In this vein, a reader sends along the news that the Reproducibility Initiative is now offering grants for attempts to check big results in the literature. That's the way to get it done, and I'm glad to see some money forthcoming. This effort is concentrating on experimental psychology, which is appropriate, given that the field has had some recent scandals (follow-up here) and is now in a big dispute over the reproducibility of even its honestly-meant data. They need all the help they can get over there - but I'll be glad to see some of this done over here in the biomedical field, too.

Comments (16) + TrackBacks (0) | Category: The Dark Side | The Scientific Literature

July 8, 2013

19 Years to a Retraction. Bonus Midnight Camera Footage Included.

Posted by Derek

This Retraction Watch post details the longest correction/retraction saga I've heard of yet. A 1994 paper in Nature has finally been pulled back, after years and years of wrangling. And by "wrangling" I mean multiple attempted repeats, blinded samples, fraught exchanges over scientific ethics with one of the most high-profile professors in the Czech Republic, and hidden-camera footage from the lab freezer. Yep, it got to that point - midnight break-ins to alter the stored samples. Read the post for more; it's really something.

Comments (4) + TrackBacks (0) | Category: The Dark Side | The Scientific Literature

The Norbornyl Cation Structure (Really)

Posted by Derek

Here it is 2013, and the last shot has just now been fired in the norbornyl cation debate. I'm too young to have lived through that one, although it was still echoing around as I learned chemistry. But now we have a low-temperature crystal structure of the ion itself, and you know what? It's nonclassical. Winstein was right, Olah and Schleyer were right, and H. C. Brown was wrong.

Everyone's been pretty sure of that for a long time now, of course. But that article from Chemistry World has a great quote from J. D. Roberts, who is now an amazing 95 years old and goes back a long, long way in this debate. He's very happy to see this new structure, but says that it still wouldn't have fazed H. C. Brown: "Herb would be Herb no matter what happened", he says, and from everything I've heard about him, that seems accurate.

Comments (28) + TrackBacks (0) | Category: Chemical News

Suing Your Grad School, And Your Professor

Posted by Derek

As anyone who's negotiated with them knows, Harvard plays hardball when it comes to patent rights. But so do the university's students, apparently. C&E News has a report on Mark Charest, a former graduate student in the Myers lab, who is suing the university over patent royalties.

Myers, Charest, and others reported a new synthetic route into the tetracycline antibiotics, which led to a new company (Tetraphase) that is now developing these compounds in the clinic. The dispute is over how the royalties are divided up: Charest claims in his legal complaint that the university forced him in 2006 to take a lower share than he considered his due, and that it then reduced his share still further in 2009.

Note that all of these disputes are over the scraps: Harvard is taking 65% of the royalties right off the top, and no one's going to be reducing that. And I'm not sure how far Charest is going to get with this lawsuit: the article says that an independent panel was called on at one point to review his contributions, so whether he liked the terms he was given or not, they've been scrutinized and he is presumably on record as having agreed to them.

It looks like he's going to claim that this agreement was made under duress and/or false pretenses, though. ChemBark has more details (link via Paul), including statements from Charest's complaint that he felt threatened both by Prof. Myers and by Harvard's technology transfer office, along with allegations of fraud (Halvorsen, below, is with Harvard's Office of Technology Development):

74. Dr. Halvorsen threatened that he would award all the inventors an equal 20% share, but that he would allocate 50% of the Inventor Royalties to a wholly separate, undisclosed patent application on which Dr. Charest was not an inventor (the “undisclosed patent application”).
75. Dr. Charest understood Dr. Halvorsen to be threatening him; he wrote to Dr. Halvorsen that “[i]n your previous email you issued the written warning that my portion of the inventor allocation would be reduced if I proceed forward.”
76. Dr. Halvorsen used this separate, undisclosed patent application to force Dr. Charest to take OTD’s offer.
77. The undisclosed patent application, however, was, on information and belief, filed after financial terms were agreed to between Harvard and Tetraphase and added to the license between Harvard and Tetraphase just prior to finalization of their license agreement.
78. Dr. Charest only later learned that this separate, undisclosed patent application was only a ruse to force Dr. Charest to sign OTD’s offer.

No such patent application ever published, the document says. Much of the complaint also focuses on Harvard's decision to give 50% of the inventor royalties to Myers, dividing the rest among the students and/or postdocs on the patent, and claims that this is a violation of the university's stated policies. So there's no way this doesn't get ugly - it's gotten ugly already. My guess is that Harvard will do whatever it can to get the suit thrown out (naturally), but that if they're unsuccessful, there will be some sort of out-of-court settlement. I really don't see them signing up to have all this dragged through the courts (and the public record) - even if the university did nothing wrong (and I'm agnostic about that), there's still no upside for them.
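
To make the arithmetic concrete - these are hypothetical numbers of my own, not figures from the filing - here's how splits like the ones described above play out on an assumed million dollars of licensing revenue:

    # Hypothetical royalty split, for illustration only. The $1M gross figure and
    # the number of co-inventors are assumptions; the 65% and 50% fractions are
    # the ones mentioned in the post above.
    gross = 1_000_000.0

    university_cut = 0.65 * gross          # the university's share off the top
    inventor_pool = gross - university_cut

    pi_share = 0.50 * inventor_pool        # half of the inventor royalties to the PI
    remaining = inventor_pool - pi_share

    n_other_inventors = 4                  # assumed number of students/postdocs
    per_coinventor = remaining / n_other_inventors

    print(f"University:       ${university_cut:,.0f}")   # $650,000
    print(f"PI:               ${pi_share:,.0f}")         # $175,000
    print(f"Each co-inventor: ${per_coinventor:,.0f}")   # $43,750, under 5% of gross

Whatever the real numbers are, the co-inventor shares are small slices of small slices, which is part of why these fights get so bitter.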

So for anyone out there whose grad school experience was a bit on the rough side, take heart: at least it didn't end up in court. Updates on this case as it slowly drags itself through the legal system.

Comments (36) + TrackBacks (0) | Category: Graduate School | Patents and IP

July 4, 2013

A Fourth of July Recipe: Pork Tenderloin and Sour Onion Salsa

Posted by Derek

In keeping with tradition around here, I wanted to put up a recipe for the holiday. It's pretty hot out there for standing next to a grill (but I'll be doing that later today anyway!). Here's one that gets made at Pipeline Headquarters fairly often. It's not something that can be whipped up quickly (it needs some marinating time), but maybe it will do for the coming weekend. The pork tenderloin recipe is similar to many others floating around, and can be added to and adapted as needed. The onion salsa is adapted from a Steve Raichlen recipe in The Barbecue Bible, a book I've had a very high success rate with.

Grilled Pork Tenderloin
Here are the quantities for marinating one pork tenderloin (400 to 500 grams / circa one pound). You can adjust to suit your needs, of course:

40g salt (see Note 1)
50g brown sugar (or three tablespoons)
15g Dijon mustard (one tablespoon)
Two cups water
One pod of star anise
1g whole black peppercorns (1/2 teaspoon)
Two bay leaves

Dissolve the salt, sugar, and mustard in the water. Crush the anise pod and peppercorns (mortar and pestle if you have one, whacked in foil or plastic wrap if not) and add them and the bay leaves to the mix. Soak the pork tenderloin in this brine (plastic bag or covered bowl) for several hours in a refrigerator - overnight is good.

Remove the pork from the treatment vat and grill it over high heat for ten minutes, turning it to brown the surface. Then reduce the heat, or move it to a less directly heated part of your grill, and cook it there until its juices run clear. (See Note 2). Let the meat rest off the grill for a few minutes, then slice and serve.

Note 1: this is in the two-to-three-tablespoon range for something like Morton kosher salt, but salts vary tremendously in density. Notoriously, the two leading brands of kosher salt in the US, Morton and Diamond Crystal, differ by nearly a factor of two, a conversion that has led many a cook to grief. Table salt is denser still - see the link.
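
If you'd rather not weigh the salt, here's a rough mass-to-volume conversion. The gram-per-tablespoon figures are approximate and vary by brand and batch, so treat them as ballpark values only:

    # Approximate salt densities in grams per tablespoon (ballpark figures;
    # check your own brand, since these vary).
    approx_g_per_tbsp = {
        "table salt": 18.0,
        "Morton kosher": 15.0,
        "Diamond Crystal kosher": 8.0,
    }

    salt_needed_g = 40.0  # the amount called for above
    for name, grams_per_tbsp in approx_g_per_tbsp.items():
        print(f"{name}: about {salt_needed_g / grams_per_tbsp:.1f} tablespoons")
    # table salt ~2.2 tbsp, Morton kosher ~2.7 tbsp, Diamond Crystal ~5 tbsp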

Note 2: These are not very scientific directions, but grilling is not a very scientific form of cooking - everyone's heat source is different, and things are hard to quantify. The brining treatment will generally keep the meat from drying out too quickly - a good thing, since unbrined pork tenderloin can get that way quite easily. But you'll need to use your own judgment here. If you're not grilling this, you can brown the outside in a hot oiled pan and then bake it, or carefully broil it, with frequent turning, to achieve a similar effect.


Sour Onion Salsa

1 large red onion
125 mL fresh lime juice
125 mL orange juice
12 g salt (two teaspoons of table salt)

Peel the onion and cut a slice off the stem end. Place that flat side down and cut the onion into six or eight wedges. Grill these on both sides (the root end will hold them together as they cook) until they're somewhat charred. Remove them from the grill, let them cool a bit, then trim off the root ends and add the salt and citrus juices. Let these marinate at least a half hour at room temperature, stirring every so often.

Comments (10) + TrackBacks (0) | Category: Blog Housekeeping

July 2, 2013

A Quick Traffic Update

Posted by Derek

I'm traveling today, and have no time for a blog post, but I wanted to thank everyone for another record month of traffic around here. Nearly 820,000 page views beats even the previous outlier, back in April when I was linked to by xkcd.com. And I owe that, of course, to those Eight Toxic Food Additives, a post that got picked up all over the place (and is still echoing around). But I also owe that figure, naturally, to everyone who's taken the time to stop by - so thanks!

Comments (6) + TrackBacks (0) | Category: Blog Housekeeping

July 1, 2013

Travel and Upcoming Posts

Posted by Derek

I'm traveling for the next couple of days, and then we have the July 4th holiday coming up. So blogging will be a bit irregular around here this week. By popular demand, I am planning a couple of large posts that follow up on the "Eight Toxic Foods" craziness, though. One will look at why some of the allowed US ingredients are banned in some other countries, and the other will look at the reverse: ingredients and additives that are banned in the US, but allowed abroad. Those will start showing up next week.

Comments (9) + TrackBacks (0) | Category: Blog Housekeeping

Corroboration for ENCODE?

Posted by Derek

Another cannon has gone off in the noncoding-genome wars. Here's a paper in PLOS Genetics detailing what the authors call long intergenic noncoding RNAs (lincRNAs):

Known protein coding gene exons compose less than 3% of the human genome. The remaining 97% is largely uncharted territory, with only a small fraction characterized. The recent observation of transcription in this intergenic territory has stimulated debate about the extent of intergenic transcription and whether these intergenic RNAs are functional. Here we directly observed with a large set of RNA-seq data covering a wide array of human tissue types that the majority of the genome is indeed transcribed, corroborating recent observations by the ENCODE project. Furthermore, using de novo transcriptome assembly of this RNA-seq data, we found that intergenic regions encode far more long intergenic noncoding RNAs (lincRNAs) than previously described, helping to resolve the discrepancy between the vast amount of observed intergenic transcription and the limited number of previously known lincRNAs. In total, we identified tens of thousands of putative lincRNAs expressed at a minimum of one copy per cell, significantly expanding upon prior lincRNA annotation sets. These lincRNAs are specifically regulated and conserved rather than being the product of transcriptional noise. In addition, lincRNAs are strongly enriched for trait-associated SNPs suggesting a new mechanism by which intergenic trait-associated regions may function.

Emphasis added, because that's been one of the key points in this debate. The authors regard the ENCODE data as "firmly establishing the reality of pervasive transcription", so you know where their sympathies lie. And their results are offered up as a strong corroboration of the ENCODE work, with lincRNAs serving as the, well, missing link.

One thing I notice is that these new data strongly suggest that many of these RNAs are expressed at very low levels. The authors set cutoffs for "fragments per kilobase of transcript per million mapped reads" (FPKM), discarding everything that came out as less than 1 (roughly one copy per cell). The set of RNAs with FPKM>1 is over 50,000. If you ratchet the cutoff up, though, things drop off steeply: FPKM>10 knocks that down to between three and four thousand, and FPKM>30 gives you 925 lincRNAs. My guess is that those are where the next phase of this debate will take place, since those expression levels get you away from the noise. But the problem is that the authors are explicitly making the case for thousands upon thousands of lincRNAs being important, and that interpretation won't be satisfied by everyone agreeing on a few hundred new transcripts. These things also seem to be very tissue-specific, so it looks like the arguing is going to get very granular indeed.
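
For a sense of what those cutoffs mean mechanically, here's a minimal sketch of FPKM-threshold filtering. The file and column names are hypothetical placeholders, not the authors' actual pipeline:

    import csv

    def count_above(path, threshold):
        """Count transcripts whose FPKM value meets or exceeds the threshold."""
        n = 0
        with open(path) as fh:
            for row in csv.DictReader(fh, delimiter="\t"):
                if float(row["FPKM"]) >= threshold:
                    n += 1
        return n

    # FPKM of about 1 corresponds to roughly one transcript copy per cell here.
    for cutoff in (1, 10, 30):
        print(cutoff, count_above("lincRNA_expression.tsv", cutoff))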

Here's a quote from the paper that sums up the two worldviews that are now fighting it out:

Almost half of all trait-associated SNPs (TASs) identified in genome-wide association studies are located in intergenic sequence while only a small portion are in protein coding gene exons. This curious observation points to an abundance of functional elements in intergenic sequence.

Or that curious observation could be telling you that there's something wrong with your genome-wide association studies. I lean towards that view, but the battles aren't over yet.

Comments (25) + TrackBacks (0) | Category: Biological News