Corante

About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com Twitter: Dereklowe

Chemistry and Drug Data:
DrugBank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigillata
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

April 15, 2014

Novartis Gets Out of RNAi

Posted by Derek

Yesterday brought the sudden news that Novartis is pulling out of RNA interference research. The company is citing difficulties in development, and also the strategic point that not as many disease areas seem to be open to the use of the technique as they'd like. John LaMattina has more here - it's looking more and more like this may be a good field for smaller companies like Alnylam, but not something that's going to feed the beast at a company the size of Novartis (or Merck, who exited a while ago). If there's some sort of technology breakthrough, that could change - but you get the impression that Novartis was hoping for one before now.

Comments (6) + TrackBacks (0) | Category: Business and Markets

Total Synthesis in Flow

Posted by Derek

[Scheme: part of the flow route to one of the intermediates]

Steve Ley and co-workers have published what is surely the most ambitious flow-chemistry-based total synthesis ever attempted. Natural products spirodienal A and spirangien A methyl ester are prepared with almost every step (and purification) being done in flow mode.

The scheme shown (for one of the intermediates) will give you the idea. There are some batch-mode portage steps, such as 15 to 16, mainly because of extended reaction times that weren't adaptable to flow conditions. But the ones that could be adapted were, and it seems to have helped out with the supply of intermediates (which is always a tedious job in total synthesis, because you're either bored, when things are working like they always do, or pissed off, because something's gone wrong). Aldehyde 11 could be produced from 10 at a rate of 12 mmol/hour, for example.

The later steps of the synthesis tend much more towards batch mode, as you might imagine, since they're pickier (and not run as many times, either, I'll bet, compared to the number of times the earlier sequences were). Flow is perfect for those "Make me a pile of this stuff" situations. Overall, this is impressive work, and demonstrates still more chemistry that can be adapted usefully to flow conditions. Given my attitude towards total synthesis, I don't care much about spirodienal A, but I certainly do care about new ways to make new compounds more easily, and that's what this paper is really aiming for.

Comments (9) + TrackBacks (0) | Category: Chemical News

Sweet Reason Lands On Its Face

Posted by Derek

This study has implications for many fields of science whose practitioners keep running into rumor and conspiracy theories. The authors tried several different means to increase the uptake of the MMR vaccine (information about the lack of connection with autism, information about the severity of the diseases being prevented, case histories of children who'd had them, and so on), and compared them to see if anything helped with parents who were skeptical of having their children vaccinated.

You can probably guess: none of these helped at all. In fact, several of the interventions appeared to make things even worse, reinforcing beliefs in the dangers of vaccination. There's a general principle at work here, which I've heard stated as "You can't use reason to talk someone out of a position that they didn't arrive at by reason". It's the wrong tool for the job, like using a screwdriver to pull nails. I'd also note that people who are suspicious of vaccines are also likely to be alert to signs that someone is trying to convince them otherwise, and will react accordingly. They know that their position is a minority one - that's part of the attraction, in many cases.

"Here, read this pamphlet from the CDC" is a strategy with no hope whatsoever of working. The case-history approach was probably a better idea, but just the fact that it's coming from some official medical source is enough, in these cases, to discredit it completely. That's what they want you to think. In the context of this blog, I run into this sort of thinking most often in the form of "Big Pharma doesn't want to cure anything", or even "Big Pharma knows how to cure cancer, but doesn't want to tell anyone because it would hurt their profits". The only way I've ever made any headway with that one (and it hasn't been very often) is when I've had a chance to go one-on-one with a believer. Looking someone in the eye and asking them if they really are accusing me of watching some of my family members die from diabetes, cancer, and heart disease while I was hiding the cures and collecting my paycheck is an uncomfortable conversation, but I've had it a few times. The only counterattack has been that no, they're not saying that I personally have these things in my desk drawer, it's the higher-ups, you know, them. "So how have I been working on these diseases for 25 years without rediscovering any of these cures?" I ask, and that generally winds things up. But I like to think (or to kid myself) that I've planted a slight seed of doubt.

You need as much conviction in your voice as the quacks have, though, and that's not easy, because they have a lot. Science has the evidence on its side, naturally, and that's a lot, but conspiracy theorists and their friends have something to believe in, and that's a very strong part of human nature indeed. It is not satisfied by contemplating charts or tables; it does not find fulfillment in double-blinded trials. It provides a ward against fear, the comfort of knowing secrets that others don't, and a fellowship of like-minded believers. In many cases, when you're trying to persuade someone out of these views, you're not just trying to argue a specific point - you're trying to talk them out of an entire worldview. CDC pamphlets don't stand a chance.

Comments (35) + TrackBacks (0) | Category: Snake Oil

April 14, 2014

More on the Science Chemogenomic Signatures Paper

Posted by Derek

[Figure: cyclohexadienone / phenol equilibrium]
This will be a long one. I'm going to take another look at the Science paper that stirred up so much comment here on Friday. In that post, my first objection (but certainly not my only one) was the chemical structures shown in the paper's Figure 2. A number of them are basically impossible, and I just could not imagine how this got through any sort of refereeing process. There is, for example, a cyclohexadien-one structure, shown at left, and that one just doesn't exist as such - it's phenol, and those equilibrium arrows, though very imbalanced, are still not drawn to scale.
[Figure: the same substructures drawn with dashed bonds and R groups]
Well, that problem is solved by those structures being intended as fragments, substructures of other molecules. But I'm still positive that no organic chemist was involved in putting that figure together, or in reviewing it, because the reason that I was confused (and many other chemists were as well) is that no one who knows organic chemistry draws substructures like this. What you want to do is put dashed bonds in there, or R groups, as shown. That does two things: it shows that you're talking about a whole class of compounds, not just the structure shown, and it also shows where things are substituted. Now, on that cyclohexadienone, there's not much doubt where it's substituted, once you realize that someone actually intended it to be a fragment. It can't exist unless that carbon is tied up, either with two R groups (as shown), or with an exo-alkene, in which case you have a class of compounds called quinone methides. We'll return to those in a bit, but first, another word about substructures and R groups.
[Figure: THF drawn with a floating R group]
Figure 2 also has many structures in it where the fragment structure, as drawn, is a perfectly reasonable molecule (unlike the example above). Tetrahydrofuran and imidazole appear, and there's certainly nothing wrong with either of those. But if you're going to refer to those as common fragments, leading to common effects, you have to specify where they're substituted, because that can make a world of difference. If you still want to say that they can be substituted at different points, then you can draw a THF, for example, with a "floating" R group as shown at left. That's OK, and anyone who knows organic chemistry will understand what you mean by it. If you just draw THF, though, then an organic chemist will understand that to mean just plain old THF, and thus the misunderstanding.

If the problems with this paper ended at the level of structure drawing, which many people will no doubt see as just a minor aesthetic point, then I'd be apologizing right now. (Update: although it is irritating. On Twitter, I just saw that someone spotted "dihydrophyranone" on this figure, which someone presumably figured was close enough to "dihydropyranone", and anyway, it's just chemistry.) But the problems don't end there. It struck me when I first saw this work that sloppiness in the organic chemistry might be symptomatic of deeper trouble, and I think that's the case. The problems just keep on coming. Let's start with those THF and imidazole rings. They're in Figure 2 because they're supposed to be substructures that lead to some consistent pathway activity in the paper's huge (and impressive) yeast screening effort. But what we're talking about is a pharmacophore, to use a term from medicinal chemistry, and just "imidazole" by itself is too small a structure, from a library of 3200 compounds, to be a likely pharmacophore. Particularly when you're not even specifying where it's substituted and how. There are all kinds of imidazoles out there, and they do all kinds of things.
[Figure: the four imidazoles with the ergosterol depletion / membrane signature]
So just how many imidazoles are in the library, and how many caused this particular signature? I think I've found them all. Shown at left are the four imidazoles (and there are only four) that exhibit the activity shown in Figure 2 (ergosterol depletion / effects on membrane). Note that all four of them are known antifungals - which makes sense, given that the compounds were chosen for their ability to inhibit the growth of yeast, and topical antifungals will indeed do that for you. And that phenotype is exactly what you'd expect from miconazole, et al., because that's their known mechanism of action: they mess up the synthesis of ergosterol, which is an essential part of the fungal cell membrane. It would be quite worrisome if these compounds didn't show up under that heading. (Note that miconazole is on the list twice).
[Figure: nine other imidazoles in the screen with different signatures]
But note that there are nine other imidazoles that don't have that same response signature at all - and I didn't even count the benzimidazoles, and there are many, although from that structure in Figure 2, who's to say that they shouldn't be included? What I'm saying here is that imidazole by itself is not enough. A majority of the imidazoles in this screen actually don't get binned this way. You shouldn't look at a compound's structure, see that it has an imidazole, and then decide by looking at Figure 2 that it's therefore probably going to deplete ergosterol and lead to membrane effects. (Keep in mind that those membrane effects probably aren't going to show up in mammalian cells, anyway, since we don't use ergosterol that way).
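
That kind of bookkeeping is exactly what a substructure search is for, by the way. Here's a minimal sketch of the exercise with RDKit (my choice of tool, not the paper's) - the SMILES strings and signature labels are illustrative stand-ins, not entries from the actual supplementary tables:

    from collections import Counter
    from rdkit import Chem

    # Aromatic imidazole query. Note that it also hits N-substituted and
    # protonated (imidazolium) rings, which is part of the problem below.
    imidazole = Chem.MolFromSmarts("c1cncn1")

    # (SMILES, response signature) pairs - illustrative stand-ins only
    library = [
        ("Cc1ccc(Cn2ccnc2)cc1", "ergosterol depletion / effects on membrane"),
        ("c1ccc(-c2cnc[nH]2)cc1", "iron homeostasis"),
        ("CCCCCCCCn1cc[n+](C)c1", "mitochondrial distress"),
    ]

    tally = Counter(
        signature
        for smiles, signature in library
        if Chem.MolFromSmiles(smiles).HasSubstructMatch(imidazole)
    )
    print(tally)  # signature counts among imidazole-containing compounds

That's the check that Figure 2 invites but doesn't survive: pull every compound matching the substructure, and see how the signatures actually distribute.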

There are other imidazole-containing antifungals on the list that are not marked down for "ergosterol depletion / effects on membrane". Ketoconazole is SGTC_217 and 1066, and one of those runs gets this designation, while the other one gets signature 118. Both bifonazole and sertaconazole also inhibit the production of ergosterol - although, to be fair, bifonazole does it by a different mechanism. It gets annotated as Response Signature 19, one of the minor ones, while sertaconazole gets marked down for "plasma membrane distress". That's OK, though, because it's known to have a direct effect on fungal membranes separate from its ergosterol-depleting one, so it's believable that it ends up in a different category. But there are plenty of other antifungals on this list, some containing imidazoles and some containing triazoles, whose mechanism of action is also known to be ergosterol depletion. Fluconazole, for example, is SGTC_227, 1787 and 1788, and that's how it works. But its signature is listed as "Iron homeostasis" once and "azole and statin" twice. Itraconazole is SGTC_1076, and it's also annotated as Response Signature 19. Voriconazole is SGTC_1084, and it's down as "azole and statin". Climbazole is SGTC_2777, and it's marked as "iron homeostasis" as well. This scattering of known drugs between different categories is possibly an indicator of this screen's ability to differentiate them - or possibly an indicator of its inherent limitations.

Now we get to another big problem, the imidazolium at the bottom of Figure 2. It is, as I said on Friday, completely nuts to assign a protonated imidazole to a different category than a nonprotonated one. Note that several of the imidazole-containing compounds mentioned above are already protonated salts - they, in fact, fit the imidazolium structure drawn, rather than the imidazole one that they're assigned to. This mistake alone makes Figure 2 very problematic indeed. If the paper were, in fact, talking about protonated imidazoles (which, again, is what the authors have drawn), that would be enough to immediately call into question the whole thing, because a protonated imidazole is the same as a regular imidazole when you put it into a buffered system. In fact, if you go through the list, you find that what they're actually talking about are N-alkylimidazoliums, so the structure at the bottom of Figure 2 is wrong, and misleading. There are two compounds on the list with this signature, in case you were wondering, but the annotation may well be accurate, because some long-chain alkylimidazolium compounds (such as ionic liquid components) are already known to cause mitochondrial depolarization.
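
The buffer argument, by the way, is just arithmetic. A quick Henderson-Hasselbalch sketch (the pKa is the textbook value for imidazolium; the pH values are placeholders, since I don't know the actual assay medium):

    # The protonated fraction depends only on pH and pKa - not on whether
    # the compound went into the well as the free base or as a salt.
    def fraction_protonated(pH, pKa=7.0):  # pKa ~7 for imidazolium
        return 1.0 / (1.0 + 10 ** (pH - pKa))

    for pH in (5.5, 7.0, 7.4):
        print(f"pH {pH}: {100 * fraction_protonated(pH):.0f}% protonated")

Whatever ratio the medium dictates is the ratio you get, no matter which form was weighed out.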

But there are several other alkylimidazolium compounds in the set (which is a bit odd, since they're not exactly drug-like). And they're not assigned to the mitochondrial distress phenotype, as Figure 2 would have you think. SGTC_1247, 179, 193, 1991, 327, and 547 all have this moiety, and they scatter between several other categories. Once again, a majority of compounds with the Figure 2 substructure don't actually map to the phenotype shown (while plenty of other structural types do). What use, exactly, is Figure 2 supposed to be?

Let's turn to some other structures in it. The impossible/implausible ones, as mentioned above, turn out to be that way because they're supposed to have substituents on them. But look around - adamantane is on there. To put it as kindly as possible, adamantane itself is not much of a pharmacophore, having nothing going for it but an odd size and shape for grease. Tetrahydrofuran (THF) is on there, too, and similar objections apply. When attempts have been made to rank the sorts of functional groups that are likely to interact with protein binding sites, ethers always come out poorly. THF by itself is not some sort of key structural unit; highlighting it as one here is, for a medicinal chemist, distinctly weird.

What's also weird is that when I search for THF-containing compounds that show this activity signature, I can't find much. The only things with a THF ring in them seem to be SGTC_2563 (the complex natural product tomatine) and SGTC_3239, and neither one of them is marked with the signature shown. There are some embedded THF rings in the other structural fragments shown (the succinimide-derived Diels-Alder ones), but no other THFs - and as mentioned, it's truly unlikely that the ether is the key thing about these compounds, anyway. If anyone finds another THF compound annotated for tubulin folding, I'll correct this post immediately, but for now, I can't seem to track one down, even though Table S4 says that there are 65 of them. Again, what exactly is Figure 2 supposed to be telling anyone?

Now we come to some even larger concerns. The supplementary material for the paper says that 95% of the compounds on the list are "drug-like" and were filtered by the commercial suppliers to eliminate reactive compounds. They do caution that different people have different cutoffs for this sort of thing, and boy, do they ever. There are many, many compounds in this collection that I would not have bothered putting into a cell assay, for fear of hitting too many things and generating uninterpretable data. Quinone methides are a good example - as mentioned before, they're in this set. Rhodanines and similar scaffolds are well represented, and are well known to hit all over the place. Some of these things are tested at hundreds of micromolar.
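
Substructure alarms like these are trivial to check computationally nowadays. A minimal sketch, assuming RDKit and its built-in PAINS filter catalog (my choice of screen, not anything the authors used):

    from rdkit import Chem
    from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

    # Load the published PAINS substructure filters
    params = FilterCatalogParams()
    params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)
    catalog = FilterCatalog(params)

    mol = Chem.MolFromSmiles("O=C1NC(=S)SC1=Cc1ccccc1")  # 5-benzylidenerhodanine
    match = catalog.GetFirstMatch(mol)
    if match is not None:
        print("Flagged:", match.GetDescription())

A pass like that over all 3,250 compounds would show in a few seconds how much of the collection rides on scaffolds with known promiscuity problems.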

I recognize that one aim of a study like this is to stress the cells by any means necessary and see what happens, but even with that in mind, I think fewer nasty compounds could have been used, and might have given cleaner data. The curves seen in the supplementary data are often, well, ugly. See the comments section from the Friday post on that, but I would be wary of interpreting many of them myself.
[Figure: a selection of compounds from the set unlikely to be soluble at the tested concentrations]
There's another problem with these compounds, which might very well have also led to the nastiness of the assay curves. As mentioned on Friday, how can anyone expect many of these compounds to actually be soluble at the levels shown? I've shown a selection of them here; I could go on. I just don't see any way that these compounds can be realistically assayed at these levels. Visual inspection of the wells would surely show cloudy gunk all over the place. Again, how are such assays to be interpreted?
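
Even a crude solubility model would have raised flags here. A hedged sketch using the Delaney ESOL estimate (a published back-of-the-envelope model; the compound and test concentration below are stand-ins, not entries from the paper):

    from rdkit import Chem
    from rdkit.Chem import Descriptors

    def esol_log_s(mol):
        # Delaney (2004) ESOL estimate of log10 aqueous solubility (mol/L)
        logp = Descriptors.MolLogP(mol)
        mw = Descriptors.MolWt(mol)
        rb = Descriptors.NumRotatableBonds(mol)
        arom = sum(a.GetIsAromatic() for a in mol.GetAtoms()) / mol.GetNumHeavyAtoms()
        return 0.16 - 0.63 * logp - 0.0062 * mw + 0.066 * rb - 0.74 * arom

    mol = Chem.MolFromSmiles("c1ccc2cc3ccccc3cc2c1")  # anthracene, a greasy example
    est_uM = 10 ** esol_log_s(mol) * 1e6  # mol/L -> micromolar
    assay_uM = 400  # hypothetical test concentration
    if assay_uM > est_uM:
        print(f"{assay_uM} uM is far above the ~{est_uM:.0f} uM solubility estimate")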

And one final point, although it's a big one. Compound purity. Anyone who's ever ordered three thousand compounds from commercial and public collections will know, will be absolutely certain that they will not all be what they say on the label. There will be many colors and consistencies, and LC/MS checks will show many peaks for some of these. There's no way around it; that's how it is when you buy compounds. I can find no evidence in the paper or its supplementary files that any compound purity assays were undertaken at any point. This is not just bad procedure; this is something that would have caused me to reject the paper all by itself had I refereed it. This is yet another sign that no one who's used to dealing with medicinal chemistry worked on this project. No one with any experience would just bung in three thousand compounds like this and report the results as if they're all real. The hits in an assay like this, by the way, are likely to be enriched in crap, making this more of an issue than ever.

Damn it, I hate to be so hard on so many people who did so much work. But wasn't there a chemist anywhere in the room at any point?

Comments (33) + TrackBacks (0) | Category: Biological News | Chemical Biology | Chemical News | The Scientific Literature

April 11, 2014

Biology Maybe Right, Chemistry Ridiculously Wrong

Posted by Derek

Note: critique of this paper continues here, in another post.

A reader sent along a puzzled note about this paper that's out in Science. It's from a large multicenter team (at least nine departments across the US, Canada, and Europe), and it's an ambitious effort to profile 3250 small molecules in a broad chemogenomics screen in yeast. This set was selected from an earlier 50,000 compounds, since these reliably inhibited the growth of wild-type yeast. They're looking for what they call "chemogenomic fitness signatures", which are derived from screening first against 1100 heterozygous yeast strains, one gene deletion per, representing the yeast essential genome. Then there's a second round of screening against 4800 homozygous deletion strains of non-essential genes, to look for related pathways, compensation, and so on.

All in all, they identified 317 compounds that appear to perturb 121 genes, and many of these annotations are new. Overall, the responses tended to cluster in related groups, and the paper goes into detail about these signatures (and about the outliers, which are naturally interesting for their own reasons). Broad pathway effects like mitochondrial stress show up pretty clearly, for example. And unfortunately, that's all I'm going to say for now about the biology, because we need to talk about the chemistry in this paper. It isn't good.

[Figures: the enamine structure and phenol, as drawn in Figure 2]

As my correspondent (a chemist himself) mentions, a close look at Figure 2 of the paper raises some real questions. Take a look at that cyclohexadiene enamine - can that really be drawn correctly, or isn't it just N-phenylbenzylamine? The problem is, that compound (drawn correctly) shows up elsewhere in Figure 2, hitting a completely different pathway. These two tautomers are not going to have different biological effects, partly because the first one would exist for about two molecular vibrations before it converted to the second. But how could both of them appear on the same figure?

And look at what they're calling "cyclohexa-2,4-dien-1-one". No such compound exists as such in the real world - we call it phenol, and we draw it as an aromatic ring with an OH coming from it. Thiazolidinedione is listed as "thiazolidine-2,4-quinone". Both of these would lead to red "X" marks on an undergraduate exam paper. It is clear that no chemist, not even someone who's been through second-year organic class, was involved in this work (or at the very least, involved in the preparation of Figure 2). Why not? Who reviewed this, anyway?

There are some unusual features from a med-chem standpoint as well. Is THF really targeting tubulin folding? Does adamantane really target ubiquinone biosynthesis? Fine, these are the cellular effects that they noted, I guess. But the weirdest thing on Figure 2's annotations is that imidazole is shown as having one profile, while protonated imidazole is shown as a completely different one. How is this possible? How could anyone who knows any chemistry look at that and not raise an eyebrow? Isn't this assay run in some sort of buffered medium? Don't yeast cells have any buffering capacity of their own? Salts of basic amine drugs are dosed all the time, and they are not considered - ever - as having totally different cellular effects. What a world it would be if that were true! Seeing this sort of thing makes a person wonder about the rest of the paper.

[Figure: a nitro compound from the collection]

More subtle problems emerge when you go to the supplementary material and take a look at the list of compounds. It's a pretty mixed bag. The concentrations used for the assays vary widely - rapamycin gets run at 1 micromolar, while ketoconazole is nearly 1 millimolar. Can you even run that compound at that concentration? Or the nitro compound shown here at 967 micromolar? Is it really soluble in the yeast wells at such levels? There are plenty more that you can wonder about in the same way.

And I went searching for my old friends, the rhodanines, and there they were. Unfortunately, compound SGTC_2454 is 5-benzylidenerhodanine, whose activity is listed as "A dopamine receptor inhibitor" (!). But compound SGTC_1883 is also 5-benzylidenerhodanine, the same compound, run at similar concentration, but this time unannotated. The 5-thienylidenerhodanine is SGTC_30, but that one's listed as a phosphatase inhibitor. Neither of these attributions seems likely to me. There are other duplicates, but many of them are no doubt intentional (run by different parts of the team).

I hate to say this, but just a morning's look at this paper leaves me with little doubt that there are still more strange things buried in the chemistry side of this paper. But since I work for a living (dang it), I'm going to leave it right here, because what I've already noted is more than troubling enough. These mistakes are serious, and call the conclusions of the paper into question: if you can annotate imidazole and its protonated form into two different categories, or annotate two different tautomers (one of which doesn't really exist) into two different categories, what else is wrong, and how much are these annotations worth? And this isn't even the first time that Science has let something like this through. Back in 2010, they published a paper on the "Reactome" that had chemists around the world groaning. How many times does this lesson need to be learned, anyway?

Update: this situation brings up a number of larger issues, such as the divide between chemists and biologists (especially in academia?) and the place of organic chemistry in such high-profile publications (and the place of organic chemists as reviewers of it). I'll defer these to another post, but believe me, they're on my mind.

Update 2: Jake Yeston, deputy editor at Science, tells me that they're looking into this situation. More as I hear it.

Update 3: OK, if Figure 2 is just fragments, structural pieces that were common to compounds that had these signatures, then (1) these are still not acceptable structures, even as fragments, and (2), many of these don't make sense from a medicinal chemistry standpoint. It's bizarre to claim a tetrahydrofuran ring (for example) as the key driver for a class of compounds; the chance that this group is making an actual, persistent interaction with some protein site (or family of sites) is remote indeed. The imidazole/protonated imidazole pair is a good example of this: why on Earth would you pick these two groups to illustrate some chemical tendency? Again, this looks like the work of people who don't really have much chemical knowledge.

[Figure: compound 0560-0053 from Table S3]

A closer look at the compounds themselves does not inspire any more confidence. There's one of them from Table S3, which showed a very large difference in IC50 across different yeast strains. It was tested at 400 micromolar. That, folks, was sold to the authors of this paper by ChemDiv, as part of a "drug-like compound" library. Try pulling some SMILES strings from that table yourself and see what you think about their drug-likeness.

Comments (129) + TrackBacks (0) | Category: Chemical Biology | Chemical News | The Scientific Literature

April 10, 2014

Encoded Libraries Versus a Protein-Protein Interaction

Posted by Derek

So here's the GSK paper on applying the DNA-encoded library technology to a protein-protein target. I'm particularly interested in seeing the more exotic techniques applied to hard targets like these, because it looks like there are plenty of them where we're going to need all the help we can get. In this case, they're going after integrin LFA-1. That's a key signaling molecule in leukocyte migration during inflammation, and there was an antibody (Raptiva, efalizumab) on the market, until it was withdrawn for too many side effects. (It dialed down the immune system rather too well). But can you replace an antibody with a small molecule?

A lot of people have tried. This is a pretty well-precedented protein-protein interaction for drug discovery, although (as this paper mentions), most of the screens have been direct PPI ones, and most of the compounds found have been allosteric - they fit into another spot on LFA-1 and disrupt the equilibrium between a low-affinity form and the high-affinity one. In this case, though, the GSK folks used their encoded libraries to screen directly against the LFA-1 protein. As usual, the theoretical number of compounds in the collection was bizarre: about 4 billion (it's the substituted triazine library that they've described before).
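
The arithmetic behind numbers like that is simple combinatorics: building-block counts multiplied across the synthesis cycles. (The cycle sizes here are hypothetical, picked only to land near the 4 billion scale - the real library's counts may differ.)

    # Encoded-library size = product of building blocks per combinatorial cycle
    cycle_sizes = [1600, 1600, 1600]  # hypothetical counts per cycle
    total = 1
    for n in cycle_sizes:
        total *= n
    print(f"{total:,} theoretical library members")  # 4,096,000,000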

An indanyl amino acid in one position on the triazine seemed to be a key SAR point in the resulting screen, and there were at least four other substituents at the next triazine point that kept up its activity. Synthesizing these off the DNA tags gave double-digit nanomolar affinities (if they hadn't, we wouldn't be hearing about this work, I'm pretty sure). Developing the SAR from these seems to have gone in classic med-chem fashion, although a lot of classic med-chem programs would very much like to be able to start off with some 50 nM compounds. The compounds were also potent in cell adhesion assays, with an interesting twist - the team also used a mutated form of LFA-1 where a disulfide holds it fixed in the high-affinity state. The known small-molecule allosteric inhibitors work against wild-type in this cell assay, but wipe out against the locked mutant, as they should. These triazines showed the same behavior; they also target the allosteric site.

That probably shouldn't have come as a surprise. Most protein-protein interactions have limited opportunities for small molecules to affect them, and if there's a known friendly spot like the allosteric site here, you'd have to expect that most of your hits are going to be landing on it. You wonder what might happen if you ran the ELT screen against the high-affinity-locked mutant protein - if it's good enough to work in cells, it should be good enough to serve in a screen for non-allosteric compounds. The answer (most likely) is that you sure wouldn't find any 50 nM leads - I wonder what you'd find at all? Running four billion compounds across a protein surface and finding no real hits would be a sobering experience.

The paper finishes up by showing the synthesis of some fluorescently tagged derivatives, and showing that these also work in the cell assay. The last sentence is: "The latter phenomena provided an opportunity for ELT selections against a desired target in its natural state on cell surface. We are currently exploring this technology development opportunity." I wonder if they are? For the same reasons given above, you'd expect to find mostly allosteric binders, and those already seem to be findable. And it's my impression that this is the early-stage ELT stuff (the triazine library); plus, when you look at the list of authors, there are several "Present address" footnotes. So this work was presumably done a while back and is just now coming into the light.

So the question of using this technique against PPI targets remains open, as far as I can tell. This one had already been shown to yield small-molecule hits, and it did so again, in the same binding pocket. What happens when you set out into the unknown? Presumably, GlaxoSmithKline (and the other groups pursuing encoded libraries) know a lot more about this than the rest of us do. Surely some screens like this have been run. Either they came up empty - in which case we'll never hear about them - or they actually yielded something interesting, in which case we'll hear about them over the next few years. If you want to know the answer before then, you're going to have to run some yourself. Isn't that always the way?

Comments (17) + TrackBacks (0) | Category: Chemical Biology | Drug Assays

April 9, 2014

The State of Alzheimer's Research, 2014

Posted by Derek

Via Bernard Munos on Twitter, here's a report from the New York Academy of Sciences looking at the current state of Alzheimer's research. Those various tabs are all live; you can get summaries of each one by clicking.

Looking them over breeds a mixture of hope and despair. The whole thing is themed around the 2025 target that many in the Alzheimer's world have been talking about. And while I understand the need for goals, etc., that year seems way too close. If a promising new compound were to be discovered this afternoon, it wouldn't make it. That brings up another point - many of the speakers at this meeting were talking about moving away from a "compound-centric" point of view. I can see (some of) the point, because there may well be other things to do for Alzheimer's patients. But it's also worth remembering that the reason people are talking like this is that no compounds have worked. This outlook is a second choice driven by necessity, not by some sort of obvious first principle.

And I think that, in the end, Alzheimer's will be arrested by compounds - more than one, most likely, and some of them are quite possibly going to be biomolecules, but compounds all the same. Reading the recommendations about adaptive clinical trials (good idea), broader cooperation and use of common clinical standards (another good idea), and all the others just make me wonder: clinical trials of what? That's the real stumper in this field; where to go next. How to go there is a topic that it's easier to reach agreement on.

Comments (40) + TrackBacks (0) | Category: Alzheimer's Disease

AstraZeneca's Cambridge Move

Posted by Derek

Here's more on AstraZeneca's move to Cambridge (UK). They've set up an agreement with the Medical Research Council to have MRC people working "alongside" AZ people, although details seem pretty short on how that's going to happen in practice. Here's some of it, though:

Within the AstraZeneca MRC UK Centre for Lead Discovery, the academics will get access to more than 2 million compounds in AstraZeneca's library and have the use of high-tech screening equipment to study diseases and possible treatments.

Their research proposals will be assessed by the MRC, which will fund up to 15 projects a year and AstraZeneca will have the first option to license any resulting drug discovery programs.

I liked this part of the article as well:

Other large drugmakers have built research outposts in life science centers like Cambridge, Boston and San Francisco - but none have undertaken such a wholesale move of operations.

The strategy is not without risks, especially if the upheaval disrupts current research projects or results in key staff leaving the company. A smooth transition is seen as a key test for CEO Soriot as he tries to change the culture at AstraZeneca to put science at the center of its activities.

What was at the center of AZ's operations before?

Comments (28) + TrackBacks (0) | Category: Business and Markets

April 8, 2014

A Call For Better Mouse Studies

Posted by Derek

Here's an article by Steve Perrin, at the ALS Therapy Development Institute, and you can tell that he's a pretty frustrated guy. With good reason.
[Chart: attempted replications of reported ALS drug effects in mouse studies]
That chart shows why. Those are attempted replications of putative ALS drug studies, and you can see that there's a bit of a discrepancy here and there. One problem is poorly run mouse studies, and the TDI has been trying to do something about that:

After nearly a decade of validation work, the ALS TDI introduced guidelines that should reduce the number of false positives in preclinical studies and so prevent unwarranted clinical trials. The recommendations, which pertain to other diseases too, include: rigorously assessing animals' physical and biochemical traits in terms of human disease; characterizing when disease symptoms and death occur and being alert to unexpected variation; and creating a mathematical model to aid experimental design, including how many mice must be included in a study. It is astonishing how often such straightforward steps are overlooked. It is hard to find a publication, for example, in which a preclinical animal study is backed by statistical models to minimize experimental noise.

All true, and we'd be a lot better off if such recommendations were followed more often. Crappy animal data is far worse than no animal data at all. But the other part of the problem is that the mouse models of ALS aren't very good:

. . .Mouse models expressing a mutant form of the RNA binding protein TDP43 show hallmark features of ALS: loss of motor neurons, protein aggregation and progressive muscle atrophy.

But further study of these mice revealed key differences. In patients (and in established mouse models), paralysis progresses over time. However, we did not observe this progression in TDP43-mutant mice. Measurements of gait and grip strength showed that their muscle deficits were in fact mild, and post-mortem examination found that the animals died not of progressive muscle atrophy, but of acute bowel obstruction caused by deterioration of smooth muscles in the gut. Although the existing TDP43-mutant mice may be useful for studying drugs' effects on certain disease mechanisms, a drug's ability to extend survival would most probably be irrelevant to people.

A big problem is that the recent emphasis on translational research in academia is going to land many labs right in the middle of these problems. As the rest of that Nature article shows, the ways for a mouse study to go wrong are many, various, and subtle. If you don't pay very close attention, and have people who know what to pay attention to, you could be wasting time, money, and animals to generate data that will go on to waste still more of all three. I'd strongly urge anyone doing rodent studies, and especially labs that haven't done or commissioned very many of them before, to read up on these issues in detail. It slows things down, true, and it costs money. But there are worse things.
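
On the how-many-mice question from those guidelines, the calculation itself is the easy part. A minimal sketch with statsmodels (the effect size, alpha, and power are generic placeholders, not the TDI's actual numbers):

    from statsmodels.stats.power import TTestIndPower

    # Mice per group to detect a medium effect (Cohen's d = 0.5)
    # at alpha = 0.05 with 80% power, for a two-group comparison
    n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(f"~{n:.0f} mice per group")  # ~64

Numbers like that are sobering: honestly detecting a modest effect takes far more animals than many published studies enroll.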

Comments (19) + TrackBacks (0) | Category: Animal Testing | The Central Nervous System

Biotech Boom, Biotech Bust?

Posted by Derek

Here's a good one by Matthew Herper on "Three Misplaced Assumptions That Could End the Biotech Boom". Given the way the biotech stock index has been performing lately, with a horrendous March and April that's taken it into negative territory for the year to date, something definitely seems to be causing a change of mind.

I'll let you see what Herper's three assumptions are, but I can tell you already that they sound valid to me. I think that his points are particularly relevant to investors who may have been jumping on the stocks in the area without having a clear idea of what the industry is really like. As he says, ". . .investors should avoid thinking that the drug business has undergone a fundamental change in the past few years. It hasn’t." It's the same fun-filled thrill ride as ever!

Comments (2) + TrackBacks (0) | Category: Business and Markets

Can You Patent A Natural Product? Prepare For a Different Answer

Posted by Derek

So, can you patent naturally occurring substances, or not? That's a rather complicated question, and some recent Supreme Court decisions have recomplicated it in US patent law: Mayo v. Prometheus and Assoc. Mol. Pathology v. Myriad Genetics. The latter, especially, has sent the PTO (and the IP lawyers) back to staring out their respective windows, thinking about what to do next.

The Patent Office has now issued new guidelines for its examiners in light of these rulings, though, and things may be changing. Previous standards for patenting naturally occurring compounds have been tightened up - if I'm reading this correctly, no longer is the process of isolation and purification itself seen as enough of a modification to make a case for patentability. The four "judicial exception" categories, to be used in patentability decisions, are (1) abstract ideas, (2) laws of nature, (3) natural phenomena, and (4) natural products. And examiners are specifically asked to determine if a patent application's claims recite something "significantly different" than these.

Here's the blog of an IP firm that thinks that the USPTO has gone too far:

Now we learn that grant of these and similar patents were mistakes, that 100 years of consistent practice in the field of patents was wrong, that what was invented was no more than products of nature without significant structural difference from the naturally-occurring materials, and that the USPTO will endeavour to avoid such mistakes in future. . .

. . .Whatever workable rule of law is derivable from Prometheus, it is apparent from the opinion of Justice Breyer that it was not the Court’s intention to bring about a radical change in pharmaceutical practice. The opinion gives a warning against undue breadth:

“The Court has recognized, however, that too broad an interpretation of this exclusionary principle could eviscerate patent law. For all inventions at some level embody, use, reflect, rest upon, or apply laws of nature, natural phenomena, or abstract ideas.”

The problem (and it's the usual problem with fresh patent law) is that we really don't know what the phrases in the decisions or guidance mean, in practice, until there's been some practice. This is going to be thrashed out application by application, lawsuit by lawsuit, until some new equilibrium is reached. Right now, though, if you're trying to patent something that could be considered an isolated natural product, your life has become much more complicated and uncertain. Here's another IP law firm:

What is the "significantly different" standard? With respect to natural products, the Guidance offers that what is claimed should be "non-naturally occurring and markedly different in structure from the naturally occurring products". Again, it is unclear at this point how different "markedly different" will be. How different it needs to be will be worked out on a case-by-case basis, beginning at the level of the patent examiner at the USPTO.

So how can you protect your IP if it involves subject matter that could be considered a "product of nature" by a US examiner? Since we don't yet really know how different "markedly different" is, one prudent strategy would be to include multiple claims having varying degrees of modifications relative to the naturally occurring thing, to the extent these make sense commercially and scientifically. The more different your claimed product is from the naturally occurring thing, the more likely it is to be considered patent eligible by the USPTO.

Comments (19) + TrackBacks (0) | Category: Patents and IP

April 7, 2014

Is Palbociclib Promising? Or Not?

Posted by Derek

Here's a good test for whatever news outlets you might be using for biotech information. How are they handling Pfizer's release of palbociclib information from the AACR meeting over the weekend?

Do a news search for the drug's name, and you'll see headline after headline. Many of them include the phrase "Promising Results". And from one standpoint, those words are justified. The drug showed a near-doubling in progression-free survival (PFS) when added to the standard of care, and you'd think that that has to be good. But a first analysis of overall survival (OS) shows no statistically significant improvement.

Now, how can that be? One possibility is that the drug helps hold advanced breast cancer back, until a population of cells breaks through - and when they do, it's a very fast-moving bunch indeed. Pfizer, for its part, is certainly hoping that further collection of data will start to show a real OS effect. They're going to need to - Avastin's provisional approval for breast cancer was based on earlier PFS numbers, which did not hold up when OS data came in. And that approval was revoked, as it should have been. Now, Avastin also had side effect issues, and quality-of-life issues, so these cases aren't directly comparable. But the FDA really wants to see a survival benefit, and that's what a new cancer drug really should offer. "You'll die at the same time, but with fewer tumors, and out more money" is not an appealing sales pitch. This issue has come up several times before, with other drugs, and it will come up again.
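
(For the curious, here's roughly how such survival endpoints get compared - a sketch on simulated numbers, assuming the lifelines package, with nothing taken from the actual trial data:)

    import numpy as np
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(0)
    # Hypothetical median PFS: ~10 months control, ~20 months with the drug
    control = rng.exponential(10 / np.log(2), 100)
    treated = rng.exponential(20 / np.log(2), 100)
    cutoff = 24.0  # censor at a 24-month analysis cutoff
    result = logrank_test(
        np.minimum(control, cutoff), np.minimum(treated, cutoff),
        event_observed_A=control < cutoff, event_observed_B=treated < cutoff,
    )
    print(f"log-rank p = {result.p_value:.4f}")

The same machinery applied to overall survival times is where palbociclib's data, so far, come up short.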

You'd think that a PFS effect like palbociclib's should translate into a real survival benefit, and as more data are added, it may well. But it's surely not going to be as impressive as people had hoped for, or it would have been apparent in the data we have. So take a look at the stories you're reading on the drug: if they mention this issue, good. If they just talk about what a promising drug for breast cancer palbociclib is, then that reporter (and that news outlet) is not providing the full story. (Here's one that does).

Update: there is an ongoing Phase III that's more specifically looking at overall survival. Its results will be awaited with great interest. . .

Comments (22) + TrackBacks (0) | Category: Cancer | Press Coverage

Outsourcing Everything

Posted by Derek

Here's an article in Drug Discovery Today on "virtual pharmaceutical companies", and people who've been around the industry for some years must be stifling yawns already. That idea has been around a long time. The authors here defined a "VPC" as one that has a small managerial core, and outsources almost everything else:

The goal of a VPC is to reach fast proof of concept (PoC) at modest cost, which is enabled by the lack of expensive corporate infrastructure to be used for the project and by foregoing activities, such as synthesis optimization, which are unnecessary for the demonstration of PoC. . .The term ‘virtual’ refers to the business model of such a company based on the managerial core, which coordinates all activities with external providers, and on the lack of internal production or development facilities, rather than to the usage of the internet or electronic communication. Any service provider available on the market can be chosen for a project, because almost no internal investments in fixed assets are made.

And by necessity, such a company lives only to make deals with a bigger (non-virtual) company, one that can actually do the clinical trials, manufacturing, regulatory, sales and so on. There's another necessity - such a company has to get pretty nice chemical matter pretty quickly, it seems to me, in order to have something to develop. The longer you go digging through different chemical series and funny-looking SAR, all while doing it with outsourced chemistry and biology, the worse off you're going to be. If things are straightforward, it could work - but when things are straightforward, a lot of stuff can work. The point of having your own scientists (well, one big point) is for them to be able to react in real time to data and make their own decisions on where to go next. The better outsourcing people can do some of that, too, but their costs are not that big a savings, for that very reason. And it's never going to be as nimble as having your own researchers in-house. (If your own people aren't any more nimble than lower-priced contract workers, you have a different problem).

The people actually doing the managing have to be rather competent, too:

All these points suggest that the know-how and abilities of the members of the core management team are central to the success of a VPC, because they are the only ones with the full in-depth knowledge concerning the project. The managers must have strong industrial and academic networks, be decisive and unafraid to pull the plug on unpromising projects. They further need extensive expertise in drug development and clinical trial conduction, proven leadership and project management skills, entrepreneurial spirit and proficiency in handling suppliers. Of course, the crucial dependency on the skills of every single team member leaves little room for mistakes or incompetency, and the survival of a VPC might be endangered if one of its core members resigns unexpectedly

I think that the authors wanted to say "incompetence" rather than "incompetency" up there, but I believe that they're all native German speakers, so no problem. If that had come from some US-based consultants, I would have put it down to the same mental habit that makes people say "utilized" instead of "used". But the point is a good one: the smaller the organization, the less room there is to hide. A really large company can hold (and indeed, tends to accumulate) plenty of people who need the cover.

The paper goes on to detail several different ways that a VPC can work with a larger company. One of the ones I'm most curious about is the example furnished by Chorus and Eli Lilly. Chorus was founded from within Lilly as a do-everything-by-outsourcing team, and over the years, Lilly's made a number of glowing statements about how well they've worked out. I have, of course, no inside knowledge on the subject, but at the same time, many other large companies seem to have passed on the opportunity to do the same thing.

I continue to see the "VPC" model as a real option, but only in special situations. When there's a leg up on the chemistry and/or biology (a program abandoned by a larger company for business reasons, an older compound repurposed), then I think it can work. Trying it completely from the ground up, though, seems problematic to me - although that could be because I've always worked in companies with in-house research. And it's true that even the stuff that's going on right down the hall doesn't work out all that often. One response to that is to say "Well, then, why not do the same thing more cheaply?" But another response is "If the odds are bad with your own people under your own roof, what are they when you contract everything out?"

Comments (28) + TrackBacks (0) | Category: Business and Markets | Drug Development

Cancer Immunotherapy's Growing Pains

Posted by Derek

Cancer immunotherapy, which I've written about several times here (and which has claimed the constant attention of biopharma investors for some time now) has run into an inevitable difficulty: its patients are very sick, and its effects are very strong. Sloan-Kettering announced over the weekend that it's having to halt recruitment in a chimeric antigen receptor (CAR) T-cell trial against non-Hodgkin's lymphoma:

Six patients died of either disease relapse or progression, said MSK, while two patients died in remission from complications related to allogeneic bone marrow transplantation. An additional two patients died within two weeks of receiving a CAR-T cell infusion.

"The first of these two patients had a prior history of cardiac disease and the second patient died due to complications related to persistent seizure activity," noted MSK's presentation. "As a matter of routine review of adverse events on study, our center made a decision to pause enrollment and review these two patients in detail."

This study is associated with Juno Therapeutics, and the company says that it expects to continue once the review is finished. There's a huge amount of activity in this area, with Juno as one of the main players, and Novartis (who are working with the team at Penn) as another. Unfortunately, that activity is both legal and scientific; the patent situation in this area has yet to be clarified. This is an extremely promising approach, but it has a long way to go.

Comments (8) + TrackBacks (0) | Category: Cancer | Clinical Trials

April 4, 2014

GSK Dismisses Employees in Bribery Scandal. Apparently.

Posted by Derek

Someone is letting it be known that GlaxoSmithKline has fired some of its employees in China in relation to the long-running bribery scandal there. This is one of those times when it's worth asking the "Cui bono?" follow-up question. Is this some sort of semi-authorized release, designed to show other GSK employees that the company is serious? Or to demonstrate the same, publicly, to the Chinese authorities? Or is someone honestly just letting this information out on their own - and if so, why?

Comments (20) + TrackBacks (0) | Category: Business and Markets

Ancient Modeling

Posted by Derek

I really got a kick out of this picture that Wavefunction put up on Twitter last night. It's from a 1981 article in Fortune, and you'll just have to see the quality of the computer graphics to really appreciate it.

That sort of thing has hurt computer-aided drug design a vast amount over the years. It's safe to say that in 1981, Merck scientists did not (as the article asserts) "design drugs and check out their properties without leaving their consoles". It's 2014 and we can't do it like that yet. Whoever wrote that article, though, picked those ideas up from the people at Merck, with their fuzzy black-and-white monitor shots of DNA from three angles. (An old Evans and Sutherland terminal?) And who knows, some of the Merck folks may have even believed that they were close to doing it.

But computational power, for the most part, only helps you out when you already know how to calculate something. Then it does it for you faster. And when people are impressed (as they should be) with all that processing power can do for us now, from smart phones on up, they should still realize that these things are examples of fast, smooth, well-optimized versions of things that we know how to calculate. You could write down everything that's going on inside a smart phone with pencil and paper, and show exactly what it's working out when it displays this pixel here, that pixel there, this call to that subroutine, which calculates the value for that parameter over there as the screen responds to the presence of your finger, and so on. It would be wildly tedious, but you could do it, given time. Someone, after all, had to program all that stuff, and programming steps can be written down.

The programs that drove those old DNA pictures could be written down, too, of course, and in a lot less space. But while the values for which pixels to light up on the CRT display were calculated exactly, the calculations behind those were (and are) a different matter. A very precise-looking picture can be drawn and animated of an animal that does not exist, and there are a lot of ways to draw animals that do not exist. The horse on your screen might look exact in every detail, except with a paisley hide and purple hooves (my daughter would gladly pay to ride one). Or it might have a platypus bill instead of a muzzle. Or look just like a horse from outside, but actually be filled with helium, because your program doesn't know how to handle horse innards. You get the idea.

The same for DNA, or a protein target. In 1981, figuring out exactly what happened as a transcription factor approached a section of DNA was not possible. Not to the degree that a drug designer would need. The changing conformation of the protein as it approaches the electrostatic field of the charged phosphate residues, what to do with the water molecules between the two as they come closer, the first binding event (what is it?) between the transcription factor and the double helix, leading to a cascade of tradeoffs between entropy and enthalpy as the two biomolecules adjust to each other in an intricate tandem dance down to a lower energy state. . .that stuff is hard. It's still hard. We don't know how to model some of those things well enough, and the (as yet unavoidable) errors and uncertainties in each step accumulate the further you go along. We're much better at it than we used to be, and getting better all the time, but there's a good way to go yet.
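
To put a number on that accumulation: if the errors at each stage are independent, they add in quadrature, so the overall uncertainty grows roughly as the square root of the number of steps. A deliberately crude sketch, with a made-up per-step figure:

    import math

    per_step_error = 1.0  # kcal/mol - hypothetical per-step RMS error
    for n_steps in (1, 4, 9, 16):
        total = math.sqrt(n_steps) * per_step_error
        print(f"{n_steps:2d} steps -> ~{total:.1f} kcal/mol accumulated uncertainty")

And every 1.4 kcal/mol or so of error corresponds to a factor of ten in predicted binding affinity at room temperature.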

But while all that's true, I'm almost certainly reading too much into that old picture. The folks at Merck probably just put one of their more impressive-looking things up on the screen for the Fortune reporter, and hey, everyone's heard of DNA. I really don't think that anyone at Merck was targeting protein-DNA interactions 33 years ago (and if they were, they splintered their lance against that one, big-time). But the reporter came away with the impression that the age of computer-designed drugs was at hand, and in the years since, plenty of other people have seen progressively snazzier graphics and thought the same thing. And it's hurt the cause of modeling for them to think that, because the higher the expectations get, the harder it is to come back to reality.

Update: I had this originally as coming from a Forbes article; it was actually in Fortune.

Comments (22) + TrackBacks (0) | Category: Drug Industry History | In Silico

April 3, 2014

More Fukuyama Corrections

Email This Entry

Posted by Derek

The Fukuyama group has another series of corrections out, this time in JACS. Here's one, and the others follow right behind it in the ASAP queue. This adds to the string of them in Organic Letters. It's more whiteout stuff - vanishing solvent peaks and impurities. These presumably don't affect the conclusions of the papers, but they don't make a person any more confident, either. One hopes that these high-profile cases will shake people up. . .

Comments (24) + TrackBacks (0) | Category: The Scientific Literature

Reality-Based Biotech Investing

Email This Entry

Posted by Derek

David Sable has some useful rules for investing in biotech stocks (more here). On the surface, many of these may look more applicable to people who are managing larger amounts of money, because he's talking about what to do (and not do) when you're walking around the JP Morgan healthcare conference, and so on. But the lessons behind his advice are sound for everyone - for example:

". . .stop looking for code words, Groucho Marx eyebrow raising, or any other type of "body language" silliness from insiders."

The corollary to that is that if you're thinking about investing in a small company that acts as if it's doing this sort of thing, or has been touted to you on the basis of such, turn around and look somewhere else. (Even worse, if you find yourself working for a company like this, you'd better start making plans). This is a sign of what I think of as the "professional wrestling" school of investing - it's the world of the people who see the market as a titanic battle between Good and Evil, the Good being the people who own the wonderful company's stock, and the Evil, naturally, being the Evil Shorts and Paid Bashers. As with other forms of conspiratorial thinking, it's easy for someone with this attitude to dismiss good advice (if exposed to same) by saying that the person offering it is naive - not clued in, wised up, or verb-prepositioned in general. If you knew how the world really works, you'd realize that the recent moves in the stock are all so transparent - it's the money managers, you see, who are trying to shake the shares from the weak hands so they can accumulate them in front of the Big Announcement.

The world doesn't work that way, I think, or not at the retail market level, at any rate. It's not a show, and there's no script. Many people investing in small biotech stocks have a reality-TV view of the world, when reality would serve them far better.

Comments (9) + TrackBacks (0) | Category: Business and Markets

April 2, 2014

Binding Assays, Inside the Actual Cells

Email This Entry

Posted by Derek

Many readers will be familiar, at least in principle, with the "thermal shift assay". It goes by other names as well, but the principle is the same. The idea is that when a ligand binds to a protein, it stabilizes its structure to some degree. This gets measured by watching its behavior as samples of bound and unbound proteins are heated up, and the most common way to detect those changes in protein structure (and stability) is by using a fluorescent dye. Thus another common name for the assay, DSF, for Differential Scanning Fluorimetry. The dye has a better chance to bind to the newly denatured protein once the heat gets to that point, and that binding event can be detected by increasing fluorescence. The assay is popular, since it doesn't require much in the way of specialized equipment and is pretty straightforward to set up, compared to something like SPR. Here's a nice slide presentation that's up on the web from UC Santa Cruz, and here's one of many articles on using the technique for screening.
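
If you're curious about the analysis end of one of these assays, here's a minimal sketch in Python, run on made-up data. Fitting a two-state Boltzmann sigmoid is the standard way to pull a melting temperature out of a DSF trace, but the function names and numbers below are purely illustrative:

    import numpy as np
    from scipy.optimize import curve_fit

    def boltzmann(T, f_min, f_max, tm, slope):
        # Two-state unfolding: fluorescence climbs as the dye finds denatured protein
        return f_min + (f_max - f_min) / (1.0 + np.exp((tm - T) / slope))

    T = np.linspace(25, 95, 71)                  # temperature ramp, deg C
    rng = np.random.default_rng(0)
    apo = boltzmann(T, 0.05, 1.0, 52.0, 1.8) + rng.normal(0, 0.01, T.size)
    bound = boltzmann(T, 0.05, 1.0, 57.5, 1.8) + rng.normal(0, 0.01, T.size)

    p0 = [0.0, 1.0, 55.0, 2.0]                   # rough starting guesses for the fit
    popt_apo, _ = curve_fit(boltzmann, T, apo, p0=p0)
    popt_bound, _ = curve_fit(boltzmann, T, bound, p0=p0)
    print(f"delta-Tm = {popt_bound[2] - popt_apo[2]:+.1f} C")  # positive shift suggests binding

A shift of a few degrees on that readout is the whole assay, which is part of why it's so appealingly easy to set up.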

I bring this up because of this paper last summer in Science, detailing what the authors (a mixed team from Sweden and Singapore) called CETSA, the cellular thermal shift assay. They're trying to do something that is very worthwhile indeed: measuring ligand binding inside living cells. Someone who's never done drug discovery might imagine that that's the sort of thing that we do all the time, but in reality, it's very tricky. You can measure ligand binding to an isolated protein in vitro any number of ways (although they may or may not give you the same answer!), and you can measure downstream effects that you can be more (or less) confident are the result of your compound binding to a cellular target. But direct binding measurements in a living cell are pretty uncommon.

I wish they weren't. Your protein of interest is going to be a different beast when it's on the job in its native environment, compared to sitting around in a well in some buffer solution. There are other proteins for it to interact with, a whole local environment that we don't know enough to replicate. There are modifications to its structure (phosphorylation and others) that you may or may not be aware of, which can change things around. And all of these have a temporal dimension, changing under different cellular states and stresses in ways that are usually flat-out impossible to replicate ex vivo.

Here's what this new paper proposes:

We have developed a process in which multiple aliquots of cell lysate were heated to different temperatures. After cooling, the samples were centrifuged to separate soluble fractions from precipitated proteins. We then quantified the presence of the target protein in the soluble fraction by Western blotting . . .

Surprisingly, when we evaluated the thermal melt curve of four different clinical drug targets in lysates from cultured mammalian cells, all target proteins showed distinct melting curves. When drugs known to bind to these proteins were added to the cell lysates, obvious shifts in the melting curves were detected. . .

That makes it sound like the experiments were all done after the cells were lysed, which wouldn't be that much of a difference from the existing thermal shift assays. But reading on, they then did this experiment with methotrexate and its enzyme target, dihydrofolate reductase (DHFR), along with raltitrexed and its target, thymidylate synthase:

DHFR and TS were used to determine whether CETSA could be used in intact cells as well as in lysates. Cells were exposed to either methotrexate or raltitrexed, washed, heated to different temperatures, cooled, and lysed. The cell lysates were cleared by centrifugation, and the levels of soluble target protein were measured, revealing large thermal shifts for DHFR and TS in treated cells as compared to controls. . .

So the thermal shift part of the experiment is being done inside the cells themselves, and the readout is the amount of non-denatured protein left after lysis and gel purification. That's ingenious, but it's also the sort of idea that (if it did occur to you) you might dismiss as "probably not going to work" and/or "has surely already been tried and didn't work". It's to this team's credit that they ran with it. This proves once again the soundness of Francis Crick's advice (in his memoir What Mad Pursuit and other places) to not pay too much attention to your own reasoning about how your ideas must be flawed. Run the experiment and see.

A number of interesting controls were run. Cell membranes seem to be intact during the heating process, to take care of one big worry. The effect of raltitrexed added to lysate was much greater than when it was added to intact cells, suggesting transport and cell penetration effects. A time course experiment showed that it took two to three hours to saturate the system with the drug. Running the same experiment on starved cells gave a lower effect, and all of these point towards the technique doing what it's supposed to be doing - measuring the effect of drug action in living cells under real-world conditions.

There's even an extension to whole animals, albeit with a covalent compound, the MetAP2 inhibitor TNP-470. It's a fumagillin derivative, so it's a diepoxide to start off, with an extra chloroacetamide for good measure. (You don't need that last reactive group, by the way, as Zafgen's MetAP2 compound demonstrates). The covalency gives you every chance to see the effect if it's going to be seen. Dosing mice with the compound, followed by organ harvesting, cell lysis, and heating after the lysis step showed that it was indeed detectable by thermal shift after isolation of the enzyme, in a dose-responsive manner, and that there was more of it in the kidneys than the liver.

Back in the regular assay, they show several examples of this working on other enzymes, but a particularly good one is PARP. Readers may recall the example of iniparib, which was taken into the clinic as a PARP-1 inhibitor, failed miserably, and was later shown not to really be hitting the target at all in actual cells and animals, as opposed to in vitro assays. CETSA experiments on it versus olaparib, which really does work via PARP-1, confirm this dramatically, and suggest that this assay could have told everyone a long time ago that there was something funny about iniparib in cells. (I should note that PARP has also been a testbed for other interesting cell assay techniques).

This leads to a few thoughts on larger questions. Sanofi went ahead with iniparib because it worked in their assays - turns out it just wasn't working through PARP inhibition, but probably by messing around with various cysteines. They were doing a phenotypic program without knowing it. This CETSA technique is, of course, completely target-directed, unless you feel like doing thermal shift measurements on a few hundred (or few thousand) proteins. But that makes me wonder if that's something that could be done. Is there some way to, say, impregnate the gel with the fluorescent shift dye and measure changes band by band? Probably not (the gel would melt, for one thing), but I (or someone) should listen to Francis Crick and try some variation on this.

I do have one worry. In my experience, thermal shift assays have not been all that useful. But I'm probably looking at a sampling bias, because (1) this technique is often used for screening fragments, where the potencies are not very impressive, and (2) it's often broken out to be used on tricky targets that no one can figure out how to assay any other way. Neither of those are conducive to seeing strong effects; if I'd been doing it on CDK4 or something, I might have a better opinion.

With that in mind, though, I find the whole CETSA idea very interesting, and well worth following up on. Time to look for a chance to try it out!

Comments (34) + TrackBacks (0) | Category: Chemical Biology | Drug Assays

April 1, 2014

Freeman Dyson on the PhD Degree

Email This Entry

Posted by Derek

From this interview:

"Oh, yes. I’m very proud of not having a Ph.D. I think the Ph.D. system is an abomination. It was invented as a system for educating German professors in the 19th century, and it works well under those conditions. It’s good for a very small number of people who are going to spend their lives being professors. But it has become now a kind of union card that you have to have in order to have a job, whether it’s being a professor or other things, and it’s quite inappropriate for that. It forces people to waste years and years of their lives sort of pretending to do research for which they’re not at all well-suited. In the end, they have this piece of paper which says they’re qualified, but it really doesn’t mean anything. The Ph.D. takes far too long and discourages women from becoming scientists, which I consider a great tragedy. So I have opposed it all my life without any success at all. . ."

Comments (68) + TrackBacks (0) | Category: General Scientific News

Off To the Publishers

Email This Entry

Posted by Derek

I don't know if my publisher was pulling my leg by having the deadline for the manuscript of "The Chemistry Book" be April 1, but that's what the contract says. And I've sent the thing off, so it's now in the hands of the editors. There's more to be done - I have some more dates to track down, for one, and I'd like to insert some more references for further reading. And then there are the illustrations, for which I've sent along many suggestions, and I'll need to write the captions for those once we've settled on what pictures to use. But the bulk of the writing is done, I'm glad to say.

Comments (13) + TrackBacks (0) | Category: Blog Housekeeping

Yeah, That Must Be It

Email This Entry

Posted by Derek

I'd sort of suspected this, um, breakthrough in catalysis that See Arr Oh is reporting. But how come more of my reactions don't work, eh? 'Cause there's been all kinds of crud in them, I feel pretty sure. Maybe the various crud subtypes (cruddotypes?) are canceling each other out. . .

Comments (21) + TrackBacks (0) | Category: Chemical News

March 31, 2014

Where The Hot Drugs Come From: Somewhere Else

Email This Entry

Posted by Derek

Over at LifeSciVC, there's a useful look at how many drugs are coming into the larger companies via outside deals. As you might have guessed, the answer is "a lot". Looking at a Goldman Sachs list of "ten drugs that could transform the industry", Bruce Booth says:

By my quick review, it appears as though ~75% of these drugs originated at firms different from the company that owns them today (or owns most of the asset today) – either via in-licensing deal or via corporate acquisitions. Savvy business and corporate development strategies drove the bulk of the list. . .I suspect that in a review of the entire late stage industry pipeline, the imbalanced ratio of external:internal sourcing would largely be intact.

He has details on the ten drugs that Goldman is listing, and on the portfolios of several of the big outfits in the industry, and I think he's right. It would be very instructive to know what the failure rate, industry-wide, of inlicensed compounds like this might be. My guess is that it's still high, but not quite as high as the average for all programs. The inlicensed compounds have had, in theory, more than one set of eyes go over them, and someone had to reach into their wallet after seeing the data, so you'd think that they have to be in a little bit better shape. But a majority still surely fail, given that the industry's rate overall is close to 90% clinical failure (the math doesn't add up if you try to assume that the inlicensed failure rate is too low!).
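
To see why that math pins things down, here's the back-of-the-envelope blending arithmetic in Python - the ~75% external share and ~90% overall failure rate are just the rough figures from above, not precise industry statistics:

    ext_share, overall_failure = 0.75, 0.90   # rough figures, for illustration only

    # overall = ext_share * external + (1 - ext_share) * internal
    for internal_failure in (1.00, 0.95, 0.90):
        external_failure = (overall_failure - (1 - ext_share) * internal_failure) / ext_share
        print(f"internal at {internal_failure:.0%} -> external must be {external_failure:.1%}")

Even in the absurd limit where every internally derived compound fails, the inlicensed failure rate can't sit much below 87%.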

Also of great interest is the "transformational" aspect. We can assume, I think, that most of the inlicensed compounds came from smaller companies - that's certainly how it looks on Bruce's list. This analysis suggested that smaller companies (and university-derived work) produced more innovative drugs than internal big-company programs, and these numbers might well be telling us the same thing.

This topic came up the last time I discussed a post from Bruce, and Bernard Munos suggested in 2009 that this might be the case as well. It's too simplistic to just say Small Companies Good, Big Companies Bad, because there are some real counterexamples to both of those assertions. But overall, averaged over the industry, there might be something to it.

Comments (26) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

A Quick Clean-Up

Email This Entry

Posted by Derek

Well, while I wasn't watching over the weekend, the comments section to this post kind of veered off the road. I've deleted a number of trolling comments, and all the various replies to them, and further comments to that entry are now closed. I rarely do this sort of thing, but (ironically) I was just saying the other evening that pretty much the only time I delete comments is when they're nothing but ad hominem. There are plenty of other places on the web to trade insults and gibberish (some sites specialize in nothing but), so I don't think it's any great loss to the world if this site doesn't join in. We'll now resume our regularly scheduled programming.

Comments (11) + TrackBacks (0) | Category: Blog Housekeeping

March 28, 2014

More on the UT-Austin Retraction Case

Email This Entry

Posted by Derek

I mentioned an unusual retraction from Organic Letters here last year, and here's some follow-up to the story:

Nearly six years after Suvi Orr received a Ph.D. in chemistry from the University of Texas, the university told her it has decided to do something that institutions of higher learning almost never do: revoke the degree. Orr, in turn, has sued UT in an effort to hold onto the doctorate that launched her career in the pharmaceutical industry.

Her lawsuit in state district court in Travis County contends that revocation is unwarranted and that the university violated her rights by not letting her defend herself before the dissertation committee that condemned her research long after she graduated. In addition, she says, the committee relied heavily on her former professor, who, she claims, was motivated to “cast the blame elsewhere.”

What a mess. More details as things develop. . .

Comments (17) + TrackBacks (0) | Category: The Dark Side | The Scientific Literature

A Huntington's Breakthrough?

Email This Entry

Posted by Derek

Huntington's is a terrible disease. It's the perfect example of how genomics can only take you so far. We've known since 1993 what the gene is that's mutated in the disease, and we know the protein that it codes for (Huntingtin). We even know what seems to be wrong with the protein - it has a repeating chain of glutamines on one end. If your tail of glutamines is less than about 35 repeats, then you're not going to get the disease. If you have 36 to 39 repeats, you are in trouble, and may very well come down with the less severe end of Huntington's. If there are 40 or more, doubt is tragically removed.
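
Those cutoffs are unusually crisp, as diseases go - crisp enough to write down as a toy lookup. A sketch (the thresholds come straight from the genetics above; the function name and wording are mine):

    def huntingtin_prognosis(cag_repeats: int) -> str:
        # Thresholds from the HTT CAG-repeat genetics described above
        if cag_repeats <= 35:
            return "not expected to develop Huntington's"
        if cag_repeats <= 39:
            return "incomplete penetrance - may develop the disease"
        return "fully penetrant - will develop the disease"

    for n in (20, 37, 42):
        print(n, "repeats:", huntingtin_prognosis(n))

Very few diseases reduce to anything like that.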

So we can tell, with great precision, if someone is going to come down with Huntington's, but we can't do a damn thing about it. That's because despite a great deal of work, we don't really understand the molecular mechanism at work. This mutated gene codes for this defective protein, but we don't know what it is about that protein that causes particular regions of the brain to deteriorate. No one knows what all of Huntingtin's functions are, and not for lack of trying, and multiple attempts to map out its interactions (and determine how they're altered by a too-long N-terminal glutamine tail) have not given a definite answer.

But maybe, as of this week, that's changed. Solomon Snyder's group at Johns Hopkins has a paper out in Nature that suggests an actual mechanism. They believe that mutant Huntingtin binds (inappropriately) a transcription factor called "specificity protein 1", which is known to be a major player in neurons. Among other things, it's responsible for initiating transcription of the gene for an enzyme called cystathionine γ-lyase. That, in turn, is responsible for the last step in cysteine biosynthesis, and put together, all this suggests a brain-specific depletion of cysteine. Update: this could have numerous downstream consequences - this is the pathway that produces hydrogen sulfide, which the Snyder group has shown is an important neurotransmitter (one of several they've discovered), and it's also involved in synthesizing glutathione. Cysteine itself is, of course, often a crucial amino acid in many protein structures as well.

Snyder is proposing this as the actual mechanism of Huntington's, and they have shown, in human tissue culture and in mouse models of the disease, that supplementation with extra cysteine can stop or reverse the cellular signs of the disease. This is a very plausible theory (it seems to me), and the paper makes a very strong case for it. It should lead to immediate consequences in the clinic, and in the labs researching possible therapies for the disease. And one hopes that it will lead to immediate consequences for Huntington's patients themselves. If I knew someone with the Huntingtin mutation, I believe that I would tell them to waste no time taking cysteine supplements, in the hopes that some of it will reach the brain.

Comments (20) + TrackBacks (0) | Category: Biological News | The Central Nervous System

March 27, 2014

A Look Back at Big Pharma Stocks

Email This Entry

Posted by Derek

Big%20pharma%20since%202010.png
Big%20pharma%20since%202004.png
Big%20pharma%20since%202000.png
Big%20pharma%20since%201990.png
Four years ago, I wrote about what I called "Big Pharma's Lost Decade" in the stock market. I thought it would be worth revisiting that, with some different time points.

At the top is the performance of those same big drug companies since I wrote that blog post. Note that Bristol-Myers Squibb has been the place to be during that period (lots of excitement around their oncology pipeline, for one thing). Pfizer has beaten the S&P index over that time as well. And they've done it while paying a higher dividend than the aggregate S&P, too, of course - I'd like to find a way to include dividends in charts like these for an even more real-world comparison.
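
For what it's worth, the dividend-inclusive version isn't hard to compute if you have the payout history. Here's a minimal total-return sketch, with made-up prices and dividends, assuming each payout is reinvested at that period's price:

    prices = [100.0, 98.0, 105.0, 110.0]   # closing price each period (made up)
    dividends = [0.0, 1.0, 0.0, 1.2]       # dividend paid each period (made up)

    shares = 1.0
    for price, div in zip(prices, dividends):
        shares += shares * div / price     # reinvest the payout in more shares

    print(f"price-only: {prices[-1] / prices[0] - 1:.1%}, "
          f"total return: {shares * prices[-1] / prices[0] - 1:.1%}")

Getting the charting services to overlay that, though, is another matter. Everyone else is behind.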

The next chart shows a ten-year time frame. Bristol-Myers Squibb is still on top, although you'll note that the overall gain is basically the same as the gain since 2010 (that is, it's all come since then). And now J&J is right behind them, and they're the only two whose stock prices have beaten the S&P index over this period. Note that Pfizer and Lilly are actually down from this time point.

Then we have performance since 2000, the twenty-first century chart. Since this was during the Crazy Years in the market, just about everyone is down when measured from here, except for J&J (which is at about the same gain as if you'd started in 2004). The most dramatic mover is Bristol-Myers Squibb - if you bought in at the start of that last chart, you're up 109%. If you bought in at the start of this one, you're down 21%.

And that brings us to the last chart, which is basically "Since I started working in the drug industry". I'd been on the case for about three months by the end of 1990, which is where this one starts. And there are many interesting things to note - first among them, what a big, big deal the latter half of the 1990s were in the stock market. And more specifically, what a big, big deal they were for Pfizer's stock. Holy mackerel, will you look at that chart - compared to the rest of the industry, Pfizer's stock was an absolute monster, and there you have a big driver for all of the company's merger-rific behavior during that period. It paid. Not so much in research results, of course, but it paid the shareholders, and it paid whoever had lots of PFE stock and options. (And it paid the firms on the Street who did the deals with them, too, but that's always the case for them). A really long-term Pfizer shareholder can't be upset at all with the company's performance versus the S&P over that time period. How many have held it, though?

But the other thing to note is J&J. There they are again - it's only in that first chart that they're lagging. Longer-term, they just keep banging away. That, one would have to assume, is at least partly because they've got all those other medical-related businesses keeping them grounded during the whole time. Back when I worked for Bayer, at the Wonder Drug Factory, analysts were forever banging on about how the company just had to, had to break up. Outdated conglomerate model, holding everyone back. So much hotness waiting to be released. But Bayer hasn't been holding up too badly, either, and Bernard Munos has some things to say about both them and J&J.

It is not a good idea (to "undiversify") because, at the moment, we do not have good tools to mitigate risk in drug R&D, which is a problem at the macroeconomic level, because capital does not flow to this industry as it should. Too many investors have been burned too badly and are now investing elsewhere or sitting on the fence, so we need to somehow get better at that. . .we've got to live with the situation where risk in the pharmaceutical industry cannot really be mitigated adequately. You can do portfolio management. Every company has done portfolio management. It has failed miserably across the board. That was supposed to protect everybody against patent cliffs, and everybody has fallen down patent cliffs, so clearly portfolio management has not worked.

Mind you, "undiversifying" is exactly what Pfizer is trying right now. They're not only trying to undo some of the gigantism of all those mergers, they're shedding whatever they have that is Not Pharma. So they're running that experiment for us, as they have some others over the years. . .

Comments (14) + TrackBacks (0) | Category: Business and Markets

Another Target Validation Effort

Email This Entry

Posted by Derek

Here's another target validation initiative, with GSK, the EMBL, and the Sanger Institute joining forces. It's the Centre for Therapeutic Target Validation (CTTV):

CTTV scientists will combine their expertise to explore and interpret large volumes of data from genomics, proteomics, chemistry and disease biology. The new approach will complement existing methods of target validation, including analysis of published research on known biological processes, preclinical animal modelling and studying disease epidemiology. . .

This new collaboration draws on the diverse, specialised skills from scientific institutes and the pharmaceutical industry. Scientists from the Wellcome Trust Sanger Institute will contribute their unique understanding of the role of genetics in health and disease and EMBL-EBI, a global leader in the analysis and dissemination of biological data, will provide bioinformatics-led insights on the data and use its capabilities to integrate huge streams of different varieties of experimental data. GSK will contribute expertise in disease biology, translational medicine and drug discovery.

That's about as much detail as one could expect for now. It's hard to tell what sorts of targets they'll be working on, and by "what sorts" I mean what disease areas, what stage of knowledge, what provenance, and everything else. But the press release goes on to say that the information gathered by this effort will be open to the rest of the scientific community, which I applaud, and that should give us a chance to look under the hood a bit.

It's hard for me to say anything bad about such an effort, other than wishing it done on a larger scale. I was about to say "other than wishing it ten times larger", but I think I'd rather have nine other independent efforts set up than making this one huge, for several reasons. Quis validet ipsos validares, if that's a Latin verb and I haven't mangled it: Who will validate the validators? There's enough trickiness and uncertainty in this stuff for plenty more people to join in.

Comments (11) + TrackBacks (0) | Category: Biological News | Drug Assays

Dichloroacetic Acid, In a New Form

Email This Entry

Posted by Derek

Remember dichloroacetic acid? In 2007, there was a stir about it as a cancer therapy, and on internet forums you still see it referenced as a "cancer cure" that no drug company will touch because it's unpatentable/doesn't have to be taken forever/too cheap/not evil enough, etc.

The people spreading that stuff around don't know how to use PubMed, because a look through the literature will show that DCA is still an active area of research (in some cases, involving people who've taken it on their own). Interestingly, PubMed also makes it apparent that the rest of the literature on the compound is in its role as a water pollutant. But the problem with it as a drug is that it has poor pharmacokinetics. Its site of action is the mitochondrion, but it doesn't do a very good job of getting there (as one would expect from a small molecular weight carboxylic acid, especially one that's as ionized as this one is at body pH).
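
That ionization point is worth a quick Henderson-Hasselbalch check. A minimal sketch - the pKa of about 1.35 for dichloroacetic acid is a literature ballpark figure:

    pKa, pH = 1.35, 7.4
    fraction_ionized = 1.0 / (1.0 + 10 ** (pKa - pH))   # Henderson-Hasselbalch, for an acid
    print(f"fraction ionized at pH {pH}: {fraction_ionized:.7f}")

That comes out to better than 99.9999% deprotonated at body pH, which is not the sort of species that wanders across membranes and into mitochondria on its own.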

So here's an attempt to do something about that. The authors, from the University of Georgia, tether several DCA molecules to a scaffold that should do a better job of targeting mitochondria. They go as far as cellular data to prove the point, but there's nothing in vivo (I'm not sure what would happen in that case, but it would seem worth finding out).

This, one should note, is a new molecule, and one that was perfectly capable of being patented - it has novelty, and it apparently has more utility for its stated purpose. Every time you hear about how Evil Pharma won't work on X, or Y, or Z, because "they can't patent it", keep in mind that we here at Evil Pharma know a lot of ways to patent things. Part of what makes us so darn evil, you know.

Comments (18) + TrackBacks (0) | Category: Cancer

March 26, 2014

A New Fluorination

Email This Entry

Posted by Derek

BrittonF.pngNew fluorination reactions are always welcome, and there's one out in Angew. Chem. that looks really interesting. Robert Britton's group at Simon Fraser University reports using tetrabutylammonium decatungstate as a photocatalyst with N-fluorobenzenesulfonimide (NFSI). This system fluorinates unsubstituted alkanes, as shown at left, and apparently tolerates several functional groups in the process.

Note that the amino acids were fluorinated as their hydrochloride salts; the free bases didn't work. There aren't any secondary or tertiary amine substrates in the paper, nor are there any heterocycles, both of which are cause to wonder whenever you see a new fluorination method. But I think I'm going to order up some tungstate, turn on the lamp, and see what I get.

Update: via Chemjobber, here's an excellent process chemistry look at scaling up a trifluoromethylation reaction.

Comments (16) + TrackBacks (0) | Category: Chemical News

Getcher Nucleic Acids, Cheap

Email This Entry

Posted by Derek

Via Nathaniel Comfort on Twitter, I note that the health-food people are still selling "DNA supplements". I remember seeing these in a vitamin store some years ago, and wrinkling my brow as I thought about the implications. Does your food have enough DNA in it? Actually, these pills turn out to be 100mg of RNA and only 10mg of DNA, so you might want to adjust your dosages accordingly.

Turns out that the only negative review on the actual site is from someone who's upset that there's so much filler in the pills themselves. More DNA is what he wants. He should try what another guy further down the page does, and swallow five of the things at a time. It gives him "energy", y'know, and he's not alone. Every one of these satisfied customers has felt the energy, and some of them even have picked up a healthy glow to their skin. So there you have it. I thought that peanut M&Ms gave me energy (although maybe not the healthy glow), but I should clearly start snacking on RNA instead.

When I called my wife with this news, her first comment was "RNA from what?" I countered that a whole bottle of pills was only $4.99, and this was (brace yourselves) fifty per cent off the usual price. (In the reviews, one customer found this price very "exceptable"). Anyway, I said, this was not the time to be looking under the hood of such an opportunity. "And how much is shipping?" she wanted to know. I replied that I'm really not sure how I'm still married to her, what with that suspicious nature and all. I tell you.

Comments (31) + TrackBacks (0) | Category: Snake Oil

March 25, 2014

A New Way to Study Hepatotoxicity

Email This Entry

Posted by Derek

Every medicinal chemist fears and respects the liver. That's where our drugs go to die, or at least to be severely tested by that organ's array of powerful metabolizing enzymes. Getting a read on a drug candidate's hepatic stability is a crucial part of drug development, but there's an even bigger prize out there: predicting outright liver toxicity. That, when it happens, is very bad news indeed, and can torpedo a clinical compound that seemed to be doing just fine - up until then.

Unfortunately, getting a handle on liver tox has been difficult, even with such strong motivation. It's a tough problem. And given that most drugs are not hepatotoxic, most of the time, any new assay that overpredicts liver tox might be even worse than no assay at all. There's a paper in the latest Nature Biotechnology, though, that looks promising.

What the authors (from Stanford and Toronto) are doing is trying to step back to the early mechanism of liver damage. One hypothesis has been that the production of reactive oxygen species (ROS) inside hepatic cells is the initial signal of trouble. ROS are known to damage biomolecules, of course. But more subtly, they're also known to be involved in a number of pathways used to sense that cellular damage (and in that capacity, seem to be key players in inducing the beneficial effects of exercise, among other things). Aerobic cells have had to deal with the downsides of oxygen for so long that they've learned to make the most of it.
isoniazid%20image.png
This work (building on some previous studies from the same group) uses polymeric nanoparticles. They're semiconductors, and hooked up to be part of a fluorescence or chemiluminescence readout. (They use FRET for peroxynitrite and hypochlorite detection, more indicative of mitochondrial toxicity, and CRET for hydrogen peroxide, more indicative of Phase I metabolic toxicity). The particles are galactosylated to send them towards the liver cells in vivo, confirmed by necropsy and by confocal imaging. The assay system seemed to work well by itself, and in mouse serum, so they dosed it into mice and looked for what happened when the animals were given toxic doses of either acetaminophen or isoniazid (both well-known hepatotox compounds at high levels). And it seems to work pretty well - they could image both the fluorescence and the chemiluminescence across a time course, and the dose/responses make sense. It looks like they're picking up nanomolar to micromolar levels of reactive species. They could also show the expected rescue of the acetaminophen toxicity with some known agents (like GSH), but could also see differences between them, both in the magnitude of the effects and their time courses as well.

The chemiluminescent detection has been done before, as has the FRET one, but this one seems to be more convenient to dose, and having both ROS detection systems going at once is nice, too. One hopes that this sort of thing really can provide a way to get a solid in vivo read on hepatotoxicity, because we sure need one. Toxicologists tend to be a conservative bunch, with good reason, so don't look for this to revolutionize the field by the end of the year or anything. But there's a lot of promise here.

There are some things to look out for, though. For one, since these are necessarily being done in rodents, there will be differences in metabolism that will have to be taken into account, and some of those can be rather large. Not everything that injures a mouse liver will do so in humans, and vice versa. It's also worth remembering that hepatotoxicity is a major problem with marketed drugs, too. That's going to be a much tougher problem to deal with, because some of these cases are due to overdose, some to drug-drug interactions, some to drug-alcohol interactions, and some to factors that no one's been able to pin down. One hopes, though, that if more drugs come through with a clean liver profile, these problems might ease a bit.

Comments (13) + TrackBacks (0) | Category: Drug Assays | Drug Development | Pharmacokinetics | Toxicology

March 24, 2014

Google's Big Data Flu Flop

Email This Entry

Posted by Derek

Some of you may remember the "Google Flu" effort, where the company was going to try to track outbreaks of influenza in the US by mining Google queries. There was never much clarification about what terms, exactly, they were going to flag as being indicative of someone coming down with the flu, but the hype (or hope) at the time was pretty strong:

Because the relative frequency of certain queries is highly correlated with the percentage of physician visits in which a patient presents with influenza-like symptoms, we can accurately estimate the current level of weekly influenza activity in each region of the United States, with a reporting lag of about one day. . .

So how'd that work out? Not so well. Despite a 2011 paper that seemed to suggest things were going well, the 2013 epidemic wrong-footed the Google Flu Trends (GFT) algorithms pretty thoroughly.

This article in Science finds that the real-world predictive power has been pretty unimpressive. And the reasons behind this failure are not hard to understand, nor were they hard to predict. Anyone who's ever worked with clinical trial data will see this one coming:

The initial version of GFT was a particularly problematic marriage of big and small data. Essentially, the methodology was to find the best matches among 50 million search terms to fit 1152 data points. The odds of finding search terms that match the propensity of the flu but are structurally unrelated, and so do not predict the future, were quite high. GFT developers, in fact, report weeding out seasonal search terms unrelated to the flu but strongly correlated to the CDC data, such as those regarding high school basketball. This should have been a warning that the big data were overfitting the small number of cases—a standard concern in data analysis. This ad hoc method of throwing out peculiar search terms failed when GFT completely missed the nonseasonal 2009 influenza A–H1N1 pandemic.
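
That 50-million-terms-against-1152-data-points mismatch is easy to demonstrate for yourself, at a smaller scale. Here's a toy simulation in Python - every number is invented, and the "search terms" here are pure noise by construction:

    import numpy as np

    rng = np.random.default_rng(1)
    n_weeks, n_terms = 100, 50_000
    flu = rng.normal(size=n_weeks)                  # stand-in for the CDC flu series
    terms = rng.normal(size=(n_terms, n_weeks))     # random, structurally unrelated "queries"

    # Pearson correlation of each candidate term with the target series
    flu_c = flu - flu.mean()
    terms_c = terms - terms.mean(axis=1, keepdims=True)
    corrs = (terms_c @ flu_c) / (np.linalg.norm(terms_c, axis=1) * np.linalg.norm(flu_c))
    print(f"best |r| among {n_terms} random terms: {np.abs(corrs).max():.2f}")

With fifty thousand candidate predictors and only a hundred observations, the best chance correlation typically lands above 0.4 - impressive-looking, and completely useless for predicting the next season.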

The Science authors have a larger point to make as well:

“Big data hubris” is the often implicit assumption that big data are a substitute for, rather than a supplement to, traditional data collection and analysis. Elsewhere, we have asserted that there are enormous scientific possibilities in big data. However, quantity of data does not mean that one can ignore foundational issues of measurement and construct validity and reliability and dependencies among data. The core challenge is that most big data that have received popular attention are not the output of instruments designed to produce valid and reliable data amenable for scientific analysis.

The quality of the data matters very, very much, and quantity is no substitute. You can make a very large and complex structure out of toothpicks and scraps of wood, because those units are well-defined and solid. You cannot do the same with a pile of cotton balls and dryer lint, not even if you have an entire warehouse full of the stuff. If the individual data points are squishy, adding more of them will not fix your analysis problem; it will make it worse.

Since 2011, GFT has missed (almost invariably on the high side) for 108 out of 111 weeks. As the authors show, even low-tech extrapolation from three-week-lagging CDC data would have done a better job. But then, the CDC data are a lot closer to being real numbers. Something to think about next time someone's trying to sell you on a Big Data project. Only trust the big data when the little data are trustworthy in turn.

Update: a glass-half-full response in the comments.

Comments (18) + TrackBacks (0) | Category: Biological News | Clinical Trials | Infectious Diseases

Nitrogen Heterocycles Ahoy

Email This Entry

Posted by Derek

Here's the sort of review that every working medicinal chemist will want to take a look at: Jeffrey Bode and graduate student Cam-Van T. Vo are looking at recent methods to prepare saturated nitrogen heterocycles. If you do drug discovery, odds are that you've worked with more piperidines, pyrrolidines, piperazines, morpholines, etc. than sticks can be shaken at. New ways to make substituted variations on these are always welcome, and it's good to see the latest work brought together into one place.

There's still an awful lot to do in this area, though. As the review mentions, a great many methods rely on nitrogen protecting groups. From personal experience, I can tell you that my heart sinks a bit when I see some nice ring-forming reaction in the literature and only then notice that the piperidine (or what have you) has a little "Ts" or "Ns" stuck to it. I know that these things can be taken off, but it's still a pain to do, and especially if you want to make a series of compounds. Protecting-group-free routes in saturated heterocyclic chemistry are welcome indeed.

Comments (9) + TrackBacks (0) | Category: Chemical News

Ezetimibe In the Marketplace

Email This Entry

Posted by Derek

Several years ago, the Schering-Plough cholesterol absorption inhibitor (Zetia, ezetimibe) and its combination pill with simvastatin (Vytorin) were the subject of a lot of puzzled controversy. A clinical trial (ENHANCE) looking at arterial wall thickness in patients with familial hypercholesterolemia had unexpectedly shown little or no benefit, although statins themselves had worked in this population. This led to plenty of (still unresolved) speculation about the drug's mechanism of action, whether it really was going to be of benefit to the wider patient population, what this meant for the surrogate endpoint of LDL lowering (which the drug does accomplish), and so on.

Sales of both Zetia and Vytorin took a hit, naturally. But a new editorial in JAMA wonders why they're selling at all, and particularly, why they're doing so well in Canada. A new paper in the American Heart Journal shows that ezetimibe sales in the US went down 47% in the year after the ENHANCE results came out. But in Canada, it just kept rolling along. (Even after the decline, though, it's still used more in the US).

What's causing this? Quite likely, an over-focus on cholesterol levels themselves:

Krumholz, one of the coauthors on the study with Jackevicius, remains perplexed as to the continuing popularity of ezetimibe. “The drug continues to defy gravity, and that’s probably a result of really strong marketing and the singular focus on cholesterol numbers,” he said.

Krumholz said heart health campaigns urging patients to “know your numbers” and treatment goals based on cholesterol measurements, such as getting asymptomatic individuals’ LDL-C levels below 130 mg/dL, have worked in ezetimibe’s favor at the expense of evidence-based medicine. “Is this the drug that lowers your LDL-C and helps you? We don’t know that,” he said. “The comfort of hitting a target offers little benefit if you don’t know that it is really protecting you.”

The funny thing is, all that emphasis on LDL assay numbers was supposed to be "evidence-based medicine". But that's the funny thing about science - the evidence keeps leading you in new directions.

Comments (4) + TrackBacks (0) | Category: Cardiovascular Disease

March 21, 2014

Dosing by Body Surface Area

Email This Entry

Posted by Derek

We were talking about allometry around here the other day, which prompts me to mention this paper. It used the reports of resveratrol dosing in animals, crudely extrapolated to humans, to argue that the body surface area normalization (BSA) method was a superior technique for dose estimation across species.

Over the years, though, the BSA method has taken some flak in the literature. It's most widely used in oncology, especially with cytotoxics, but there have been calls to move away from the practice, calling it a relic with little scientific foundation. (The rise of a very obese patient population has also led to controversy about whether body weight or surface area is a more appropriate dose-estimation method in those situations). At the same time, it's proven useful in some other situations, so it can't be completely ignored.
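
For those who haven't run into the actual calculation, here's what BSA-based scaling looks like in practice - a minimal sketch using the commonly tabulated Km factors (body weight divided by surface area). These are the standard guidance-table values, and this is an illustration of the method, nothing more:

    KM = {"mouse": 3, "rat": 6, "rabbit": 12, "dog": 20, "human": 37}

    def human_equivalent_dose(animal_dose_mg_per_kg, species):
        # HED (mg/kg) = animal dose (mg/kg) * (animal Km / human Km)
        return animal_dose_mg_per_kg * KM[species] / KM["human"]

    print(f"{human_equivalent_dose(100, 'mouse'):.1f} mg/kg")   # 100 mg/kg in mice -> ~8 mg/kg

That factor-of-twelve gap between a mouse dose and its human equivalent is exactly the sort of thing the resveratrol extrapolations turn on.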

But it seems that the FASEB paper referenced in the first paragraph, which has been cited hundreds of times since 2008, may be overstating its conclusions. For example, it says that "BSA normalization of doses must be used to determine safe starting doses of new drugs because initial studies conducted in humans, by definition, lack formal allometric comparison of the pharmacokinetics of absorption, distribution, and elimination parameters", and cites its reference 13 for support. But when you go to that reference, you find that paper's authors concluding with things like this:

The customary use of BSA in dose calculations may contribute to the omission of these factors, give a false sense of accuracy and introduce error. It is questionable whether all current cancer treatment strategies are near optimal, or even ethical. BSA should be used for allometric scaling purposes in phase I clinical trials, as the scaling of toxicity data from animals is important for selecting starting doses in man, but the gradual discontinuation of BSA-based dosing of cytotoxic drugs in clinical practice is seemingly justified.

Citing a paper for support that flatly disagrees with your conclusions gets some points for bravado, but otherwise seems a bit odd. And there are others - that reference that I linked to in the second paragraph above, under "taken some flak", is cited in the FASEB paper as its reference 17, as something to do with choosing between various BSA equations. And it does address that, to be sure, but in the context of wondering whether the whole BSA technique has any clinical validity at all.

This is currently being argued out over at PubPeer, and it should be interesting to see what comes of it. I'll be glad to hear from pharmacokinetics and clinical research folks to see what they make of the whole situation.

Comments (17) + TrackBacks (0) | Category: Pharmacokinetics | The Scientific Literature