About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
July 9, 2014
Yesterday's post on yet another possible Alzheimer's blood test illustrates, yet again, that understanding statistics is not a strength of most headline writers (or most headline readers). I'm no statistician myself, but I have a healthy mistrust of numbers, since I deal with the little rotters all day long in one form or another. Working in science will do that to you: every result, ideally, is greeted with the hearty welcoming phrase of "Hmm. I wonder if that's real?"
A reliable source for the medical headline folks is the constant flow of observational studies. Eating broccoli is associated with this. Chocolate is associated with that. Standing on your head is associated with something else. When you see these sorts of stories in the news, you can bet, quite safely, that you're not looking at the result of a controlled trial - one cohort eating broccoli while hanging upside down from their ankles, another group eating it while being whipped around on a carousel, while the control group gets broccoli-shaped rice puffs or eats the real stuff while being duct-taped to the wall. No, it's hard to get funding for that sort of thing, and it's not so easy to round up subjects who will stay the course, either. Those news stories are generated by people who've combed through large piles of data, from other studies, looking for correlations.
And those correlations are, as far as anyone can tell, usually spurious. Have a look at the 2011 paper by Young and Karr to that effect (here's a PDF). If you go back and look at the instances where observational effects in nutritional studies have been tested by randomized, controlled trials, the track record is not good. In fact, it's so horrendous that the authors state baldly that "There is now enough evidence to say what many have long thought: that any claim coming from an observational study is most likely to be wrong."
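It's worth seeing just how easily that kind of spurious correlation arises. Here's a minimal simulation (my own illustration, not anything from the Young and Karr paper): generate one completely random "health outcome" for 30 subjects and 200 completely random "dietary exposures", then count how many exposures clear the usual p < 0.05 bar for correlation. The variable names and the 200-food setup are invented for the sketch.

```python
import math
import random

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient, no dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(42)
n_subjects, n_foods = 30, 200

# Pure noise on both sides: nothing here is really associated with anything.
outcome = [random.gauss(0, 1) for _ in range(n_subjects)]
foods = [[random.gauss(0, 1) for _ in range(n_subjects)] for _ in range(n_foods)]

# For n = 30, |r| > 0.36 corresponds roughly to p < 0.05 (two-tailed).
hits = sum(1 for f in foods if abs(pearson_r(f, outcome)) > 0.36)
print(f"{hits} of {n_foods} random 'foods' correlate 'significantly' with the outcome")
```

By chance alone you should expect around ten "significant" associations out of 200, each one a perfectly publishable-looking broccoli headline.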
They draw the analogy between scientific publications and manufacturing lines, in terms of quality control. If you just inspect the final product rolling off the line for defects, you're doing it the expensive way. You're far better off breaking the whole flow into processes and considering each of those in turn, isolating problems early and fixing them, so you don't make so many defective products in the first place. In the same way, Young and Karr have this to say about the observational study papers:
Consider the production of an observational study: Workers – that is, researchers – do data collection, data cleaning, statistical analysis, interpretation, writing a report/paper. It is a craft with essentially no managerial control at each step of the process. In contrast, management dictates control at multiple steps in the manufacture of computer chips, to name only one process control example. But journal editors and referees inspect only the final product of the observational study production process and they release a lot of bad product. The consumer is left to sort it all out. No amount of educating the consumer will fix the process. No amount of teaching – or of blaming – the worker will materially change the group behaviour.
They propose a process control for any proposed observational study that looks like this:
Step 0: Data are made publicly available. Anyone can go in and check it if they like.
Step 1: The people doing the data collection should be totally separate from the ones doing the analysis.
Step 2: All the data should be split, right at the start, into a modeling group and a group used for testing the hypothesis that the modeling suggests.
Step 3: A plan is drawn up for the statistical treatment of the data, but using only the modeling data set, and without the response that's being predicted.
Step 4: This plan is written down, agreed on, and not modified as the data start to come in. That way lies madness.
Step 5: The analysis is done according to the protocol, and a paper is written up if there's one to be written. Note that we still haven't seen the other data set.
Step 6: The journal reviews the paper as is, based on the modeling data set, and they agree to do this without knowing what will happen when the second data set gets looked at.
Step 7: The second data set gets analyzed according to the same protocol, and the results of this are attached to the paper in its published form.
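The heart of the protocol is Step 2, the up-front split. A minimal sketch of what that looks like in practice (function names and the subject-record stand-ins are my own invention, not anything Young and Karr specify):

```python
import random

def split_for_replication(records, frac_model=0.5, seed=2014):
    """Step 2: split the data up front into a modeling set and a
    held-out testing set, before any analysis plan exists."""
    rng = random.Random(seed)          # fixed seed: the split is recorded, not redone
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * frac_model)
    return shuffled[:cut], shuffled[cut:]

records = list(range(1000))            # stand-ins for subject records
modeling, testing = split_for_replication(records)

# Step 4: the analysis plan is frozen before anyone looks at `testing`.
# Steps 5-7 then run that same frozen protocol on both halves.
assert len(modeling) == 500 and len(testing) == 500
assert not set(modeling) & set(testing)   # no subject appears in both sets
```

The point of fixing and recording the seed is that the split itself becomes part of the public record (Step 0): anyone can verify that the held-out set was never touched during modeling.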
Now that's a hard-core way of doing it, to be sure, but wouldn't we all be better off if something like this were the norm? How many people would have the nerve, do you think, to put their hypothesis up on the chopping block in public like this? But shouldn't we all?
Category: Clinical Trials | Press Coverage
April 7, 2014
Here's a good test for whatever news outlets you might be using for biotech information. How are they handling Pfizer's release of palbociclib information from the AACR meeting over the weekend?
Do a news search for the drug's name, and you'll see headline after headline. Many of them include the phrase "Promising Results". And from one standpoint, those words are justified. The drug showed a near-doubling in progression-free survival (PFS) when added to the standard of care, and you'd think that that has to be good. But a first analysis of overall survival (OS) shows no statistically significant improvement.
Now, how can that be? One possibility is that the drug helps hold advanced breast cancer back, until a population of cells breaks through - and when they do, it's a very fast-moving bunch indeed. Pfizer, for its part, is certainly hoping that further collection of data will start to show a real OS effect. They're going to need to - Avastin's provisional approval for breast cancer was based on earlier PFS numbers, which did not hold up when OS data came in. And that approval was revoked, as it should have been. Now, Avastin also had side effect issues, and quality-of-life issues, so these cases aren't directly comparable. But the FDA really wants to see a survival benefit, and that's what a new cancer drug really should offer. "You'll die at the same time, but with fewer tumors, and out more money" is not an appealing sales pitch. This issue has come up several times before, with other drugs, and it will come up again.
You'd think that a PFS effect like palbociclib's should translate into a real survival benefit, and as more data are added, it may well. But it's surely not going to be as impressive as people had hoped for, or it would have been apparent in the data we have. So take a look at the stories you're reading on the drug: if they mention this issue, good. If they just talk about what a promising drug for breast cancer palbociclib is, then that reporter (and that news outlet) is not providing the full story. (Here's one that does).
Update: there is an ongoing Phase III that's more specifically looking at overall survival. Its results will be awaited with great interest. . .
Category: Cancer | Press Coverage
February 25, 2014
Here's a nice look at why you should always think about the source of the financial and business information you read. It details the response to a recent Pfizer press release about palbociclib, a CDK inhibitor that's in late clinical trials.
Someone at The Wall Street Journal wrote that it had "the potential. . .to transform the standard of care for post-menopausal women with ER+ and HER2- advanced breast cancer." Problem is, that phrase was lifted directly out of the press release itself (and sure sounds like it), and you really would hope for better from the WSJ. What we're seeing here is actually Pfizer's own spin on the (as yet unpresented) results of the PALOMA-1 clinical trial. Everything a company says at this point will be couched in terms of "could" and "has the potential" and "we hope", and will come with one of those paragraphs at the end about "forward-looking statements". When it comes to the first statements about clinical trials results, if there are no numbers, there is nothing to talk about.
Paul Raeburn, the Knight Science Journalism blog author who picked up on this, also found that someone at the AP (and others) went for Pfizer's spin, too:
The problem is that this story was covered by business reporters rather than medical reporters, who by and large are too smart to fall for a company's claim about a drug without seeing the evidence presented, reviewed, and debated.
The further problem is that because they are so smart, medical writers mostly declined to cover this story. Which left the business writers out there alone, telling the story the company wanted them to tell.
Well, "medical writer" is a broad term, and believe me, there are some slackjaws in that crowd, too. But point taken - anyone who's been paying attention, or anyone who's willing to spend a few minutes on Google, should have realized that Pfizer is trying to make the case for accelerated approval of palbociclib, especially after the recent failure of dacomitinib and strong competition from Novartis in exactly the same therapeutic space.
Pfizer, of course, is not going to come out and talk about how delighted they are about the Phase II results unless they can back that up with something. I hope that palbociclib bowls people over - a new therapy for breast cancer would be good news. But we haven't seen the data yet, and data are all that will (or should) make pulses race over at the FDA. So I think that the Pfizer press release was worth noting, but stories like the Fierce Biotech one linked in the paragraph above are the way to do it. Put the news in context - don't just reword the press release.
Category: Cancer | Press Coverage
January 30, 2014
This morning I heard reports of formaldehyde being found in Charleston, West Virginia water samples as a result of the recent chemical spill there. My first thought, as a chemist, was "You know, that doesn't make any sense". A closer look confirmed that view, and led me to even more dubious things about this news story. Read on - there's some chemistry for a few paragraphs, and then near the end we get to the eyebrow-raising stuff.
The compound that spilled was (4-methylcyclohexane)methanol, abbreviated as 4-MCHM. That's its structure over there.
For the nonchemists in the audience, here's a chance to show how chemical nomenclature works. Those lines represent bonds between atoms, and if an atom isn't labeled with its own letter, it's a carbon (this compound has only one labeled atom, that O for oxygen). These sorts of carbons take four bonds each, which means that there are a number of hydrogens bonded to them that aren't shown. You'd add one, two, or three hydrogens as needed to take each one up to four bonds.
The six-membered ring in the middle is "cyclohexane" in organic chemistry lingo. You'll note two things coming off it, at opposite ends of the ring. The small branch is a methyl group (one carbon), and the other one is a methyl group substituted with an alcohol (OH). The one-carbon alcohol compound (CH3OH) is methanol, and the rules of chemical naming say that the "methanol-like" part of this structure takes priority, so it's named as a methanol molecule with a ring stuck to its carbon. And that ring has another methyl group, which means that its position needs to be specified. The ring carbon that has the "methanol" gets numbered as #1 (priority again), so the one with the methyl group, counting over, is #4. So this compound's full name is (4-methylcyclohexane)methanol.
I went into that naming detail because it turns out to be important. This spill, needless to say, was a terrible thing that never should have happened. Dumping a huge load of industrial solvent into a river is a crime in both the legal and moral senses of the word. Early indications are that negligence had a role in the accident, which I can easily believe, and if so, I hope that those responsible are prosecuted, both for justice to be served and as a warning to others. Handling industrial chemicals involves a great deal of responsibility, and as a working chemist it pisses me off to see people doing it so poorly. But this accident, like any news story involving any sort of chemistry, also manages to show how little anyone outside the field understands anything about chemicals at all.
I say that because among the many lawsuits being filed, there are some that show (thanks, Chemjobber!) that the lawyers appear to believe that the chemical spill was a mixture of 4-methylcyclohexane and methanol. Not so. This is a misreading of the name, a mistake that a non-chemist might make because the rest of the English language doesn't usually build up nouns the way organic chemistry does. Chemical nomenclature is way too logical and cut-and-dried to be anything like a natural language; you really can draw a complex compound's structure just by reading its name closely enough. This error is a little like deciding that a hairdryer must be a device made partly out of hair.
I'm not exaggerating. The court filing, by the law firm of Thompson and Barney, says explicitly:
30. The combination chemical 4-MCHM is artificially created by combining methylclyclohexane (sic) with methanol.
31. Two component parts of 4-MCHM are methylcyclohexane and methanol which are both known dangerous and toxic chemicals that can cause latent dread disease such as cancer.
Sure thing, guys, just like the two component parts of dogwood trees are dogs and wood. Chemically, this makes no sense whatsoever. Now, it's reasonable to ask if 4-MCHM can chemically degrade to methanol and 4-methylcyclohexane. Without going into too much detail, the answer is "No". You don't get to break carbon-carbon bonds that way, not without a lot of energy. If you ran the chemical (at high temperature) through some sort of catalytic cracking reactor at an oil refinery, you might be able to get something like that to happen (although I'd expect other things as well, probably all at the same time), but otherwise, no. For the same sorts of reasons, you're not going to be able to get formaldehyde out of this compound, either, not without similar conditions. Air and sunlight and water aren't going to do it, and if bacteria and fungi metabolize it, I'd expect things like (4-methylcyclohexane)carboxaldehyde and (4-methylcyclohexane)carboxylic acid, among others. I would not expect them to break off that single-carbon alcohol as formaldehyde.
So where does all this talk of formaldehyde come from? Well, one way that formaldehyde shows up is from oxidation of methanol, as shown in that reaction (this time I've drawn in all the hydrogens). This is, in fact, one of the reasons that methanol is toxic. In the body, it gets oxidized to formaldehyde, and that gets oxidized right away to formic acid, which shuts down an important enzyme. Exposure to formaldehyde itself is a different problem. It's so reactive that most cancers associated with exposure to it are in the upper respiratory tract; it doesn't get any further.
As that methanol oxidation reaction pathway shows, the body actually has ways of dealing with formaldehyde exposure, up to a point. In fact, it's found at low levels (around 20 to 30 nanograms/milliliter) in things like tomatoes and oranges, so we can assume that these exposure levels are easily handled. I am not aware of any environmental regulations on human exposure to orange juice or freshly cut tomatoes. So how much formaldehyde did Dr. Scott Simonton find in his Charleston water sample? Just over 30 nanograms per milliliter. Slightly above the tomato-juice level (27 ng/mL). For reference, the lowest amount that can be detected is about 6 ng/mL. Update: and the amount of formaldehyde in normal human blood is about 1 microgram/mL, which is over thirty times the levels that Simonton says he found in his water samples. This is produced by normal human metabolism (enzymatic removal of methyl groups and other reactions). Everyone has it. And another update: the amount of formaldehyde in normal human saliva can easily be one thousand times that in Simonton's water samples, especially in people who smoke or have cavities. If you went thousands of miles away from this chemical spill, found an untouched wilderness and had one of its natives spit in a collection vial, you'd find a higher concentration of formaldehyde.
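The arithmetic behind those comparisons is worth laying out in one place. A quick sketch using only the figures cited above (the variable names are mine; all numbers come straight from the text):

```python
# All figures as cited above, converted to a common unit (ng/mL).
water_sample  = 30         # Simonton's Charleston water sample, ng/mL
tomato_juice  = 27         # fresh tomato juice, ng/mL
detection_lim = 6          # approximate detection limit, ng/mL
blood_normal  = 1 * 1000   # ~1 microgram/mL in normal human blood = 1000 ng/mL

print(f"Water sample vs. tomato juice: {water_sample / tomato_juice:.2f}x")
print(f"Normal blood vs. water sample: {blood_normal / water_sample:.0f}x")
```

Run the numbers and the water sample comes out about 1.1 times the tomato-juice level, while ordinary human blood carries over thirty times what was found in the water.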
But Simonton is a West Virginia water quality official, is he not? Well, not in this capacity. As this story shows, he is being paid in this matter by the law firm of Thompson and Barney to do water analysis. Yes, that's the same law firm that thinks that 4-MCHM is a mixture with methanol in it. And the water sample that he obtained was from the Vandalia Grille in Charleston, the owners of which are defendants in that Thompson and Barney lawsuit that Chemjobber found.
So let me state my opinion: this is a load of crap. The amounts of formaldehyde that Dr. Simonton states he found are within the range of ozonated drinking water as it is, and just above those of fresh tomato juice. These are levels that have never been shown to be harmful in humans. His statements about cancer and other harm coming to West Virginia residents seem to me to be irresponsible fear-mongering. The sort of irresponsible fear-mongering that someone might do if they're being paid by lawyers who don't understand any chemistry and are interested in whipping up as much panic as they can. Just my freely offered opinions. Do your own research and see what you think.
Update: I see that actual West Virginia public health officials agree.
Another update: I've had people point out that the mixture that spilled may have contained up to 1% methanol. But see this comment for why this probably doesn't have any bearing on the formaldehyde issue. Update, Jan 31: Here's the MSDS for the "crude MCHM" that was spilled. The other main constituent (4-methoxymethylcyclohexane)methanol is also unlikely to produce formaldehyde, for the same reasons given above. The fact remains that the levels reported (and sensationalized) by Dr. Simonton are negligible by any standard.
Category: Chemical News | Current Events | Press Coverage | Toxicology
January 20, 2014
Here's a long article from the Raleigh News and Observer (part one and part two) on the Eaton/Feldheim/Franzen dispute in nanoparticles, which some readers may already be familiar with (I haven't covered it on the blog myself). The articles are clearly driven by Franzen's continued belief that research fraud has been committed, and the paper makes the most of it.
The original 2004 publication in Science claimed that RNA solutions could influence the crystal form of palladium nanoparticles, which opened up the possibility of applying the tools of molecular biology to catalysts and other inorganic chemistry applications. Two more papers in JACS extended this to platinum and looked at in vitro evolutionary experiments. But even by 2005, Franzen's lab (which had been asked to join the collaboration between Eaton and Feldheim, who were by then at Colorado and a startup company) was generating disturbing data: the original hexagonal crystals (a very strange and interesting form for palladium) weren't pure palladium at all - on an elemental basis, they were mostly carbon. Later work showed that they were unstable crystals of (roughly) Pd(dba)3, with solvated THF. And they were produced just as well in the negative control experiments, with no RNA added at all.
N. C. State investigated the matter, and the committee agreed that the results were spurious. But they found Feldheim guilty of sloppy work, rather than fraud, saying he should have checked things out more thoroughly. Franzen continued to feel as if justice hadn't been done, though:
In fall 2009, he spent $1,334 of his own money to hire Mike Tadych, a Raleigh lawyer who specializes in public records law and who has represented The News & Observer. In 2010, the university relented and allowed Franzen into the room where the investigation records were locked away.
Franzen found the lab notebooks, which track experiments and results. As he turned the pages, he recognized that Gugliotti kept a thorough and well-organized record.
“I found an open-and-shut case of research fraud,” Franzen said.
The aqueous solution mentioned in the Science article? The experiments routinely used 50 percent solvent. The experiments only produced the hexagonal crystals when there was a high level of solvent, typically 50 percent or more. It was the solvent creating the hexagonal crystals, not the RNA.
On Page 43 of notebook 3, Franzen found what he called a “smoking gun.”
(Graduate student Lina) Gugliotti had pasted four images of hexagonal crystals, ragged around the edges. The particles were degrading at room temperature. The same degradation was present in other samples, she noted.
The Science paper claimed the RNA-templated crystals were formed in aqueous solution with 5% THF and were stable. NC State apparently offered to revoke Gugliotti's doctorate (and another from the group), but the article says that the chemistry faculty objected, saying that the professors involved should be penalized, not the students. The university isn't commenting, saying that an investigation by the NSF is still ongoing, but Franzen points out that it's been going on for five years now, a delay that has probably set a record. He's published several papers characterizing the palladium "nanocrystals", though, including this recent one with one of Eaton and Feldheim's former collaborators and co-authors. And there the matter stands.
It's interesting that Franzen pursued this all the way to the newspaper (known when I lived in North Carolina by its traditional nickname of the Nuisance and Disturber). He's clearly upset at having joined what looked like an important and fruitful avenue of research, only to find out - rather quickly - that it was based on sloppy, poorly-characterized results. And I think what really has him furious is that the originators of the idea (Feldheim and Eaton) have tried, all these years, to carry on as if nothing was wrong.
I think, though, that Franzen is having his revenge whether he realizes it or not. It's coming up on ten years now since the original RNA nanocrystal paper. If this work were going to lead somewhere, you'd think that it would have led somewhere by now. But it doesn't seem to be. The whole point of the molecular-biology-meets-materials-science aspect of this idea was that it would allow a wide variety of new materials to be made quickly, and from the looks of things, that just hasn't happened. I'll bet that if you went back and looked up the 2005 grant application for the Keck foundation that Eaton, Feldheim (and at the time, Franzen) wrote up, it would read like an alternate-history science fiction story by now.
Category: Chemical News | Press Coverage | The Dark Side | The Scientific Literature
December 18, 2013
To go along with those nominations for worst press releases of the year, here's a roundup of stinkers in the biomedical field. And they do reek. I got some of these in my in-box as well, and I probably got even more of them than I remember. Sad to say, PR material (or at least the automated list variety) gets a very brief look from me. If there's a personal note to it, showing that some thought went into the distribution, the odds go up. But even then, I get people pitching me on all-natural coconut cure water and the like, apparently laboring under the idea that it's just the thing that the readership here would like to hear about. (Those sometimes get "You don't seem to have ever actually looked at the site. . ." replies from me).
As for the automated stuff, once in a while I'll actually click through to the release itself, but most of the time, I can tell from the subject line that it's not something that I need to be spending any effort on. My e-mail address has crept on to more and more lists with time, and it's amusing, in a grim way, to see the releasebots sending me automated PR notes on, say, the morning of Thanksgiving and other big news days like that (and no, that didn't appear to be an outlet outside the US in that case - I looked it over at the time, wondering who could be silly enough to be sending it). I'll keep an eye out on Christmas and New Year's for the latest news.
Category: Press Coverage
December 17, 2013
In the same spirit as Adam Feuerstein's "Worst Biotech CEO" nominations from the other day, here's Michael Eisen asking what the worst scientific press releases of the year were. Most Overhyped and Most Egregious Failure to Cite Earlier Work are two especially hard categories to win. If you have some examples that particularly got under your skin this year, head on over.
Category: Press Coverage
October 11, 2013
The British press (and to a lesser extent, the US one) was full of reports the other day about some startling breakthrough in Alzheimer's research. We could certainly use one, but is this it? What would an Alzheimer's breakthrough look like, anyway?
Given the complexity of the disease, and the difficulty of extrapolating from its putative animal models, I think that the only way you can be sure that there's been a breakthrough in Alzheimer's is when you see things happening in human clinical trials. Until then, things are interesting, or suggestive, or opening up new possibilities, what have you. But in this disease, breakthroughs happen in humans.
This latest news is nowhere close. That's not to say it's not very interesting - it certainly is, and it doesn't deserve the backlash it'll get from the eye-rolling headlines the press wrote for it. The paper that started all this hype looked at mice infected with a prion disease, which led inexorably to neurodegeneration and death. They seem to have significantly slowed that degenerative cascade (details below), and that really is a significant result. The mechanism behind this, the "unfolded protein response" (UPR) could well be general enough to benefit a number of misfolded-protein diseases, which include Alzheimer's, Parkinson's, and Huntington's, among others. (If you don't have access to the paper, this is a good summary).
The UPR, which is a highly conserved pathway, senses an accumulation of misfolded proteins inside the endoplasmic reticulum. If you want to set it off, just expose the cells you're studying to Brefeldin A; that's its mechanism. The UPR has two main components: a shutdown of translation (and thus further protein synthesis), and an increase in chaperones to try to get the folding pathways back on track. (If neither of these does the trick, things will eventually shunt over to apoptosis, so the UPR can be seen as an attempt to avoid having the apoptotic detonator switch set off too often.)
Shutting down translation causes cell cycle arrest, as well it might, and there's a lot of evidence that it's mediated by PERK, the Protein kinase RNA-like Endoplasmic Reticulum Kinase. The team that reported this latest result had previously shown that two different genetic manipulations of this pathway could mediate prion disease in what I think is the exact same animal model. If you missed the wild excited headlines when that one came out, well, you're not alone - I don't remember there being any. Is it that when something comes along that involves treatment with a small molecule, it looks more real? We medicinal chemists should take our compliments where we can get them.
That is the difference between that earlier paper and this new one. It uses a small-molecule PERK inhibitor (GSK2606414), whose discovery and SAR is detailed here. And this pharmacological PERK inhibition recapitulated the siRNA and gain-of-function experiments very well. Treated mice did show behavioral improvement, and this really does look quite solid; it establishes the whole PERK end of the UPR as a very interesting field to work in.
The problem is, getting a PERK inhibitor to perform in humans will not be easy. That GSK inhibitor, unfortunately, has side effects that killed it as a development compound. PERK also seems to be a key component of insulin secretion, and in this latest study, the team did indeed see elevated blood glucose and pronounced weight loss, to the point that treated mice eventually had to be sacrificed. Frustratingly, PERK inhibition might actually be a way to treat insulin resistance in peripheral tissue, so if you could just keep an inhibitor out of the pancreas, you might be in business. Good luck with that. I can't imagine how you'd do it.
But there may well be other targets in the PERK-driven pathways that are better arranged for us, and that, I'd think, is where the research is going to swing next. This is a very interesting field, with a lot of promise. But those headlines! First of all, prion disease is not exactly a solid model for Alzheimer's or Parkinson's. Since this pathway works all the way back at the stage of protein misfolding, it might be just the thing to uncover the similarities in the clinic, but that remains to be proven in human trials. There are a lot of things that could go wrong, many of which we probably don't even realize yet. And as just detailed above, the specific inhibitor being used here is strictly a tool compound all the way - there's no way it can go into humans, as some of the news stories got around to mentioning in later paragraphs. Figuring out something that can is going to take a significant amount of effort, and many years of work. Headlines may be in short supply along the way.
Category: Press Coverage | The Central Nervous System
January 3, 2013
You may have seen some "wonder drug" news stories over the holiday break about compounds targeting p53 - many outlets picked up this New York Times story. The first paragraph probably got them:
For the first time ever, three pharmaceutical companies are poised to test whether new drugs can work against a wide range of cancers independently of where they originated — breast, prostate, liver, lung. The drugs go after an aberration involving a cancer gene fundamental to tumor growth. Many scientists see this as the beginning of a new genetic age in cancer research.
Now, to read that, you might think we're talking mutated p53, which is indeed found in a wide variety of cancers. It's the absolute first thing you think of when you think of a defective protein that's strongly associated with cancer. And everyone has been trying to target it for years and years now, for just that reason, but without too much success. If you know drug development, you might have seen this article and done what I did - immediately read on wondering who the heck it was with a broad-based p53 therapy and how you missed it.
That's when you find, though, that this is p53 and MDM2. MDM2 is one of those Swiss-army-knife proteins that interacts with a list of other important regulatory proteins as long as your leg. (Take a look at the last paragraph of that Wikipedia link and you'll see what I mean). Its relationship with p53 has been the subject of intense research for many years.