About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com. Twitter: Dereklowe

In the Pipeline

Category Archives

July 25, 2014

The Antibiotic Gap: It's All of the Above

Posted by Derek

Here's a business-section column at the New York Times on the problem of antibiotic drug discovery. To those of us following the industry, the problems of antibiotic drug discovery are big pieces of furniture that we've lived with all our lives; we hardly even notice if we bump into them again. You'd think that readers of the Times or other such outlets would have come across the topic a few times before, too, but there must always be a group for which it's new, no matter how many books and newspaper articles and magazine covers and TV segments are done on it. It's certainly important enough - there's no doubt that we really are going to be in big trouble if we don't keep up the arms race against the bacteria.

This piece takes the tack of "If drug discovery is actually doing OK, where are the new antibiotics?" Here's a key section:

Antibiotics face a daunting proposition. They are not only becoming more difficult to develop, but they are also not obviously profitable. Unlike, say, cancer drugs, which can be spectacularly expensive and may need to be taken for life, antibiotics do not command top dollar from hospitals. What’s more, they tend to be prescribed for only short periods of time.

Importantly, any new breakthrough antibiotic is likely to be jealously guarded by doctors and health officials for as long as possible, and used only as a drug of last resort to prevent bacteria from developing resistance. By the time it became a mass-market drug, companies fear, it could be already off patent and subject to competition from generics that would drive its price down.

Antibiotics are not the only drugs getting the cold shoulder, however. Research on treatments to combat H.I.V./AIDS is also drying up, according to the research at Yale, mostly because the cost and time required for development are increasing. Research into new cardiovascular therapies has mostly stuck to less risky “me too” drugs.

This mixes several different issues, unfortunately, and if a reader doesn't follow the drug industry (or medical research in general), then they may well not realize this. (And that's the most likely sort of reader for this article - people who do follow such things have heard all of this before). The reason that cardiovascular drug research seems to have waned is that we already have a pretty good arsenal of drugs for the most common cardiovascular conditions. There are a huge number of options for managing high blood pressure, for example, and they're mostly generic drugs by now. The same goes for lowering LDL: it's going to be hard to beat the statins, especially generic Lipitor. But there is a new class coming along targeting PCSK9 that is going to try to do just that. This is a very hot area of drug development (as the author of the Times column could have found without much effort), although the only reason it's so big is that PCSK9 is the only pathway known that could actually be more effective at lowering LDL than the statins. (How well it does that in the long term, and what the accompanying safety profile might be, are the subject of ongoing billion-dollar efforts). The point is, the barriers to entry in cardiovascular are, by now, rather high: a lot of good drugs are known that address a lot of the common problems. If you want to go after a new drug in the space, you need a new mechanism, like PCSK9 (and those are thin on the ground), or you need to find something that works against some of the unmet needs that people have already tried to fix and failed (such as stroke, a notorious swamp of drug development which has swallowed many large expeditions without a trace).

To be honest, HIV is a smaller-scale version of the same thing. The existing suite of therapies is large and diverse, and keeps the disease in check in huge numbers of patients. All sorts of other mechanisms have been tried as well, and found wanting in the development stage. If you want to find a new drug for HIV, you have a very high entry barrier again, because pretty much all of the reasonable ways to attack the problem have already been tried. The focus now is on trying to "flush out" latent HIV from cells, which might actually lead to a cure. But no one knows yet if that's feasible, how well it will work when it's tried, or what the best way to do it might be. There were headlines on this just the other day.

The barriers to entry in the antibiotic field are similarly high, and that's what this article seems to have missed completely. All the known reasonable routes of antibiotic action have been thoroughly worked over by now. As mentioned here the other day, if you just start screening your million-compound libraries against bacteria to see what kills them, you will find a vast pile of stuff that will kill your own cells, too, which is not what you want, and once you've cleared those out, you will find a still-pretty-vast pile of compounds that work through mechanisms that we already have antibiotics targeting. Needles in haystacks have nothing on this.
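
(For those who like to see that triage written out, here's a minimal sketch of the filtering funnel in Python - the compound records, cutoffs, and "known mechanism" labels are invented for illustration, not anyone's actual screening workflow.)

```python
# Hypothetical triage of hits from a whole-cell antibacterial screen.
# All data, field names, and cutoffs below are illustrative assumptions;
# "mechanism" stands in for what follow-up work eventually determines.

KNOWN_MECHANISMS = {"cell wall synthesis", "ribosome", "DNA gyrase", "folate pathway"}

def triage(hits, cytotox_cutoff_uM=10):
    """Drop compounds that also kill mammalian cells, then those acting
    through mechanisms we already have antibiotics for."""
    survivors = []
    for cpd in hits:
        if cpd["mammalian_cytotox_uM"] <= cytotox_cutoff_uM:   # kills your own cells too
            continue
        if cpd["mechanism"] in KNOWN_MECHANISMS:               # already-covered target
            continue
        survivors.append(cpd)
    return survivors

hits = [
    {"id": "C-0001", "mammalian_cytotox_uM": 2,  "mechanism": "membrane disruption"},
    {"id": "C-0002", "mammalian_cytotox_uM": 60, "mechanism": "ribosome"},
    {"id": "C-0003", "mammalian_cytotox_uM": 85, "mechanism": "unknown"},
]
print(triage(hits))   # the "novel and non-toxic" pile is usually very short
```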

In fact, a lot of not-so-reasonable routes have been worked over, too. I keep sending people to this article, which is now seven years old and talks about research efforts even older than that. It's the story of GlaxoSmithKline's exhaustive antibiotics research efforts, and it also tells you how many drugs they got out of it all in the end: zip. Not a thing. From what I can see, the folks who worked on this over the last fifteen or twenty years at AstraZeneca could easily write the same sort of article - they've published all kinds of things against a wide variety of bacterial targets, and I don't think any of it has led to an actual drug.

This brings up another thing mentioned in the Times column. Here's the quote:

This is particularly striking at a time when the pharmaceutical industry is unusually optimistic about the future of medical innovation. Dr. Mikael Dolsten, who oversees worldwide research and development at Pfizer, points out that if progress in the 15 years until 2010 or so looked sluggish, it was just because it takes time to figure out how to turn breakthroughs like the map of the human genome into new drugs.

Ah, but bacterial genomes were sequenced before the human one was (and they're simpler, at that). Keep in mind also that proof-of-concept for new targets can be easier to obtain in bacteria (if you manage to find any chemical matter, that is). I well recall talking with a bunch of people in 1997 who were poring over the sequence data for a human pathogen, fresh off the presses, and their optimism about all the targets that they were going to find in there, and the great new approaches they were going to be able to take. They tried it. None of it worked. Over and over, none of it worked. People had a head start in this area, genomically speaking, with an easier development path than many other therapeutic areas, and still nothing worked.

So while many large drug companies have exited antibiotic research over the years, not all of them have. But the ones that stayed have poured effort and money, over and over, down a large drain. Nothing has come out of the work. There are a number of smaller companies in the space as well, for whom even a small success would mean a lot, but they haven't been having an easy time of it, either.

Now, one thing the Times article gets right is that the financial incentives for new antibiotics are a different thing entirely than the rest of the drug discovery world. Getting one of these new approaches in LDL or HIV to work would at least be highly profitable - the PCSK9 competitors certainly are working on that basis. Alzheimer's is another good example of an area that has yielded no useful drugs whatsoever despite ferocious amounts of effort, but people keep at it because the first company to find a real Alzheimer's drug will be very well rewarded indeed. (The Times article says that this hasn't been researched enough, either, which makes me wonder what areas have been). But any great new antibiotic would be shelved for emergencies, and rightly so.

But that by itself is not enough to explain the shortage of those great new antibiotics. It's everything at once: the traditional approaches are played out, the genomic-revolution stuff has been tried, and the unpromising economics make the search for yet another approach that much harder.

Note: be sure to see the comments for perspectives from others who've also done antibiotic research, including some who disagree. I don't think we'll find anyone who says it's easy, but you never know.

Comments (56) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Infectious Diseases

July 21, 2014

The Hep C Field Gets Nastier By the Minute

Posted by Derek

What a mess there is in the hepatitis C world. Gilead is, famously, dominating the market with Sovaldi, whose price has set off all sorts of cost/benefit debates. The companies competing with them are scrambling to claim positions, and the Wall Street Journal says that AbbVie is really pulling out all the stops. Try this strategy on for size:

In a lawsuit filed in February, AbbVie noted it patented the idea of combining two of Gilead's drugs—Sovaldi and an experimental drug called ledipasvir, which Gilead plans to combine into one treatment—and is therefore entitled to monetary damages if Gilead brings the combination pill to market. Legally, AbbVie can't market Sovaldi or ledipasvir because it doesn't have the patents on the underlying compounds. But it is legal for companies to seek and obtain patents describing a particular "method of use" of products that don't belong to them.

Gilead disputes the claims of AbbVie and the other companies. A spokeswoman said Gilead believes it has the sole right to commercialize Sovaldi and products containing Sovaldi's active ingredient, known as sofosbuvir. An AbbVie spokeswoman said the company believes Gilead infringes its patents, and that it stands behind the validity and enforceability of those patents.

You don't see that very often, and it's a good thing. Gilead is, naturally, suing AbbVie over this as well, saying that AbbVie has knowingly misrepresented to the USPTO that they invented the Gilead therapies. I'm not sure how that's going to play out: AbbVie didn't have to invent the drugs to get a method-of-use patent on them. At the same time, I don't know what sort of enablement AbbVie's patent claims might have behind them, given that these are, well, Gilead's compounds. The company is apparently claiming that a "sophisticated computer model" allows them to make a case that these combinations would be the effective ones, but I really don't know if that's going to cut it (and in fact, I sort of hope it doesn't). But even though I'm not enough of a patent-law guy to say either way, I'm enough of one to say, with great confidence, that this is going to be a very expensive mess to sort out. Gilead's also in court with Merck (and was with Idenix before Merck bought them), and with Roche, and will probably be in court with everyone else before all this is over.

This whole situation reminds me of one of those wildlife documentaries set around a shrinking African watering hole. A lot of lucrative drugs have gone off patent over the last few years, and a lot of them are heading that way soon. So any new therapeutic area with a lot of commercial promise is going to get a lot of attention, and start a lot of fighting. Legal battles aren't cheap on the absolute scale, but on the relative scale of the potential profits, they are. So why not? Claim this, claim that, sue everybody. It might work; you never know. Meanwhile, we have a line forming on the right of ticked-off insurance companies and government health plans, complaining about the Hep C prices, and while they wait they can watch the companies involved throwing buckets of slop on each other and hitting everyone over the head with lawsuits. What a spectacle.

Comments (43) + TrackBacks (0) | Category: Business and Markets | Infectious Diseases | Patents and IP | Why Everyone Loves Us

May 29, 2014

The Price of Sovaldi

Posted by Derek

John LaMattina has a good post about Gilead, their HCV drug Sovaldi, and the price that the company is charging. Most readers here will be familiar with the situation: Sovaldi has a very high cure rate for hepatitis C, but in the US it costs $84,000 per patient. Insurance companies, in some cases, are pushing back at that price, but LaMattina says to run the numbers, putting this question to the head of the insurance trade association:

Sovaldi is a drug that cures hepatitis C. It actually SAVES the healthcare system money in that it will prevent patients from dying from liver cancer, cirrhosis and liver failure. Liver transplants alone can cost $300,000 and then patients must take anti-rejection drugs that cost $40,000 per year for the rest of their lives. The price of Sovaldi, while high now, will drop, first when competitive drugs in late stage development reach the market and then when the drug is generic. Given all of this, what price for Sovaldi would have been acceptable to you – $60,000, $40,000, $10,000? What price are you willing to pay for innovation?

He didn't get an answer to that one, as you can well imagine. But it's a worthwhile question. There are, I'm sure, hepatitis C patients who die of other things before they ever start costing the kinds of money that LaMattina correctly cites for liver transplants. I don't have those figures, but if anyone does, it's the insurance companies, and they may believe that Sovaldi is still not cost-effective. Or (and these are not mutually exclusive explanations) they may be pushing back because that's what they feel they have to do - that otherwise all sorts of companies will push up prices even more than they do already.
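
(To make the back-of-the-envelope version of that argument explicit, here's a small sketch using the figures from LaMattina's question. The progression probability and survival time are made-up placeholders, since only the insurers have the real actuarial numbers.)

```python
# Back-of-the-envelope payer math for a curative Hep C drug versus
# downstream liver-disease costs. The dollar figures come from the post;
# the probabilities and durations are invented placeholders.

SOVALDI_COURSE = 84_000          # per-patient course price
TRANSPLANT = 300_000             # one-time liver transplant cost
ANTI_REJECTION_PER_YEAR = 40_000 # lifelong anti-rejection drugs

def expected_downstream_cost(p_progression, years_on_antirejection):
    """Expected cost if the infection is left to run its course."""
    return p_progression * (TRANSPLANT + ANTI_REJECTION_PER_YEAR * years_on_antirejection)

# Hypothetical: 15% of untreated patients eventually need a transplant,
# then live 15 more years on anti-rejection therapy.
downstream = expected_downstream_cost(p_progression=0.15, years_on_antirejection=15)
print(f"Expected downstream cost: ${downstream:,.0f}")   # $135,000
print(f"Sovaldi course:           ${SOVALDI_COURSE:,.0f}")
# Whether the cure "saves money" hinges entirely on those assumed numbers.
```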

This is just another illustration of the walls that are closing in on the whole drug-discovery business - fewer drugs, higher costs to develop them, higher drug prices, more pushback from the payers. It's been clear for a long time that this can't go on forever, but what might replace it isn't clear (and probably won't be until the situation gets much tighter). I say that because although drug prices are surely going up, the insurance companies are still paying out. They complain, but they pay. We'll know that the real crisis is at hand when a new drug gets flatly rejected for reimbursement by everyone involved. But will that ever happen in quite that way? Keep in mind that drug companies carefully set their own prices according to what they think the market will bear. Gilead surely knew that their price for Sovaldi would be unpopular. But they probably also figured that it would hold.

Pretty much every other industry does this sort of thing, but Health Care Is Different, as always. I had a crack at explaining why I think that is here: in short, we think about health expenses differently than we think about almost any other expense, and I don't think that's ever going to change. But drug prices will continue to test the limits of the insurance companies to write the checks, as long as those checks keep getting written.

Comments (41) + TrackBacks (0) | Category: Drug Prices | Infectious Diseases

May 16, 2014

The Real Numbers on Tamiflu

Posted by Derek

I've been meaning to cover this controversy about Tamiflu (oseltamivir). The Cochrane group has reviewed all the clinical data obtainable on the drug's efficacy, and has concluded that it doesn't have much. That's in contrast to an earlier review they'd conducted in 2008, which said that, overall, the evidence was slightly positive.

But as Ben Goldacre details in that Guardian piece, a comment left on the Cochrane paper pointed out that the positive conclusions were almost entirely due to one paper. That one summarized ten clinical studies, but only two of the ten had ever appeared in the literature. And this sent the Cochrane Collaboration on a hunt to find the rest of the data, which turned out to be no simple matter:

First, the Cochrane researchers wrote to the authors of the Kaiser paper. By reply, they were told that this team no longer had the files: they should contact Roche. Here the problems began. Roche said it would hand over some information, but the Cochrane reviewers would need to sign a confidentiality agreement. This was tricky: Cochrane reviews are built around showing their working, but Roche's proposed contract would require them to keep the information behind their reasoning secret from readers. More than this, the contract said they were not allowed to discuss the terms of their secrecy agreement, or publicly acknowledge that it even existed. . .Then, in October 2009, the company changed tack. It would like to hand over the data, it explained, but another academic review on Tamiflu was being conducted elsewhere. Roche had given this other group the study reports, so Cochrane couldn't have them.

And so on and very much so on. Roche's conduct here appears shameful, and just the sort of thing that has lowered the public opinion of the entire pharma industry. And not just the public opinion: it's lowered the industry in the eyes of legislators and regulators, who have even more direct power to change the way pharma does business. Over the years, we've been seeing a particularly nasty Tragedy of the Commons - each individual company, when they engage in tactics like this to protect an individual drug, lowers the general standing of the industry a bit more, but no one company has the incentive to worry about that common problem. They have more immediate concerns.

So what about Tamiflu? After years of wrangling, the data finally emerged, and they're not all that impressive:

So does Tamiflu work? From the Cochrane analysis – fully public – Tamiflu does not reduce the number of hospitalisations. There wasn't enough data to see if it reduces the number of deaths. It does reduce the number of self-reported, unverified cases of pneumonia, but when you look at the five trials with a detailed diagnostic form for pneumonia, there is no significant benefit. It might help prevent flu symptoms, but not asymptomatic spread, and the evidence here is mixed. It will take a few hours off the duration of your flu symptoms.

I've never considered it much of a drug, personally, and that's without any access to all this hard-to-get data. One of the biggest raps on oseltamivir is that it has always appeared to be most effective if it could be taken after you've been infected, but before you know you're sick. That's not a very useful situation for the real world, since a person can come down with the flu any time at all during the winter. Goldacre again:

Roche has issued a press release saying it contests these conclusions, but giving no reasons: so now we can finally let science begin. It can shoot down the details of the Cochrane review – I hope it will – and we will edge towards the truth. This is what science looks like. Roche also denies being dragged to transparency, and says it simply didn't know how to respond to Cochrane. This, again, speaks to the pace of change. I have no idea why it was withholding information: but I rather suspect it was simply because that's what people have always done, and sharing it was a hassle, requiring new norms to be developed. That's reassuring and depressing at the same time.

That sounds quite likely. No one wants to be the person who sets a new precedent in dealing with clinical data, especially not at a company the size of Roche, so what we might have here is yet another tragedy of the commons: it would have been in the company's best interest to have not gone through this whole affair, but there may have been no one person there who felt as if they were in any position to do something about it. When in doubt, go with the status quo: that's the unwritten rule, and the larger the organization, the stronger it holds. After all, if it's a huge, profitable company, the status quo clearly has a lot going for it, right? It's worked so far - who are you, or that guy over there, to think about rearranging it?

Comments (12) + TrackBacks (0) | Category: Clinical Trials | Infectious Diseases | Why Everyone Loves Us

May 15, 2014

The Daily Show on Finding New Antibiotics

Posted by Derek

A reader sent along news of this interview on "The Daily Show" with Martin Blaser of NYU. He has a book out, Missing Microbes, on the overuse of antibiotics and the effects on various microbiomes. And I think he's got a lot of good points - we should only be exerting selection pressure where we have to, not (for example) slapping triclosan on every surface because it somehow makes consumers feel "germ-free". And there are (and always have been) too many antibiotics dispensed for what turn out to be viral infections, for which they will, naturally, do no good at all and probably some harm.

But Dr. Blaser, though an expert on bacteria, does not seem to be an expert on discovering drugs to kill bacteria. I've generated a transcript of part of the interview, starting around the five-minute mark, which went like this:

Stewart: Isn't there some way, that, the antibiotics can be used to kill the strep, but there can be some way of rejuvenating the microbiome that was doing all those other jobs?

Blaser: Well, that's what we need to do. We need to make narrow-spectrum antibiotics. We have broad-spectrum, that attack everything, but we have the science that we could develop narrow-spectrum antibiotics that will just target the one organism - maybe it's strep, maybe it's a different organism - but then we need the diagnostics, so that somebody going to the doctor, they say "You have a virus" "You have a bacteria", if you have a bacteria, which one is it?

Stewart: Now isn't this where the genome-type projects are going? Because finding the genetic makeup of these bacteria, won't that allow us to target these things more specifically?

Blaser: Yeah. We have so much genomic information - we can harness that to make better medicine. . .

Stewart: Who would do the thing you're talking about, come up with the targeted - is it drug companies, could it, like, only be done through the CDC, who would do that. . .

Blaser: That's what we need taxes for. That's our tax dollars. Just like when we need taxes to build the road that everybody uses, we need to develop the drugs that our kids and our grandkids are going to use so that these epidemics could be stopped.

Stewart: Let's say, could there be a Manhattan Project, since that's the catch-all for these types of "We're going to put us on the moon" - let's say ten years, is that a realistic goal?

Blaser: I think it is. I think it is. We need both diagnostics, we need narrow-spectrum agents, and we have to change the economic base of how we assess illness in kids and how we treat kids and how we pay doctors. . .

First off, from a drug discovery perspective, a narrow-spectrum antibiotic, one that kills only (say) a particular genus of bacterium, has several big problems: it's even harder to discover than a broader-spectrum agent, its market is much smaller, it's much harder to prescribe usefully, and its lifetime as a drug is shorter. (Other than that, it's fine). The reasons for these are as follows:

Most antibiotic targets are enzyme systems peculiar to bacteria (as compared to eukaryotes like us), but such targets are shared across a lot of bacteria. They tend to be aimed at things like membrane synthesis and integrity (bacterial membranes are rather different than those of animals and plants), or target features of DNA handling that are found in different forms due to bacteria having no nuclei, and so on. Killing bacteria with mechanisms that are also found in human cells is possible, but it's a rough way to go: a drug of that kind would be similar to a classic chemotherapy agent, killing the fast-dividing bacteria (in theory) just before killing the patient.

So finding a Streptococcus-only drug is a very tall order. You'd have to find some target-based difference between those bacteria and all their close relatives, and I can tell you that we don't know enough about bacterial biochemistry to sort things out quite that well. Stewart brings up genomic efforts, and points to him for it, because that's a completely reasonable suggestion. Unfortunately, it's a reasonable suggestion from about 1996. The first complete bacterial genomes became available in the late 1990s, and have singularly failed to produce any new targeted antibiotics whatsoever. The best reference I can send people to is the GSK "Drugs For Bad Bugs" paper, which shows just what happened (and not just at GSK) to the new frontier of new bacterial targets. Update: see also this excellent overview. A lot of companies tried this, and got nowhere. It did indeed seem possible that sequencing bacteria would give us all sorts of new ways to target them, but that's not how it's worked out in practice. Blaser's interview gives the impression that none of this has happened yet, but believe me, it has.

The market for a narrow-spectrum agent would necessarily be smaller, by design, but the cost of finding it would (as mentioned above) be greater, so the final drug would have to cost a great deal per dose - more than health insurance would want to pay, given the availability of broad-spectrum agents at far lower prices. It could not be prescribed without positively identifying the infectious agent - which adds to the cost of treatment, too. Without faster and more accurate ways to do this (which Blaser rightly notes as something we don't have), the barriers to developing such a drug are even higher.

And the development of resistance would surely take such a drug out of usefulness even faster, since the resistance plasmids would only have to spread between very closely related bacteria, who are swapping genes at great speed. I understand why Blaser (and others) would like to have more targeted agents, so as not to plow up the beneficial microbiome every time a patient is treated, but we'd need a lot of them, and we'd need new ones all the time. This in a world where we can't even seem to discover the standard type of antibiotic.

And not for lack of trying, either. There's a persistent explanation for the state of antibiotic therapy that blames drug companies for supposedly walking away from the field. This has the cause and effect turned around. It's true that some of them have given up working in the area (along with quite a few other areas), but they left because nothing was working. The companies that stayed the course have explored, in great detail and at great expense, the problem that nothing much is working. If there ever was a field of drug discovery where the low-hanging fruit has been picked clean, it is antibiotic research. You have to use binoculars to convince yourself that there's any more fruit up there at all. I wish that weren't so, very much. But it is. Bacteria are hard to kill.

So the talk later on in the interview of spending some tax dollars and getting a bunch of great new antibiotics in ten years is, unfortunately, a happy fantasy. For one thing, getting a single new drug onto the market in only ten years from the starting pistol is very close to impossible, in any therapeutic area. The drug industry would be in much better shape if that weren't so, but here we are. In that section, Jon Stewart actually brings to life one of the reasons I have this blog: he doesn't know where drugs come from, and that's no disgrace, because hardly anyone else knows, either.

Comments (58) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Infectious Diseases

March 24, 2014

Google's Big Data Flu Flop

Posted by Derek

Some of you may remember the "Google Flu" effort, where the company was going to try to track outbreaks of influenza in the US by mining Google queries. There was never much clarification about what terms, exactly, they were going to flag as being indicative of someone coming down with the flu, but the hype (or hope) at the time was pretty strong:

Because the relative frequency of certain queries is highly correlated with the percentage of physician visits in which a patient presents with influenza-like symptoms, we can accurately estimate the current level of weekly influenza activity in each region of the United States, with a reporting lag of about one day. . .

So how'd that work out? Not so well. Despite a 2011 paper that seemed to suggest things were going well, the 2013 epidemic wrong-footed the Google Flu Trends (GFT) algorithms pretty thoroughly.

This article in Science finds that the real-world predictive power has been pretty unimpressive. And the reasons behind this failure are not hard to understand, nor were they hard to predict. Anyone who's ever worked with clinical trial data will see this one coming:

The initial version of GFT was a particularly problematic marriage of big and small data. Essentially, the methodology was to find the best matches among 50 million search terms to fit 1152 data points. The odds of finding search terms that match the propensity of the flu but are structurally unrelated, and so do not predict the future, were quite high. GFT developers, in fact, report weeding out seasonal search terms unrelated to the flu but strongly correlated to the CDC data, such as those regarding high school basketball. This should have been a warning that the big data were overfitting the small number of cases—a standard concern in data analysis. This ad hoc method of throwing out peculiar search terms failed when GFT completely missed the nonseasonal 2009 influenza A–H1N1 pandemic.

The Science authors have a larger point to make as well:

“Big data hubris” is the often implicit assumption that big data are a substitute for, rather than a supplement to, traditional data collection and analysis. Elsewhere, we have asserted that there are enormous scientific possibilities in big data. However, quantity of data does not mean that one can ignore foundational issues of measurement and construct validity and reliability and dependencies among data. The core challenge is that most big data that have received popular attention are not the output of instruments designed to produce valid and reliable data amenable for scientific analysis.

The quality of the data matters very, very much, and quantity is no substitute. You can make a very large and complex structure out of toothpicks and scraps of wood, because those units are well-defined and solid. You cannot do the same with a pile of cotton balls and dryer lint, not even if you have an entire warehouse full of the stuff. If the individual data points are squishy, adding more of them will not fix your analysis problem; it will make it worse.
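
(A quick simulation makes the 50-million-terms-versus-1152-data-points problem concrete: search enough random, meaningless series against a short target and some of them will correlate impressively by pure chance. Everything below is synthetic, standard-library Python, and has no connection to the actual GFT methodology.)

```python
# Spurious-correlation demo: many random "search term" series, few data points.
import random
import statistics

random.seed(0)
N_POINTS = 20        # a short "flu activity" history
N_TERMS = 50_000     # candidate predictor series to search through

flu = [random.gauss(0, 1) for _ in range(N_POINTS)]

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

best = max(
    corr(flu, [random.gauss(0, 1) for _ in range(N_POINTS)])
    for _ in range(N_TERMS)
)
print(f"Best correlation among {N_TERMS:,} random series: {best:.2f}")
# Typically lands around 0.75 or higher, despite having no predictive value at all.
```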

Since 2011, GFT has missed (almost invariably on the high side) for 108 out of 111 weeks. As the authors show, even low-tech extrapolation from three-week-lagging CDC data would have done a better job. But then, the CDC data are a lot closer to being real numbers. Something to think about next time someone's trying to sell you on a Big Data project. Only trust the big data when the little data are trustworthy in turn.

Update: a glass-half-full response in the comments.

Comments (18) + TrackBacks (0) | Category: Biological News | Clinical Trials | Infectious Diseases

March 18, 2014

Another DNA-Barcoded Program From GSK

Posted by Derek

Two more papers have emerged from GSK using their DNA-encoded library platform. I'm always interested to see how this might be working out. One paper is on compounds for the tuberculosis target InhA, and the other is aimed at a lymphocyte protein-protein target, LFA-1. (I've written about this sort of thing previously here, here, and here).

Both of these have some interesting points - I'll cover the LFA-1 work in another post, though. InhA, for its part, is the target of the well-known tuberculosis drug isoniazid, and it has had (as you'd imagine) a good amount of attention over the years, especially since it's not the cleanest drug in the world (although it sure beats having tuberculosis). It's known to be a prodrug for the real active species, and there are also some nasty resistant strains out there, so there's certainly room for something better.
[Image: InhA.png (aminoproline scaffold from the DNA-encoded library)]
In this case, the GSK group apparently screened several of their DNA-encoded libraries against the target, but the paper only details what happened with one of them, the aminoproline scaffold shown. That would seem to be a pretty reasonable core, but it was one of 22 diamino acids in the library. R1 was 855 different reactants (amide formation, reductive amination, sulfonamides, ureas), and R2 was 857 of the same sorts of things, giving you, theoretically, a library of over 16 million compounds. (If you totaled up the number across the other DNA-encoded libraries, I wonder how many compounds this target saw in total?) Synthesizing a series of hits from this group off the DNA bar codes seems to have worked well, with one compound hitting in the tens of nanomolar range. (The success rate of this step is one of the things that those of us who haven't tried this technique are very interested in hearing about).
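
(The combinatorial arithmetic behind that "over 16 million" figure is simple enough to check directly; the numbers below are just the ones quoted from the paper.)

```python
# Theoretical size of the DNA-encoded library described above.
scaffolds = 22      # diamino acid cores, including the aminoproline shown
r1_reactants = 855  # amides, reductive aminations, sulfonamides, ureas
r2_reactants = 857  # the same sorts of building blocks at the second position

library_size = scaffolds * r1_reactants * r2_reactants
print(f"{library_size:,} encoded compounds")   # 16,120,170
```
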
[Image: InhA2.png (InhA inhibitor structure from the screen)]
They even pulled out an InhA crystal structure with the compound shown, which really makes this one sound like a poster-child example of the whole technique (and might well be why we're reading about it in J. Med. Chem.) The main thing not to like about the structure is that it has three amides in it, but this is why one runs PK experiments, to see if having three amides is going to be a problem or not. A look at metabolic stability showed that it probably wasn't a bad starting point. Modifying those three regions gave them a glycine methyl ester at P1, which had better potency in both enzyme and cell assays. When you read through the paper, though, it appears that the team eventually had cause to regret having pursued it. A methyl ester is always under suspicion, and in this case it w