About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
September 30, 2011
Now this is the sort of thing we don't need. Nature News reports on a court case that I missed, Classen Immunotherapies, Inc. v. Biogen IDEC et al. This is yet another run at the "can you patent natural phenomena?" argument, which seems to be an ever-upwelling spring for patent litigation:
The court upheld a lawsuit filed by Classen Immunotherapies of Baltimore, Maryland, against four biotechnology companies and a medical group, for infringing on a patent that covered the idea of trying to link infant vaccination with later immune disorders. A district court had thrown out the lawsuit, finding that the concept at the heart of the case amounted to an abstract idea that could not be patented. The appeals court found otherwise.
Beyond its complex particulars, the case sets "a troubling precedent", says James Bessen, a lawyer at the Boston University School of Law, Massachusetts, "because you're patenting something that's very broad". (The patents include the act of reading the published scientific literature and using it to create vaccination schedules that minimize immune disorders.)
Oh, I particularly like that last part. I have to congratulate whoever thought of putting it into a patent application, in sort of the same spirit that one has to congratulate the Mongol hordes on their horsemanship. The facts of the case are a bit more complicated, though, as is always the case in the law. Originally, a district court had indeed ruled (by summary judgment) that the Classen claims were unpatentable, and the CAFC had gone along with that. But the Supreme Court vacated the decision, in light of a 2010 case (Bilski v. Kappos, if you're into this stuff), and remanded it back down to the CAFC. The guidance seems to have been to not make any broad statements about patentability, if possible, and to consider each case narrowly. On reconsideration, they now find that two of the three claims under dispute are "patent eligible", but they're leaving the door wide open for them to be contested on other grounds:
We conclude that the claimed subject matter of two of the three Classen patents is eligible under §101 to be considered for patenting, although we recognize that the claims may not meet the substantive criteria of patentability as set forth in §102, §103, and §112 of Title 35. However, questions of patent validity are not before us on this appeal. . .
If you want more, there's a detailed look at the decision over at Patent Baristas.
But the broader problem is still with us. Prof. Bessen has just recently published a study of "patent trolls", the organizations that buy up as many (preferably broadly worded) patents as they can, and then go looking for people to sue. That business model (which seems to be more and more popular, damn it all) has always put me in mind of one of the characters in the old Pogo comic strip - when approached with a sure-fire way for him to make money in the advertising business, he responded with a panicked "No, no! I can always rob graves!"
Nature News is incorrect, though, when they imply that the biotech industry has been free of these things up until now. There may have been fewer of the real "non-producing entities", the firms that have nothing but a pile of bought-up patents and some hungry lawyers, but we have had our share of people trying to get rich by enforcing crazily over-broad patents. In the earlier days of this blog, I spent a good amount of time chronicling Ariad's attempt to sue everyone in sight over claims to the NF-kappa-B pathway that basically covered everything out to the asteroid belt. That one only took about eight full years of lawyering and who knows how much money and effort to be disposed of. And back in the 1990s, I recall that there was some guy with a patent claiming huge swaths of space around the concept of cell-based assays, who was shaking down everyone that he could find.
Some other high-tech areas are infested with this sort of thing. I find that I cannot, for example, listen to any presentation by Nathan Myhrvold - who is otherwise an intelligent and interesting person - without thinking about how he runs Intellectual Ventures, a patent-trolling firm in its purest form. As it happens, the Nature News piece mentions that Myhrvold's minions own hundreds of biotech patents. They claim that they have no plans to litigate in the field, but why on earth would they buy these things up, otherwise? And as a comment to the news article points out, that denial may be disingenuous, because IV's usual procedure is to set up some other entity to do the suing, after first having sold it the rights to the patents at issue. As that extremely interesting NPR story goes on to detail, those other entities tend to be in Marshall, Texas:
The office was in a corridor where all the other doors looked exactly the same — locked, nameplates over the door, no light coming out. It was a corridor of silent, empty offices with names like "Software Rights Archive," and "Bulletproof Technology of Texas."
It turns out a lot of those companies in that corridor, maybe every single one of them, is doing exactly what Oasis Research is doing. They appear to have no employees. They are not coming up with new inventions. The companies are in Marshall, Texas because they are filing lawsuits for patent infringement.
So this is how people make money these days: not by inventing anything themselves, but by buying up other people's work - dubious or valid, it doesn't much matter - and suing people. From empty offices over in ArkLaTex country. And we biopharma people could well find ourselves spending more and more of our own time and money fighting this stuff off, because hey, why not?
Category: Patents and IP
September 29, 2011
Here's an excellent post-mortem on the whole XMRV chronic-fatigue controversy, which I think almost everyone can agree is now at an end. The latest results are from a large blinded effort to detect the virus across a variety of patient samples (and across a number of labs) - and it's negative. The paper that started all the furor has been partially retracted. As far as I can see, the story is over.
But Judy Mikovits of the Whittemore Peterson Institute for Neuro-Immune Disease (WPI) in Nevada, whose work started all this off, is still a believer. And so, as you might imagine, are many patients:
Mikovits has become something of a savior in the community of people with CFS (also known as myalgic encephalomyelitis, or ME), who for decades have endured charges that the disease is psychosomatic. The 2009 Science paper shouted out that CFS may well have a clear biological cause, and, in turn, raised hopes of effective treatments and even a cure. The new findings give her “great pause,” yet she suspects they're but a speed bump. “I haven't changed my thinking at all,” she says. And she worries that the Blood Working Group conclusions will confuse people with CFS, some of whom got wind of the results early in the blogosphere and contacted her in a panic. “I had 15 suicidal patients call me last week,” she says.
In scientific circles, Mikovits has developed a less flattering reputation. Critics have accused her and her backers of stubbornly wedding themselves to a thesis and moving the goalposts with each study that challenges their conclusions. Even disease advocates who welcome the attention XMRV has brought to CFS believe the time has come to put this line of research to rest. “It's hard to say that this has not received a fair appraisal,” says Kimberly McCleary, president of the CFIDS Association of America, a patient group in Charlotte, North Carolina.
At the worst extreme, you get things like this. Note that that post's comment section filled up with people doubting, very vocally, that any such thing was going on, and sometimes hinting at big conspiracies to keep the truth from being heard. I'll be a bit disappointed if some more of that doesn't attach to this post as well.
But while I can see why patients in this area are frustrated beyond words, and desperately hoping for something to help them, they're going to have to deal with what every scientist deals with: the indifference of the universe to what we want it to provide. Blind alleys there are beyond counting, wasted effort there is beyond measuring, in trying to understand a disease. We're used to, as humans, seeing agency and design when something seems so well hidden and so complex - in this case, malevolent design. But just as I reject the intelligent design hypothesis to explain what looks benevolent, I reject it for what sometimes looks like an evil practical joke: the perverse difficulties of biomedical research.
Category: Infectious Diseases
If you want to know what it was like during the height of the genomics frenzy, here's a quote for you, from an old Adam Feuerstein post. Return with me to the year 2000:
During his presentation Wednesday, Mark Levin, the very enthusiastic CEO of Millennium Pharmaceuticals (MLNM), remarked that his company's gene-to-patient technology would turbo-charge drug-development productivity to levels never before seen in the industry. Just how productive? Well, he predicted that by 2005, Millennium would be pushing one or two new drugs every year onto the market, while keeping the pipeline brimming with at least five experimental drugs entering human clinical trials every year.
Note that I'm not trying to make fun of Millennium, or of Mark Levin (still helping to found new companies at Third Rock Ventures). A lot of people were talking the same way back then - although, to be sure, Feuerstein notes that many people in the audience had trouble believing this one, too. But there's no doubt that a wild kind of optimism was in the air then. (Here's another Levin interview from that era).
That's as opposed to the wild kind of pessimism that's in the air these days. Here's hoping that it turns out to be as strange, in retrospect, as this earlier craziness. And yes, I know that the current reasons for pessimism are, in fact, rather bonier and more resilient than the glowing clouds of the genomics era were. But it's still possible to overdo it. Right?
Category: Drug Industry History
September 28, 2011
The last time I talked here at length about Andy Grove, ex-Intel CEO, I was rather hard on him, not that I imagine that I ruined his afternoon much. And in the same vein, I recently gave his name to the fallacy that runs like this: other high-tech R&D sector X is doing better than the pharmaceutical business is. Therefore the drug industry should do what those other businesses do, and things will be better. In Grove's original case, X was naturally "chip designers like Intel", and those two links above will tell you what I think of that analogy. (Hint: not too much).
But Grove has an editorial in Science with a concrete suggestion about how things could be done differently in clinical research. Specifically, he's looking at the ways that large outfits like Amazon manage their customer databases, and wonders about applying that to clinical trial management. Here's the key section:
Drug safety would continue to be ensured by the U.S. Food and Drug Administration. While safety-focused Phase I trials would continue under their jurisdiction, establishing efficacy would no longer be under their purview. Once safety is proven, patients could access the medicine in question through qualified physicians. Patients' responses to a drug would be stored in a database, along with their medical histories. Patient identity would be protected by biometric identifiers, and the database would be open to qualified medical researchers as a “commons.” The response of any patient or group of patients to a drug or treatment would be tracked and compared to those of others in the database who were treated in a different manner or not at all. These comparisons would provide insights into the factors that determine real-life efficacy: how individuals or subgroups respond to the drug. This would liberate drugs from the tyranny of the averages that characterize trial information today.
Now, that is not a crazy idea, but I think it still needs some work. The first issue that comes to mind is heterogeneity of the resulting data. One of the tricky parts of Phase II (and especially Phase III) trials is trying to make sure that all the patients, scattered as they often are across various trial sites, are really being treated and evaluated in exactly the same way. Grove's plan sort of swerves around that issue, in not-a-bug-but-a-feature style. I worry, though, that rather than getting away from his "tyranny of averages", this might end up swamping things that could be meaningful clinical signals, losing them in a noisy pile of averaged-out errors. The easier the dosing protocols, and the more straightforward the clinical workup, the better it'll go for this method.
That leads right in to the second question: who decides which patients get tested? That's another major issue for any clinical program (and is, in fact, one of the biggest differences between Phase II and Phase III, as you open up the patient population). There are all sorts of errors to make here. On one end of the scale, you can be too restrictive, which will lead the regulatory agencies to wonder if your drug will have any benefit out in the real world (or to just approve you for the same narrow slice you tested in). If you make that error in Phase II, then you'll go on to waste your money in Phase III when your drug has to come out of the climate-controlled clinical greenhouse. But on the other end, you can ruin your chances for statistical significance by going too broad too soon. Monitoring and enforcing such things in a wide-open plan like Grove's proposal could be tough. (But that may not be what he has in mind. From the sound of it, wide-open is the key part of the whole thing, and as long as a complete medical history and record is kept of each patient, then let a thousand flowers bloom).
A few other questions: what, under these conditions, constitutes an endpoint for a trial? That is, when do you say "Great! Enough good data!" and go to the FDA for approval? On the other side, when do you decide that you've seen enough because things aren't working - how would a drug drop out of this process? And how would drugs be made available for the whole process, anyway? Wouldn't this favor the big companies even more, since they'd be able to distribute their clinical candidates to a wider population? (And wouldn't there be even more opportunities for unethical behavior, in trying to crowd out competitor compounds in some manner?)
Even after all those objections, I can still see some merit in this idea. But the details of it, which slide by very quickly in Grove's article, are the real problems. Aren't they always?
Category: Clinical Trials | Regulatory Affairs
September 27, 2011
Now here is a fascinating piece of work for anyone who's invested in the small pharma/biotech sector. The authors looked over the stocks of companies developing cancer therapies, ones that have had critical Phase III results or regulatory decisions announced over the past ten years. And they looked at the trading in their stocks, for 120 days before and after the announcements. What, do you suppose, did they discover in this exercise?
Uh-huh. You have surely guessed correctly:
The mean stock price for the 120 trading days before a phase III clinical trial announcement increased by 13.7% for companies that reported positive trials and decreased by 0.7% for companies that reported negative trials. . .Trends in company stock prices before the first public announcement differ for companies that report positive vs negative trials. This finding has important legal and ethical implications for investigators, drug companies, and the investment industry.
Indeed it does. Interestingly, the authors did not find such a split around announcements of FDA regulatory decisions, suggesting that insider trading there is not as big a problem as what goes on from inside the industry.
But wait - there's more, as they say in the infomercials. In a follow-up commentary on the article, Mark Ratain of Chicago and Adam Feuerstein of TheStreet.com (who certainly has seen his share of market shenanigans) find another striking disparity in the data:
This analysis demonstrated a remarkable difference between companies that had positive and negative announcements. Specifically, the median market capitalization was approximately 80-fold greater for the companies with positive trials vs companies with negative trials. . .Furthermore, there were no positive trials among the 21 micro-cap companies (ie, companies with less than $300 million market capitalization), whereas 21 of 27 studies reported by the larger companies analyzed (greater than $1 billion capitalization) were positive.
That makes sense, as they point out: these small-cap stocks had such low valuations for a reason. Investors thought that the drugs weren't going to work, and in most cases, no larger companies had been willing to put up money on them, either. The oncology Phase III success rate for larger companies is comparable to therapeutic areas in the rest of the industry; the Phase III success rate for micro-cap oncology companies is catastrophic.
Category: Business and Markets | Cancer | The Dark Side
There's an op-ed piece over at Pharmalot that I think that many readers here will find interesting. It's by Daniel Hoffman, formerly employed in pharma, it appears, and now a consultant. He's writing about the waves of layoffs the industry has experienced over the last few years, but he's not talking so much about the people who are gone, as the ones who are left:
In addition to disrupting tens of thousands of lives, the substantial downsizing in pharma over the past two-and-a-half years has changed many companies for the worse. I previously wrote that the guidelines handed down from finance to HR have eliminated many of the more knowledgeable and experienced people at each layoff round because people over age 50 are among the first targets for separation packages. But the dysfunctional legacy is even more pernicious. The resulting culture has created a workforce that is almost entirely at odds with what pharma needs now.
What sort of workforce is that? Hoffman's take is that the people who have survived under these conditions are disproportionately those who don't rock the boat, who keep their heads down, and who keep the top management as unperturbed as possible:
Many of the people remaining in operations deliberately choose not to ask big or important questions, lest their colleagues perceive any fundamental doubt as a threat. The truly adept manage to avoid taking a position on even the most mundane matters, lest someone else equate perceptive questions with disloyalty. Some even find it wise to feign ignorance concerning the elephants in various rooms. The combination of such simulated ignorance, together with the genuine version among the inexperienced survivors, makes the task of determining the smartest guy in the room a purely theoretical exercise.
I think that these are tendencies built in to most large organizations, but it wouldn't surprise me a bit if the shakeups of the last few years have exacerbated them. Many people, when the pressure is on as hard as it's been, decide that the first thing they have to do is try to hang on to their job. Anything interesting and risky can wait until after the mortgage payment has cleared and the tuition checks have been written. The behaviors most associated with "Don't get laid off" are not the ones that are best associated with "Keep the company going", much less "Discover something new". That last set of behaviors, in fact, might be one of the first to go, along with the people who exemplify them.
Hoffman has an aggressively cynical take on the motives in other parts of large organizations - and while I wish I could say that he's completely wrong, there are indeed places - too many - that operate on these general principles:
. . .At the top, finance sets the strategic direction. The goal of finance, paramount to everything else, consists of keeping senior management in control of the company. Forget the blather about shareholder value, customers, the community and medicine for the people. Everyone outside the boardroom is the enemy. . .Reality for CFOs involves long-term product and business development approaches that would create several quarters of flat or negative earnings. In their doomsday scenario, that would prompt the board to replace management.
And that's the tricky part of capitalism. One of the philosophical reasons that I'm such a free-market kind of person is that I think that it works with human nature as it really is, without needing any magical-thinking schemes to suddenly transform or improve it. People tend to act in their own self-interest? Fine, let's use that to try to derive benefit for more than just one person at a time. But it goes without saying (or should) that not all self-interested actions can be so harvested, which is why I'll never be anything close to an anarcho-libertarian.
Philosophy aside, what we're seeing in some drug organizations is this sort of self-destruction. The fix they find themselves in leads to behavior that makes the problems worse, or at best does little to overcome them. This, taken down to its individual basis, is what Hoffman's piece is arguing. And although his editorial can also be fairly characterized as a bitter rant, that doesn't mean it isn't true. Or at least more true than it should be.
Category: Business and Markets | Drug Industry History
September 26, 2011
Predicting toxic drug effects in humans - now, that's something we could use more of. Plenty of otherwise viable clinical candidates go down because of unexpected tox, sometimes in terribly expensive and time-wasting ways. But predictive toxicology has proven extremely hard to realize, and it's not hard to see why: there must be a million things that can go wrong, and how many of them have we even heard of? And of the ones we have some clue about, how many of them do we have tests for?
According to Science, the folks at DARPA are soliciting proposals for another crack at the idea. The plan is to grow a variety of human cell lines in small, three-dimensional cultures, all on the same chip or platform, and test drug candidates across them. Here are the details. In keeping with many other DARPA initiatives, the goals are rather ambitious:
DARPA is soliciting innovative research proposals to develop an in vitro platform of engineered tissue constructs that reproduces the interactions that drugs or vaccines have with human physiological systems. The tissue constructs must be of human origin and engineered in such a way as to reproduce the functions of specific organs and physiological systems. All of the following physiological systems must be functionally represented on the platform by the end of the program: circulatory, endocrine, gastrointestinal, immune, integumentary, musculoskeletal, nervous, reproductive, respiratory, and urinary.
The request goes on to specify that these cell cultures need to be able to interact with each other in a physiologically relevant manner, that distribution and membrane barrier effects should be taken into account and reproduced as much as possible, and that the goal is to have a system that can run for up to four weeks during a given test. And they're asking for the right kinds of validation:
Proposers should present a detailed plan for validating integrated platform performance. At the end of each period of performance, performers are expected to estimate the efficacy, toxicity, and pharmacokinetics of one or more drugs/vaccines that have already been administered to humans. Proposers should choose test compounds from each of the four categories listed below based on published clinical studies. These choices should also be relevant to the physiological systems resident on the platform at the time of testing and should include at least one test compound that was thought to be safe on the basis of preclinical testing but later found to be toxic in humans.
i. Drugs/vaccines known to be safe and effective
ii. Drugs/vaccines known to be safe and ineffective
iii. Drugs/vaccines known to be unsafe, but effective
iv. Drugs/vaccines known to be unsafe and ineffective
Now, that project is going to keep some people off the streets and out of trouble, for sure. It's a serious engineering challenge, right off the bat, and there are a lot of very tricky questions to get past even once you've got those issues worked out. One of the biggest is which cells to use. You can't just say "Well, some kidney cells, sure, and some liver, yeah, can't do without those, and then some. . ." That's not how it works. Primary cells from tissue can just die off on you when you try to culture them like this, and if they survive, they (almost invariably) lose many of the features that made them special in their native environment. Immortalized cell lines are a lot more robust, but they've been altered a lot more, too, and can't really be taken as representative of real tissue, either. One possibility that's gotten a lot of attention is the use of induced stem cell lines, and I'd bet that a lot of the DARPA proposals will be in this area.
So, let's stipulate that it's possible - that's not a small assumption, but it's not completely out of the question. How large a test set would be appropriate before anyone puts such a system to serious use? Honestly, I'd recommend pretty much the entire pharmacopeia. Why not? Putting in things that are known to be trouble is a key step, but it's just as crucial that we know the tendency of such an assay to kill compounds that should actually get through. Given our failure rates, we don't need to lose any more drug candidates without a good reason.
We're not going to have to worry about that for a while, though. DARPA is asking for people to submit proposals for up to five years of funding, contingent on milestones, and I still cannot imagine that anyone will be able to get the whole thing working in that short a period. And I think that there's still no way that any system like this will catch everything, of course (and no one seems to be promising that, fortunately). A system sufficient to do that would be like building your own in vitro human, which is a bit out of our reach. No, I'd definitely settle for just an improved look into possible tox problems - every little bit will definitely help - but only if it doesn't set off too many false alarms in the process.
Category: Toxicology
September 23, 2011
I have just enough time today to link to this - which is simultaneously a nasty prank to pull on someone, and (for anyone who's been to grad school), completely hilarious. A message went out over a mail server list in Europe, after a post-doc position in Germany had been posted. It, um, clarified the nature of the position:
I am desperately searching for eager victims - postdocs or PhD students - mine or other supervisors' - to make my workhorses and to plunder ideas from. . .I cannot do research myself because I'm narrow-minded, rigid-brained, and petty. Therefore, I have to recruit desperate scientists from anywhere in the world and then manage (harangue) them into submission. The smarter you are relative to me, the more I will hate you. . .
It goes on in that vein for a while, winding up with the usual boilerplate legal language: "I am entitled to success because supremacy is my birthright". Read it, cast your mind back to your own grad student/post-doc days, and imagine the temptation to do the same!
Category: Academia (vs. Industry)
September 22, 2011
As promised, today we have a look at a possible bombshell in longevity research and sirtuins. Again. This field is going to make a pretty interesting book at some point, but it's one that I'd wait a while to start writing, because the dust is hanging around pretty thickly.
Some background: in 1999, the Guarente lab at MIT reported that Sir2 was a longevity gene in yeast. In 2001, they extended these results to C. elegans nematodes, lengthening their lifespan by between 15 and 50% by overexpressing the gene. And in 2004, Stephen Helfand's lab at Brown reported similar results in Drosophila fruit flies. Since then, the sirtuin field has been the subject of more publications than anyone would care to count. The sirtuins are involved, it turns out, in regulating histone acetylation, which regulates gene expression, so there aren't many possible effects they might have that you can rule out. Like many longevity-associated pathways, they seem to be tied up somehow with energy homeostasis and response to nutrients, and one of the main hypotheses has been that they're somehow involved in the (by now irrefutable) life-extending effects of caloric restriction.
As an aside, you may have noticed that almost every news item about something that extends life gets tied to caloric restriction somehow. There are two good reasons for that - one is, as stated, that a lot of longevity seems - reasonably enough - to be linked to metabolism, and the other is that caloric restriction is by far the most solid of all the longevity effects that can be shown in animal models.
I'd say that the whole sirtuin story has split into two huge arguments: (1) arguments about the sirtuin genes and enzymes themselves, and (2) arguments about the compounds used to investigate them, starting with resveratrol and going through the various sirtuin activators reported by Sirtris, both before and after their (costly) acquisition by GlaxoSmithKline. That division gets a bit blurry, since it's often those compounds that have been used to try to unravel the roles of the sirtuin enzymes, but there are ways to separate the controversies.
I've followed the twists and turns of argument #2, and it has had plenty of those. It's not safe to summarize, but if I had to, I'd say that the closest thing to a current consensus is that (1) resveratrol is a completely unsuitable molecule as an example of a clean sirtuin activator, (2) the earlier literature on sirtuin activation assays is now superseded, because of some fundamental problems with the assay techniques, and (3) agreement has not been reached on what compounds are suitable sirtuin activators, and what their effects are in vivo. It's a mess, in other words.
But what about argument #1, the more fundamental one about what sirtuins are in the first place? That's what these latest results address, and boy, do they ever not clear things up. There has been persistent talk in the field that the original model-organism life extension effects were difficult to reproduce, and now two groups (those of David Gems and Linda Partridge) at University College, London (whose labs I most likely walked past last week) have re-examined these. They find, on close inspection, that they cannot reproduce them. The effects in the LG100 strain of C. elegans appear to be due to another background mutation in the dyf family, which is also known to have effects on lifespan. Another mutant strain, NL3909, shows a similar problem: its lifespan decreases on outcrossing, although the Sir2 levels remain high. A third long-lived strain, DR1786, has a duplicated section of its genome that includes Sir2, but knocking that down with RNA interference has no effect on its lifespan. Taken together, the authors say, the correlation of Sir2 with lifespan in nematodes appears to be an artifact.
How about the fruit flies? This latest paper reproduces the lifespan effects, but finds that they seem to be due to the expression system that was used to increase dSir2 levels. When the same system is used to overexpress other genes, lifespan is also increased. They then used another expression vector to crank up the fly Sir2 by over 300%, but those flies did not show an extension in lifespan, even under a range of different feeding conditions. They also went the other way, examining mutants with their sirtuin expression knocked down by a deletion in the gene. Those flies show no different response to caloric restriction, indicating that Sir2 isn't part of that effect, either - in direct contrast to the effects reported in 2004 by Helfand.
It's important to keep in mind that these aren't the first results of this kind. Others had reported problems with sirtuin effects on lifespan (or sirtuin ties to caloric restriction effects) in yeast, and as mentioned, this had been the stuff of talk in the field for some time. But now it's all out on the table, a direct challenge.
So how are the original authors taking it? Guarente, who to his credit has been right out in the spotlight throughout the whole story, has a new paper of his own, published alongside the UCL results. They partially agree, saying that there does indeed appear to be an unlinked mutation in the LG100 strain that's affecting lifespan. But they disagree that sirtuin overexpression has no effect. Instead of their earlier figure of 15 to 50%, they're now claiming a 10 to 14% lifespan increase - not as dramatic, for sure, but the key part for the argument is that it's not zero.
And as for the fruit flies, Helfand at Brown is pointing out that in 2009, his group reported a totally different expression system to increase dSir2, which also showed longevity effects (see their Figure 2 in that link). This work, he's noting, is not cited in the new UCL paper, and from his tone in interviews, he's not too happy about that. That's leading to coverage from the "scientific feud!" angle - and it's not that I think that's inaccurate, but it's not the most important part of the story. (Another story with follow-up quotes is here).
So what are the most important parts? I'd nominate these:
1. Are sirtuins involved in lifespan extension, or not? And by that, I mean not only in model organisms, but are they subject to pharmacological intervention in the field of human aging?
2. What are the other effects of sirtuins, outside of aging? Diabetes, cancer, several other important areas touch on this whole metabolic regulation question: what are the effects of sirtuins in these?
3. What is the state of our suite of tools to answer these questions? Resveratrol may or may not do interesting things in humans or other organisms, but it's not a suitable tool compound to unravel the basic mechanisms. Do we have such compounds, from the reported Sirtris chemical matter or from other sources? And on the biology side, how useful are the reported overexpression and deletion strains of the various model organisms, and how confident are we about drawing conclusions from their behavior?
4. Getting more specific to drug discovery, are sirtuin regulator compounds drug candidates or not? Given the disarray in the basic biology, they're at the very least quite speculative. GlaxoSmithKline is the company most immediately concerned with this question, since they spent over $700 million to buy Sirtris, and have been spending money in the clinic ever since evaluating their more advanced chemical matter. And that brings up the last question. . .
5. What does GSK think of that deal now? Did they jump into an area of speculative biology too quickly? Or did they make a bold deal that put them out ahead in an important field?
I do not, of course, have answers to any of these. But the fact that we're still asking these questions ten years after the sirtuin story started tells you that this is both an important and interesting area, and a tricky one to understand.
+ TrackBacks (0) | Category: Aging and Lifespan | Biological News
September 21, 2011
This will be the subject of a longer post tomorrow, but I wanted to alert people to some breaking news in the sirtuin/longevity saga. It now appears that the original 2001 report of longevity effects of Sir2 in the C. elegans model, which was the starting gun of the whole story, is largely incorrect. That would help to explain the conflicting results in this area, wouldn't it? Topics for discussion in tomorrow's post will include, but not be limited to: what else do sirtuins do? Are those results reproducible? What can we now expect to come out of pharma research in the field? And what does GSK now think about its investment in Sirtris?
+ TrackBacks (0) | Category: Aging and Lifespan | Biological News
Now here's an odd reaction, done in an odd way. Organic chemists will all be familiar with the azide/acetylene cycloaddition to form triazoles. In its copper-catalyzed variant, it's become a sensation, and is used as a convenient linker to do all kinds of interesting things. The reverse reaction, taking a triazole back to the starting materials, just isn't feasible. If you heat up one of the triazoles enough to get it to do anything, which takes some pretty serious heat, it just gives you a handful of decomposition products.
But what if you grabbed each side of the ring and just pulled on it? A paper in Science does just that, through polymeric chains attached to each side. If you subject that to ultrasound, the cavitation bubbles that form are violent enough to pull and jerk the molecular chains around - and when they try that on a triazole-linked molecule, they can see reversion to the acetylene and the azide. This only happens with long-chain polymers - the effect increases with polymer molecular weight, and small-molecule analogs aren't cleaved at all. It also appears that the effect works best when the triazole is near the midpoint of the polymer, not out towards one end. These are just what you would expect for this sort of "mechanosynthesis", and strong evidence for the proposed effect.
This could lead to some rather unusual reactions being discovered. Some sort of tether that holds up to ordinary chemistry but cleaves under sonication might allow you to put on "mechanosynthetic handles" that you could then take off again, as if they were protecting groups. Silyl ethers, maybe? Which functional groups can take the stress, and which will pull apart to give something new?
+ TrackBacks (0) | Category: Chemical News
I've been meaning to mention this paper from John Hartwig (and co-worker Daniel Robbins), because it's just the sort of let's-find-something-new idea that I like. Hartwig has made a name in the field of organometallic catalysis, and is looking for new reactions. So how do you find new reactions?
Most published methods for the high-throughput discovery of catalysts evaluate one of the two catalyst-reactant dimensions. In other words, these methods have been used to examine either many catalysts for a single class of reaction or a single catalyst for many reactions. A two-dimensional approach in which many catalysts for many possible catalytic reactions are tested simultaneously would create a more efficient discovery platform if the reactants and products from such a system could be identified.
Well, this paper details a brutally straightforward technique for doing that. They take a list of seventeen reactants, all around the same rough molecular weight range, each of them with a single functional group. They put a mixture of all seventeen into every well of a 96-well plate. Then they take twelve ligands, dispensed one per column of the plate, and then they take eight different metal catalyst precursors and dispense those across the eight rows. And then they take the plate and heat it up.
Can't get much more straightforward than that, can you? But analyzing the wells by mass spec tells you some interesting things, and you can cover a lot of ground. Seventeen substrates, fifteen metal starting points, and 23 ligand (or lack of ligand) combinations gives you a look into tens of thousands of possible reactions. They simplified the mass spec analysis by combining samples for each row, then another set of samples for each column, so you only have to run 20 samples per plate to give you the X-Y coordinates of a well that did something. A test plate containing some combinations of known catalytic reactions showed the expected products in the right wells - and it showed some other reactions, too.
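The pooling arithmetic behind that is worth a quick sketch. This is a toy model of the row/column deconvolution idea as described above - my own illustration, not code from the paper, with the well contents and the product mass invented for the example. Pooling the 8 rows and the 12 columns of a 96-well plate gives 20 mass spec samples, and any product mass that shows up in exactly one row pool and one column pool points straight back to a single well:

```python
# Toy model of row/column pooled deconvolution on a 96-well plate.
# 8 row pools + 12 column pools = 20 mass spec samples per plate.

ROWS, COLS = 8, 12

def deconvolute(product_mass, row_pools, col_pools):
    """Return (row, col) coordinates of candidate wells: every well whose
    row pool AND column pool both contain the observed product mass."""
    hit_rows = [r for r, masses in enumerate(row_pools) if product_mass in masses]
    hit_cols = [c for c, masses in enumerate(col_pools) if product_mass in masses]
    return [(r, c) for r in hit_rows for c in hit_cols]

# Invented example: only well (2, 7) makes a product, of (made-up) mass 254.
plate = {(r, c): set() for r in range(ROWS) for c in range(COLS)}
plate[(2, 7)].add(254)

# Pool the plate: one combined sample per row, one per column.
row_pools = [set().union(*(plate[(r, c)] for c in range(COLS))) for r in range(ROWS)]
col_pools = [set().union(*(plate[(r, c)] for r in range(ROWS))) for c in range(COLS)]

print(deconvolute(254, row_pools, col_pools))  # → [(2, 7)]
```

The scheme only gets ambiguous if two wells in different rows and different columns happen to give the same product mass - then every row/column intersection comes back as a candidate, and those wells have to be rerun individually.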
Among those were several wells that indicated an alkyne/aniline addition reaction catalyzed by copper. This turned out to be a hydroamination reaction that no one had observed before. There was also a new product in several Ni-catalyzed wells - a set of deconvolution experiments narrowed that one down, and it turned out to be reaction of arylboronic acids with diphenylacetylene to give a triarylalkene - a reaction not previously catalyzed by such a cheap metal as nickel. And while most of the known reactions are syn, this one gives anti addition, with E/Z ratios that vary depending on the ligand used for the metal.
Not bad - two new reactions in what was, in the end, a pretty simple experiment. And any good chemist should be able to see the ways this protocol could be extended. For example:
This approach to reaction discovery holds considerable potential for purposes beyond those revealed in the current work. For example, this system could be used to explore reactions with additives, such as oxidants, reductants, acids, and bases, and to explore reactions of two substrates with a third component, such as carbon monoxide or carbon dioxide. It could also be used to examine the reactivity of a single class of ligand with various organic substrates and transition metal–catalyst precursors. Thus, we anticipate that this approach to reaction discovery will provide a general and adaptable platform suitable for use by a wide range of laboratories for the discovery of a variety of catalytic reactions.
There's going to be some criticism, though, that this is (a) obvious and (b) not elegant. I regard those as features, not bugs. Never be afraid of the obvious. And organometallic catalysis is so complicated that trying to elegantly reason your way right to the good parts is not always a productive use of your time. Do you want to look like a genius, or do you want to discover new chemistry?
+ TrackBacks (0) | Category: Chemical News
September 20, 2011
Continuing a sort of informal series here on China's research environment, a reader sent along this editorial from China Daily. That, of course, is an organ of the Chinese government, and its title is a rather pointed one: "Honest Research Needed":
An investigation by the Chinese Association of Scientists has revealed that only about 40 percent of the funds allocated for scientific research is used on the projects they are meant for. The rest is usually spent on things that have nothing to do with research. . .Besides, the degree of earnestness most scientists show in their research projects nowadays is questionable. Engaging in scientific research projects funded by the State has turned out to be an opportunity for some scientists to make money. There are examples of some scientists getting research funds because of their connections with officials rather than their innovation capacity.
This would seem to be part of a broader anticorruption movement on the part of the government, which (from all reports) is finding plenty of material to work with. But I still have to wonder how effective that's going to be. As long as it's the state that is the main source of funding, the source of all permissions, and the final judge on what's worthwhile, then corruption has both a strong incentive to exist and a clear leverage point from which to work.
If you subsidize something, you're going to get it. It may be that the Chinese government has subsidized, in too many cases, something that was billed as "research", without realizing that it was going to have those quotation marks around it.
+ TrackBacks (0) | Category: The Dark Side
I wrote last year about Foldit, a collaborative effort to work on protein structure problems that's been structured as an open-access game. Now the team is back with another report on how the project is going, and it's interesting stuff. The headlines have generally taken the "Computer Gamers Solve Incredible Protein Problem That Baffled Scientists!" line, but that's not exactly the full story.
The Foldit collaboration participated in the latest iteration of a regular protein-structure prediction challenge, CASP9. And their results varied - in the category of proteins with known structural homologs, for example, they didn't perform all that well. The players, it turned out, sort of over-worked the structures, and made a lot of unnecessary changes to the peripheral parts of the proteins. Another category took on proteins that have no identified structural homologs, a much harder problem. But that had its problems, too, which illustrate both the difficulties of the Foldit approach and protein modeling in general:
For prediction problems for which there were no identifiable homologous protein structures—the CASP9 Free Modeling category—Foldit players were given the five Rosetta Server CASP9 submissions (which were publicly available to other prediction groups) as starting points, along with the Alignment Tool. . .In this Free Modeling category, some of the shortcomings of the Foldit predictions became clear. The main problem was a lack of diversity in the conformational space explored by Foldit players because the starting models were already minimized with the same Rosetta energy function used by Foldit. This made it very difficult for Foldit players to get out of these local minima, and the only way for the players to improve their Foldit scores was to make very small changes ('tunneling' to the nearest local minimum) to the starting structures. However, this tunneling did lead to one of the most spectacular successes in the CASP9 experiment.
. . .the Rosetta Server, which carried out a large-scale search for the lowest-energy structure using computing power from Rosetta@home volunteers, produced a remarkably accurate model . . . However, the server ranked this model fourth out of the five submissions. The Foldit Void Crushers team correctly selected this near-native model and further improved it by accurately moving the terminal helix, producing the best model for this target of any group and one of the best overall predictions at CASP9 . . . Thus, in a situation where one model out of several is in a near-native conformation, Foldit players can recognize it and improve it to become the best model. Unfortunately for the other Free Modeling targets, there were no similarly outstanding Rosetta Server starting models, so Foldit players simply tunneled to the nearest incorrect local minima.
In the Refinement challenge, where participants take a minimized structure and try to improve its accuracy, the Foldit players had similar problems with starting from structures that had already been minimized by the same tools that they were using. Every change tended to make things look worse. The team improved their performance by reposting one of the structures as a new challenge, this time keeping the parts that were known with confidence to be near-native, while more or less randomizing the other parts to give a greater diversity to the starting points.
And those really are some of the key problems in this work. There are an awful lot of energy minima out there, and which ones you can get to depend crucially on where you start looking. In order to get to a completely different manifold of protein structures, even ones with much better energies, you may well have to go through a zone where you look like you're ruining everything. (And most of the time, you probably are ruining everything - there's no way to know if there's a safe haven on the other side or not).
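For the computationally inclined, that trap can be drawn as a cartoon - my own toy illustration, not anything from the Foldit paper. A greedy downhill search on a one-dimensional double-well energy surface always "tunnels" to the nearest local minimum; if you start from a pre-minimized point in the shallow basin, the only route to the deeper one runs through moves that make the score worse, which a pure score-chasing strategy will never take:

```python
# Toy illustration of getting stuck in a local energy minimum.
# The surface is a tilted quartic double well: a shallow local minimum
# near x ~ +1.4 and a deeper global minimum near x ~ -1.4.

def energy(x):
    return (x**2 - 2)**2 + 0.5 * x

def greedy_descend(x, step=0.01, iters=10000):
    """Move downhill in tiny steps; stop when neither neighbor is lower.
    This can only ever reach the minimum of the basin it starts in."""
    for _ in range(iters):
        best = min((x - step, x, x + step), key=energy)
        if best == x:
            break
        x = best
    return x

# Start in the right-hand basin: you end at the shallow local minimum...
print(greedy_descend(1.0))   # ends near x ~ +1.4
# ...start in the left-hand basin and you find the deeper one instead.
print(greedy_descend(-1.0))  # ends near x ~ -1.4, at lower energy
```

To hop basins you'd have to accept uphill moves for a while (the "ruining everything" phase), which is exactly what the pre-minimized Rosetta starting structures made unrewarding for the Foldit players.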
But this paper also reports the results that are getting the headlines, a structure for the Mason-Pfizer monkey retroviral protease. This is an interesting protein, because although it crystallizes readily (in several different forms), and although the structures of other retroviral proteases are known, no one has been able to solve this one from the available X-ray data. The Foldit players, however, came up with several proposals that fit the data well enough for the structure to finally fall out of the diffraction data. It does have some odd features in its protein loops, different enough from the other proteases for no one to have hit on it before.
And that really is an accomplishment, and the way it was solved (with different players building on the results of others, competing to get the best optimization scores) really is the way Foldit is supposed to work. Their less impressive performance on the CASP9 problems, though, shows that the same protein prediction difficulties apply to Foldit players as apply to the rest of the modeling field. This isn't a magic technique, and Foldit gamers are not going to rampage through the structural biology world solving all the extant problems any time soon. But it's nothing to sneeze at, either.
+ TrackBacks (0) | Category: In Silico | Press Coverage
September 19, 2011
Is it just me, or is this sort of. . .baffling?
GlaxoSmithKline (GSK) today announced that it has formed a long term strategic partnership with McLaren Group. The partnership, which will run initially until 2016, brings together two UK companies focused on innovation and high-tech research.
Well, yes, I suppose it does. But one of them makes drugs, and the other runs Formula 1 races. When you get down to the details, such as they are, you find this sort of thing:
A new state of the art learning facility will also be built as part of the agreement, focused on developing UK engineering skills and processes. Called the 'McLaren GSK Centre for Applied Performance', it will be located at McLaren’s Headquarters in Woking and open in 2013. Employees from both organisations and business partners will be able to use the facility to share ideas and collaborate on joint working projects.
Whatever those might be. I could sit back and make catty remarks for another paragraph or two - it's a temptation - but here's what's behind that impulse: while it's true that both companies are engaged in using technology, they're doing it in very different ways and to different ends. A racing company is working with very fast cars. The general principles of building very fast cars, though, are already known, and the question now is how to make them just a bit faster than the other people's. Testing any ideas and techniques that are developed is also relatively straightforward - you have static testing rigs, you have test tracks, you have numerous Formula 1 races every year, and all of these things give you direct feedback about just how well you're doing. I'm sure that the McLaren people are quite good at taking these results and turning things around quickly - thus all the talk in the press release about their fast, dynamic decision making.
But the drug discovery process is quite different. If we start out trying to make a Whateverase 3A inhibitor for Disease X, there is no assurance at all that such a compound can exist. There's usually not even as much assurance as you'd like that such a compound will do for Disease X what you think that it'll do - witness the clinical failure rates. And the process of finding, developing, and testing such a compound takes years - given all the problems that have to be solved, and the necessity of human trials, it cannot help but take years. The McLaren people are not faced with a ten-to-fifteen year wait before they can get a single car into a single race - nor, once there, do 90% of their cars fail to complete the course.
Let me try for a wider explanation, because this is all coming very close to what I'll call the Andy Grove Fallacy. The single biggest difference between the two types of R&D is this: McLaren is trying to optimize a technology that was discovered and developed by humans. GSK is trying to optimize against one that was not. Really, really not human, not done with human motives or with human understanding in mind. Living systems, I believe, are the only such technology we've ever encountered, and it's something to see. Billions of years of evolutionary tinkering have led to something so complex and so strange that it can make the highest human-designed technology look like something built with sticks. To give Andy Grove a tiny break, the devices we've built in the IT industry (and the software used to run them) are the closest approximations, but they're really not very close, because we made them, and what human ingenuity can make, human ingenuity can understand. The body-temperature water-based molecular nanotechnology that's running us (and every other living thing on the planet) is something else again. And it comes with no documentation at all, other than what we can puzzle out ourselves, a process still very much incomplete.
So no, I don't think that a company that races cars can help GSK out all that much with the fundamental problems of its business. But I haven't seen that state-of-the-art learning facility, which will be ready in only a couple of years. We'll check back in and see how things are going.
+ TrackBacks (0) | Category: Business and Markets
September 16, 2011
I'll be wandering around London today - the med-chem part of my trip has finished up, and I'm taking a little time off before flying back to Boston later this weekend. I've already done some stereotypical London things (spending some quality time in the British Museum and having some very good Indian food), and will no doubt do others. Blogging (and the march of science) resumes on Monday!
+ TrackBacks (0) | Category:
September 15, 2011
A discussion with colleagues recently got me to wondering about this useful (albeit grim) question: what area of drug discovery over the last twenty years would you say has taken up the most resources and returned the least value? I'm thinking more of disease/therapeutic areas, but other nominations are welcome, of course.
My own candidate is the nuclear receptor field, where some of that time and effort was mine. When I think of how enthusiastic I was ten years ago, how impatient I was to get in there and start up a big effort to really understand what was going on, to dig into the details and come up with drug candidates - and then when I think of what happened to the people who actually did that, well, it's food for thought. For those outside the field, a vast amount of effort and treasure was spent trying to work out a lot of insanely complex biology, and well, not much has ever emerged. Things went toward the clinic and never got there. Things went into the clinic and never came back out. Some went all the way to the FDA and were turned down.
So that's my nominee. I ask this question not just to wallow in misery and schadenfreude, but to see if there are some trends that we can spot, so as to avoid such things the next time they come down the chute. Given the state of the industry, the last thing we need is another gigantic sinkhole of time and money, so a bit of early warning would be welcome.
+ TrackBacks (0) | Category: Drug Industry History
Back last year I did a brief post about how much not-so-exotic druglike chemical matter has never been explored. My example was substituting heteroatoms into the steroid nucleus - hard to get much more medicinally active than those, but most of the possible variations have never been made. Structurally they're right next door to things that have been known for decades, but they're largely unexplored (which in many cases is because they're not all that easy to make).
The RSC/SCI symposium called my attention to something in this exact class, abiraterone, a CYP17 inhibitor. This was discovered at the Institute for Cancer Research in London, and after several steps through the development world has ended up with J&J. It was approved by the FDA earlier this year for some varieties of prostate cancer.
So there's an example of a sorta-steroid making it all the way through. If intelligent (and oddly motivated) aliens landed tomorrow and forced me to use their advanced organic synthesis techniques to generate a library of unique structures with high hit rates in drug screens, I think I might ask them if they knew how to scatter basic amines, ethers, sulfonamides and so on in and around the steroid nucleus. I offer that advice free of charge to any readers who might find themselves in a similar situation.
Update: as per the comments, compare Cortistatin A for another, more highly modified steroid nucleus with an aromatic heterocycle hanging off it.
+ TrackBacks (0) | Category: Cancer | Chemical News
September 14, 2011
I'm still at the RSC/SCI symposium in Cambridge, and a talk yesterday by Marta Pineiro-Nuñez gives me a chance to update this post about Eli Lilly's foray into opening up its screening to outside collaboration. That effort has been working away for the last two or three years, and the company is now revealing some details about how it's been going.
The original plan was to allow people to put compounds into a set of Lilly phenotypic screens. No structures would be revealed, and the company would have "first rights of negotiated access" for any interesting hits. They have a new web gateway for the whole thing, since now they've added several target-based screens to the process. As mentioned in the earlier post, they've come up with a universal Material Transfer Agreement to bring the compounds in, but Pineiro-Nuñez said that this was still a bit of a struggle at first. Small companies were pretty open to the idea, she said, but there were some suspicious responses from academia, with a lot of careful digging through the MTA to make sure that they wouldn't be giving away too much.
But things seem to have gotten going pretty well. According to the presentation, Lilly has 252 affiliations in 27 countries. That breaks down as 174 academic partners and 78 small companies. About 42,000 compounds have been accepted for screening - that's after a firewalled computational screen of the structures to eliminate nasty functional groups and the like. About 40% of the submissions fail the suitability screens, but the single biggest reason is lack of structural novelty - too close to marketed drugs, too close to controlled substances, or too close to things that are already in Lilly's files.
Here's a recent overview of the screening results. In the end, 115 structures were requested for disclosure, and 97 of those ended up being shared with Lilly, who still wanted 13 of them after looking them over. And those have (so far) led to two recent signed collaborations, with one more set to go and two others still in negotiations. The compounds certainly aren't instant clinical candidates, but have been interesting enough to put money on. And so far, the initiative is seen as successful, enough to expand it to more assays.
It'll be interesting to see if more companies try this out. It would seem especially suited for unusual proprietary assays that might be hiding behind industrial walls. Having Lilly demonstrate that a model of this sort can actually work in practice should help - congratulations to them for putting the work in to make it happen.
+ TrackBacks (0) | Category: Drug Assays
September 13, 2011
In response to several queries, my talk today went fine, as far as I can tell. I brought in some thoughts from Tyler Cowen's The Great Stagnation, among other sources, and tried to tie some widely scattered thoughts together. And since I've also had some requests from readers for the slides, I'm thinking about turning the thing into a video and just putting it up on YouTube for anyone who wants to see it. Of course, that'll mean that I'll have to come up with *another* talk next time someone invites me to a conference, but still. . .
+ TrackBacks (0) | Category: Blog Housekeeping
I wanted to send people to this 50-year retrospective in J. Med. Chem. It's one of those looks through the literature, trying to see what kinds of compounds have actually been produced by medicinal chemists. The proxy for that set is all the compounds that have appeared in J. Med. Chem. during that time, all 415,284 of them.
The idea is to survey the field from a longer perspective than some of the other papers in this vein, and from a wider perspective than the papers that have looked at marketed drugs or structures reported as being in the clinic. I'm reproducing the plot for the molecular weights of the compounds, since it's an important measure and representative of one of the trends that shows up. The prominent line is the plot of mean values, and a blue square shows that the mean for that period was statistically different than the 5-year period before it (it's red if it wasn't). The lower dashed line is the median. The dotted line, however, is the mean for actual launched drugs in each period with a grey band for the 95% confidence interval around it.
As a whole, the mean molecular weight of a J. Med. Chem. compound has gone up by 25% over the 50-year period, with the steepest increase coming in 1990-1994. "Why, that was the golden age of combichem", some of you might be saying, and so it was. Since that period, though, molecular weights have just increased a small amount, and may now be leveling off. Several other measures show similar trends.
Some interesting variations show up: calculated logP, for example, was just sort of bouncing around until 1985 or so. Then from 1990 on, it started a steep increase, and it's hard to tell if that's leveling off or not even now. At any rate, the clogP of the literature compounds has been higher than that of the launched drugs since the mid-1980s. Another point of interest is the fraction of the molecules with tetrahedral carbons. What you find is that "flatness" in the literature compounds held steady until the early 1990s (by which point it was already disconnected from the launched drugs), but since then it's gotten even worse (and further away from the set of actual drugs). This, as the authors speculate, is surely due to metal-catalyzed couplings taking over the world - you can see the effect right in front of you, and so far, the end is not in sight.
Those two measures are the ones moving the most outside the range of marketed drugs. And despite my shot at early combichem molecules, it's also clear that publication delays mean that some of these things were already happening even before that technique became fashionable (although it certainly revved up the trends). Actually, if you want to know When It Changed in medicinal chemistry, you have to go earlier:
It is worth noting that these trends seemed to accelerate in the mid-1980s, indicating that some change took place in the early 1980s. The most likely explanations for an upward change in the early 1980s (before the age of combinatorial chemistry or high-throughput screening) seem to be advances in molecular biology, i.e., understanding of receptor subtypes leading to concerns about speciﬁcity; target-focused drug design and its corresponding one-property-at-a-time optimization paradigm (possibly exacerbated by structural biology); and improvements in technologies which enabled the synthesis and characterization of more complex molecules.
Target-based drug design, again. I'm really starting to wonder about this whole era. And if you'd told me back in, say, 1991 about these doubts that I'd be having, I'd have been completely dumbfounded. But boy, do I ever have them now. . .
+ TrackBacks (0) | Category: Chemical News | Drug Industry History | Life in the Drug Labs
September 12, 2011
Well, actually, this might not be an anomaly. Medicinal chemists will have heard of the "magic methyl" effect, where small changes can make a big difference in affinity for a drug candidate. This morning I heard an interesting talk by Phil Sanderson of Merck on allosteric Akt inhibitors for cancer. I won't go into all the kinase-ness, although it was definitely worth hearing about. What caught my eye was something he mentioned at the end of the talk. The first compound below was an early screening hit in their work, something that had been in Merck's files since the early 1970s. After a huge amount of work over many years, which you can follow through the literature if you like with a search for "allosteric" and "Akt", they found that four-membered rings were very useful in the structures. Going back to the original structure and adding that same modification to it improved its potency by roughly 100-fold.
One methylene group! You wonder what might have happened if they'd done that early in the project, but as Sanderson correctly noted, no one would have done that (it's synthetically tricky; no one would have put in the time). And they don't have any structural information that seems to explain this effect, he says. So if you're looking for an illustration of what makes medicinal chemistry the wild ride it is, you've got an excellent one here.
+ TrackBacks (0) | Category: Life in the Drug Labs
It seems that the credibility of the scientific literature has been taking a beating recently. This has come about for several reasons, and through several different motivations. I'll get one of the most important out of the way first - politics. While this has been a problem for a long time, there's been a really regrettable tendency in US politics the last few years, a split across broadly left/right lines. Cultural and policy disagreements have led to many on the left claiming the mantle of Dispassionate Endorsement of Settled Science, while others on the right end up complaining that it's nothing of the sort, just political biases given a quick coat of paint. Readers will be able to sort several ongoing controversies into that framework.
Political wrangling keeps adding fuel to the can-we-trust-the-literature argument, but it would still be a big issue without it. Consider the headlines that the work of John Ioannidis draws. And there's the attention being paid to the number of retractions, suspicions of commercial bias in the medical literature, the problems of reproducibility of cutting-edge results, and to round it all off, several well-publicized cases of fraud. No, even after you subtract the political ax-grinding, there's a lot of concern left over (as there should be). There are some big medical and public policy decisions to be made based on what the scientific community has been able to figure out, so the first question to ask is whether we've really figured these things out or not.
A couple of recent articles prompted me to think about all this today. The Economist has a good overview of the Duke cancer biomarker scandal, with attention to the broader issues that it raises. And Ben Goldacre has this piece in The Guardian, highlighting this paper in Nature Neuroscience. It points out that far too many papers in the field are using improper statistics when comparing differences-between-differences. As everyone should realize, you can have a statistically significant effect under Condition A, and at the same time a lack of a statistically significant effect under Condition B on the same system. But that doesn't necessarily mean that the difference between using Condition A versus Condition B is statistically significant. You need to go further (usually ANOVA) to be able to say that. The submission guidelines for Nature Neuroscience itself make this clear, as do the guidelines for plenty of other journals. But it appears that a huge number of authors go right ahead and draw the statistically invalid comparison anyway, which means that the referees and editors aren't catching it, either. This is not the sort of thing that builds confidence.
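The statistical point above can be sketched with some hypothetical numbers (the effect sizes and standard errors here are illustrative, not from the Nature Neuroscience paper). An effect of 25 ± 10 is "significant" and an effect of 10 ± 10 is not, but a direct test of the difference between the two effects comes out nowhere near significant:

```python
import math

def z_to_p(z):
    # Two-sided p-value for a z statistic, using the normal CDF via erf.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical effect estimates (mean +/- standard error) for the same
# readout measured under two conditions:
effect_a, se_a = 25.0, 10.0   # Condition A
effect_b, se_b = 10.0, 10.0   # Condition B

p_a = z_to_p(effect_a / se_a)   # significant (p < 0.05)
p_b = z_to_p(effect_b / se_b)   # not significant

# The fallacy: concluding A and B differ because one test is significant
# and the other is not. The valid approach tests the difference directly,
# with the standard error of a difference of independent estimates:
diff = effect_a - effect_b
se_diff = math.sqrt(se_a**2 + se_b**2)
p_diff = z_to_p(diff / se_diff)  # not significant

print(p_a < 0.05, p_b < 0.05, p_diff < 0.05)
```

The difference-of-differences test fails even though the two individual tests "disagree", which is exactly the trap the paper says so many authors fall into.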
So the questions about the reliability of the literature are going to continue, with things like this to keep everyone slapping their foreheads. One can hope that we'll end up with better, more reliable publications when all this is over. But will it ever really be over?
+ TrackBacks (0) | Category: The Scientific Literature
September 10, 2011
I wanted to mention that I'm going to be attending this med-chem symposium in the UK next week, a joint conference of the Royal Society of Chemistry and the Society for Chemical Industry.
I'll be giving a talk on Tuesday, which will probably be the only one without a single chemical structure in it. They've asked me to talk about the state of drug discovery, which is certainly a topic I've given a lot of thought to. Problem is, everyone else has given a lot of thought to it as well, so the challenge is to come up with something worthwhile to say. I'm dragging in some material from well outside the field, which I think (and hope) is relevant.
Blogging should continue while I'm gone, though. I'll be commenting on some of the things that I hear about at the conference, although I won't be live-blogging anyone's talk or anything. I look forward to meeting some interesting people, and since I've never been to Cambridge (or Oxford, another stop on the trip), that should be worthwhile all by itself. . .
+ TrackBacks (0) | Category: Blog Housekeeping
September 9, 2011
Can this possibly be accurate? There are photos going around of what is purported to be the inside of the Harbin Sixth Pharmaceutical Plant in northeast China, and it's easy to see why people are interested. They look like a Louis Quatorze-themed casino, a style one might call Vegas Versailles - bizarrely, crazily, relentlessly sumptuous. But don't take my word for it. Have a look:
There, as Dan Aykroyd used to say, that wasn't so good now, was it? A little de trop for a state-owned enterprise, hm? You can find more views here, if you need them, but I can assure you that these are representative of the set. My own company is working on a new building; perhaps there's still time for us to hire these folks to do the interior.
When a correspondent sent these to me, my first thought was that this was some sort of Snopesworthy email legend. But perhaps not - Xinhua has picked it up, which makes one think that these shots are either (a) real or (b) something the Chinese government wishes to treat as real. Several stories also quote the weibo (Chinese Twitter/Tumblr-style microblog) of a journalist for state TV, Li Xiaomeng (李小萌), who has apparently also helped spread word of this throughout the Chinese online world.
The only clarification I can find so far is from an AFP story on the matter (and they must have been particularly amused/appalled by all that Sun King styling). This quotes a Beijing business newspaper as confirming the photos' authenticity, although an official of the company has claimed that no, these aren't the offices, but the interior of an "art museum" that was constructed in the same new building. But that doesn't match up with other photos of the museum, apparently, so it's hard to say what's going on.
Except, of course, that some batch of lunatics considers this to be an appropriate use of public funds in Harbin (or anywhere). And that's worth thinking about. The connection with a pharmaceutical plant gives me the opportunity to talk about some things that have come up in conversation with various Chinese co-workers recently. The gigantic construction boom in China is well known. But this building, if it's anything like what it's purported to be, can serve as an illustration of the crazy aspect of the whole business. It looks from the outside as if China, in its attempt to come roaring to the top of the 21st-century league tables, is in serious danger of going off the rails. I get the impression that the government is committed to cranking up the GDP figures by any means necessary, and has decided that construction and infrastructure are the quickest and surest means to that end. Private real estate developers are thrilled to assist in this process, as are the owners of building companies and everyone else connected to the business. Need I add that huge construction contracts are, in every country and in every era, a notoriously easy way to hide kickbacks, payoffs, and corruption of every kind?
So I fear that China's incentives are misaligned. You get what you subsidize, and they're subsidizing a huge wave of construction. But how necessary are all these things? And how well are they being built? Will they all start falling apart ahead of schedule, and all at roughly the same time? And how much is being skimmed off the top during the whole process?
+ TrackBacks (0) | Category: Business and Markets
September 8, 2011
Here's another article in the Guardian that makes some very good points about the way we judge scientific productivity by published papers. My favorite line of all: "To have "written" 800 papers is regarded as something to boast about rather than being rather shameful." I couldn't have put it better, and I couldn't agree more. And this part is just as good:
Not long ago, Imperial College's medicine department were told that their "productivity" target for publications was to "publish three papers per annum including one in a prestigious journal with an impact factor of at least five." The effect of instructions like that is to reduce the quality of science and to demoralise the victims of this sort of mismanagement.
The only people who benefit from the intense pressure to publish are those in the publishing industry.
Working in industry feels like more of a luxury than ever when I hear about such things. We have our own idiotic targets, to be sure - but the ones that really count are hard to argue with: drugs that people will pay us money for. Our customers (patients, insurance companies, what have you) don't care a bit about our welfare, and they have no interest in keeping our good will. But they pay us money anyway, if we have something to offer that's worthwhile. There's nothing like a market to really get you down to reality.
+ TrackBacks (0) | Category: Academia (vs. Industry) | The Scientific Literature
September 7, 2011
We've talked here before about the structural class known as rhodanines - the phrase "polluting the scientific literature" has been used to describe them, since they rather promiscuously light up a lot of drug target assays, and almost never to any useful effect.
Well, guess what? Now there's an even easier way to make them! So says this new paper in the Journal of Organic Chemistry:
5-(Z)-Alkylidene-2-thioxo-1,3-thiazolidin-4-ones (rhodanine derivatives) were prepared by reaction of in situ generated dithiocarbamates with recently reported racemic α-chloro-β,γ-alkenoate esters. This multicomponent sequential transformation performed in one reaction flask represents a general route to this medicinally valuable class of sulfur/nitrogen heterocycles. Using this convergent procedure, we prepared an analogue of the drug epalrestat, an aldose reductase inhibitory rhodanine.
Sequentially linking several different components in one reaction vessel has been studied intensively as a rapid way to increase molecular complexity while avoiding costly and environmentally unfriendly isolation and purification of intermediates.(1-4) Such efficient multicomponent reactions, such as the Ugi reaction, often produce privileged scaffolds of considerable medicinal value. Rhodanines (2-thioxo-1,3-thiazolidin-4-ones) are five-membered ring sulfur/nitrogen heterocycles some of which have antimalarial, antibacterial, antifungal, antiviral, antitumor, anti-inflammatory, or herbicidal activities. . .In conclusion, convergent syntheses of N-alkyl 5-(Z)-alkylidene rhodanine derivatives have been achieved using recently reported racemic α-chloro-β,γ-alkenoate ester building blocks. The formation of these rhodanine derivatives involves a three-step, one-flask protocol that provides quick access to biologically valuable sulfur–nitrogen heterocycles.
Just what we needed. Now it's only going to be a matter of time before someone makes and sells a library of these things, and we can all get to see them again as screening hits in the literature.
+ TrackBacks (0) | Category: Chemical News | Drug Assays
Nature News has a good piece on the 24-hour laboratory. And they're not talking about automation; they're talking about grad students and post-docs staying around all night long. Interestingly, they're focusing on a specific lab to get the details:
But these members of neurosurgeon Alfredo Quiñones-Hinojosa's laboratory are accustomed to being the last out of the building. In a lab where the boss calls you at 6 a.m., schedules Friday evening lab meetings that can stretch past 10 p.m., and routinely expects you to work over Christmas, sticking it out until midnight on a holiday weekend is nothing unusual.
Many labs are renowned for their intense work ethic and long hours. When I set out to profile such a laboratory, I wanted to find out who is drawn to these environments, what it is really like to work there and whether long hours lead to more or better science. I approached eleven laboratories with reputations for being extremely hard-working. Ten principal investigators turned me down, some expressing a fear of being seen as 'slave-drivers'.
Number eleven — Quiñones-Hinojosa — had no such qualms. His work ethic is no secret. . .(He) is gregarious and charming, with an infectious energy and a habit of advertising his humility. But he also knows how intimidating he can be to the people who work for him, and he's not afraid to capitalize on that. In 2007, just two years after he started at Hopkins, he rounded a corner in the cafeteria and saw his lab members sitting at a table, talking and laughing. When they caught sight of him, he says, they stopped, stood up, and went straight back to the lab.
I think that most of us in chemistry have either worked for, or worked near, someone who ran their lab like that. The article makes a point of showing how this professor tries to select people for his group who either like or will put up with it - and as long as everyone knows the score going in, I suppose that I don't have a problem with it. No one's forcing you to go work for Quiñones-Hinojosa, after all (but if you do, he'll certainly force you to work once you're there!) I would personally not make the choice to enter a lab like that one, but others might regard it as a worthwhile trade.
But there's the larger question of whether science has to (or even should) be done that way. As the article goes on to say:
But not everyone agrees that more hours yield more results. Dean Simonton, a psychology researcher at the University of California, Davis, who has studied scientific creativity, says that the pressure for publications, grants and tenure may have created a single-minded, "monastic" culture in science. But some research suggests that highly creative scientists tend to have broader interests and more hobbies than their less creative colleagues, he says. Chemist Stephen Buchwald of the Massachusetts Institute of Technology urges the members of his lab to take a month's holiday every year, and not to think about work when they're gone. "The fact is, I want people to be able to think," he says. "If they're completely beaten down, they're not going to be very creative."
My guess is that we're looking at two different kinds of science here. If you're in an area where there's a huge amount of data to be gathered and processed (as there is with the tumor samples in Quiñones-Hinojosa's lab), then the harder you crank, the more results come out the other end. They know what they have to do, and they've decided to do it twenty hours a day. On the other hand, you can't put in those hours thinking up the next revolutionary idea. "Be creative! Now! And don't stop being creative until midnight! I want to see a new electrophilic bond-forming reaction idea before you hit that door!" It doesn't work.
Robert Root-Bernstein's Discovering goes into some of these questions. It's an odd book that I recommend once in a while (much of it is in the form of fictional conversation), but it brings together a lot of information on scientific discovery and creativity that's very hard to find. A quote in it from J. J. Thomson seems appropriate here:
"If you pay a man a salary for doing research, he and you will want to have something to point to at the end of the year to show that the money has not been wasted. In promising work of the highest class, however, results do not come in this regular fashion, in fact years may pass without any tangible results being obtained, and the position of the paid worker would be very embarrassing and he would naturally take to work on a lower, or at any rate, different plane where he could be sure of getting year by year tangible results which would justify his salary. The position is this: You want this kind of research, but if you pay a man to do it, it will drive him to research of a different kind. The only thing to do is pay him for doing something else and give him enough leisure to do research for the love of it."
So there's room for both kinds of work (or should be). Just make sure that you know if you're getting into a pressure cooker beforehand, and that that's what you want or need to do. And if you're going to try the big creative route, you'd better have some interesting ideas to start from and some mental horsepower to work with, or you could spend the rest of your career wandering around in circles. . .
+ TrackBacks (0) | Category: Who Discovers and Why
September 6, 2011
And the well-known chem-blogger Milkshake knows how to serve it. See his latest post here. It doesn't quite make up for having one's company bought out, having everyone moved and fired and hosed around, and having to go to court for the severance package that you were promised. . .but you have to take your pleasures where you can.
+ TrackBacks (0) | Category: Drug Industry History | Patents and IP
I've written a bit about the struggles to find the biological causes of chronic fatigue syndrome - but perhaps I should shut up? That seems to be the wiser course, given what's reported in this piece from the UK:
The full extent of the campaign of intimidation, attacks and death threats made against scientists by activists who claim researchers are suppressing the real cause of chronic fatigue syndrome is revealed today by the Observer. According to the police, the militants are now considered to be as dangerous and uncompromising as animal rights extremists.
One researcher told the Observer that a woman protester who had turned up at one of his lectures was found to be carrying a knife. Another scientist had to abandon a collaboration with American doctors after being told she risked being shot, while another was punched in the street. All said they had received death threats and vitriolic abuse.
The crime these people have committed, according to the various unhinged activists, is that they're suggesting that there could perhaps be a psychological component to the condition, or even just that the various proposals put forth for a viral cause don't seem to be holding up well. And we jump from that to death threats, harassment, calls for defunding, and accusations of dark deeds underwritten by Evil Pharmaceutical Companies.
That last one is especially weird, as one of the interviewees in the article makes clear. If there were a definite viral cause for chronic fatigue and allied syndromes, we Evil Pharma Scientists would do what we've done so evilly for HIV, hepatitis, and other diseases: come up with drugs to treat people or (better yet) vaccines to try to keep anyone from ever getting the disease again. Dark stuff indeed.
+ TrackBacks (0) | Category: Infectious Diseases | The Dark Side
September 2, 2011
So, are half the interesting new results in the medical/biology/med-chem literature impossible to reproduce? I linked earlier this year to an informal estimate from venture capitalist Bruce Booth, who said that this was his (and others') experience in the business. Now comes a new study from Bayer Pharmaceuticals that helps put some backing behind those numbers.
To mitigate some of the risks of such investments ultimately being wasted, most pharmaceutical companies run in-house target validation programmes. However, validation projects that were started in our company based on exciting published data have often resulted in disillusionment when key data could not be reproduced. Talking to scientists, both in academia and in industry, there seems to be a general impression that many results that are published are hard to reproduce. However, there is an imbalance between this apparently widespread impression and its public recognition. . .
Yes, indeed. The authors looked back at the last four years worth of oncology, women's health, and cardiovascular target validation efforts inside Bayer (this would put it right after they combined with Schering AG of Berlin). They surveyed all the scientists involved in early drug discovery in those areas, and had them tally up the literature results they'd acted on and whether they'd panned out or not. I should note that this is the perfect place to generate such numbers, since the industry scientists are not in it for publication glory, grant applications, or tenure reviews: they're interested in finding drug targets that look like they can be prosecuted, in order to find drugs that could make them money. You may or may not find those to be pure or admirable motives (I have no problem at all with them, personally!), but I think we can all agree that they're direct and understandable ones. And they may be a bit orthogonal to the motives that led to the initial publications. . .so, are they? The results:
"We received input from 23 scientists (heads of laboratories) and collected data from 67 projects, most of them (47) from the field of oncology. This analysis revealed that only in ~20–25% of the projects were the relevant published data completely in line with our in-house findings. In almost two-thirds of the projects, there were inconsistencies between published data and in-house data that either considerably prolonged the duration of the target validation process or, in most cases, resulted in termination of the projects. . ."
So Booth's estimate may actually have been too generous. How does this gap get so wide? The authors suggest a number of plausible reasons: small sample sizes in the original papers, leading to statistical problems, for one. The pressure to publish in academia has to be a huge part of the problem - you get something good, something hot, and you write that stuff up for the best journal you can get it into - right? And it's really only the positive results that you hear about in the literature in general, which can extend so far as (consciously or unconsciously) publishing just on the parts that worked. Or looked like they worked.
But the Bayer team is not alleging fraud - just irreproducibility. And it seems clear that irreproducibility is a bigger problem than a lot of people realize. But that's the way that science works, or is supposed to. When you see some neat new result, your first thought should be "I wonder if that's true?" You may have no particular reason to doubt it, but in an area with as many potential problems as discovery of new drug targets, you don't need any particular reasons. Not all this stuff is real. You have to make every new idea perform the same tricks in front of your own audience, on your own stage under bright lights, before you get too excited.
+ TrackBacks (0) | Category: Academia (vs. Industry) | Drug Assays | Drug Development
September 1, 2011
Several readers sent along this article from the Times of London (via the Ottawa Citizen) on GlaxoSmithKline's current research setup. You can tell that the company is trying to get press for this effort, because otherwise these are the sorts of internal arrangements that would never be in the newspapers. (The direct quotes from the various people in the article are also a clear sign that GSK wants the publicity).
The piece details the three-year cycle of the company's Drug Performance Units (DPUs), which have to come and justify their existence at those intervals. We're just now hitting the first three-year review, and as the article says, not all the DPUs are expected to make it through:
In 2008, the company organized its scientists into small teams, some with just a handful of staff, and set them to work on different diseases. At the time, every one of these drug performance units (DPUs) had to plead its case for a slice of Glaxo’s four-billion-pound research and development budget. Three years on and each of the 38 DPUs is having to plead its case for another dollop of funding to 2014. . .
. . .Such a far-reaching overhaul of a fundamental part of the business has proved painful to achieve. Witty said: “If you look across research and development at Glaxo, I would say we are night-and-day different from where we were three, four, five years ago. It has been a tough period of change and challenge for people in the company. When you go through that period, of course there are moments when morale is challenged and people are worried about what will happen.”
But he said it has been worth the upheaval: “The research and development organization has never been healthier in terms of its performance and in terms of its potential.”
I'm not in a position to say whether he's right or not. One problem (mentioned by an executive in the story) is that three years isn't really long enough to say whether things are working out or not. That might give you a read on the number of preclinical projects, whether that seems to be increasing or not. But that number is notoriously easy to jigger around - just lower the bar a bit, and your productivity problem is solved, on paper. The big question is the quality of those compounds and projects, and that takes a lot more time to evaluate. And then there's the problem that whatever improvement you can actually make in that quality may still not be enough to really affect your clinical failure rates, depending on the therapeutic area.
Is this a sound idea, though? It could be - asking projects and therapeutic areas to justify their existence every so often could keep them from going off the rails and motivate them to produce results. Or, on the other hand, it could motivate them to tell management exactly what they want to hear, whether that corresponds to reality or not. All of these tools can cut in both directions, and I've no idea which way the blades are moving at GSK.
There's another consideration that applies to any new management scheme. How long will GSK give this system? How many three-year cycles will be needed to really say if it's effective, and how many will actually be run? Has any big drug company kept its R&D arrangements stable for as long as nine years, say, in recent history?
+ TrackBacks (0) | Category: Drug Development | Drug Industry History